Description
In the current implementation of `AgentTool.runAsync(...)`, the runner instance is created directly via the constructor:

```java
Runner runner = new InMemoryRunner(this.agent, toolContext.agentName());
```

This introduces a structural limitation: the code forces a specific runner implementation and does not allow the caller to provide their own `Runner` or override its behaviour.
Why this is problematic

- **No ability to inject alternative `Runner` implementations.** Any attempt to replace `InMemoryRunner` (e.g., with a distributed runner, persistent runner, mocked test runner, or custom lifecycle-managed runner) becomes impossible. The hard-coded `new` eliminates extensibility.
- **Loss of execution context.** The runner manages sessions and state. By creating it directly inside the tool, the execution context may diverge from the tool context, especially when session/state lifecycles are coordinated elsewhere in the system. This can lead to unexpected state resets or inconsistent flow.
- **Unexpected or non-deterministic behaviour.** When a caller relies on a specific execution model but the tool enforces `InMemoryRunner`, results may differ from the expected environment (e.g., summarization rules, session persistence, or event pipelines).
- **Violation of dependency-injection principles.** The tool is not inversion-of-control friendly, which makes it harder to integrate into larger orchestrators or frameworks.
Proposed solution
Two possible fixes:
A. Inject the runner instance externally

E.g. pass a `Runner` via the constructor, or provide a `RunnerFactory`:

```java
public AgentTool(BaseAgent agent, boolean skipSummarization, RunnerFactory factory) {
  this.agent = agent;
  this.skipSummarization = skipSummarization;
  this.runnerFactory = factory;
}
```

This keeps the tool composable, testable, and compatible with custom infrastructures.
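To make the idea concrete, here is a minimal, self-contained sketch of the factory-based design. Note that `RunnerFactory`, the simplified `Runner` interface, and the method shapes are illustrative assumptions for this sketch, not the actual ADK API:

```java
// Sketch of the proposed injection point. RunnerFactory, Runner, and the
// method signatures below are hypothetical stand-ins, not the real ADK types.
public class AgentToolSketch {

    // Hypothetical factory: given an agent name, produce a Runner.
    @FunctionalInterface
    interface RunnerFactory {
        Runner create(String agentName);
    }

    // Minimal stand-in for the real Runner abstraction.
    @FunctionalInterface
    interface Runner {
        String run(String input);
    }

    private final RunnerFactory runnerFactory;

    AgentToolSketch(RunnerFactory factory) {
        this.runnerFactory = factory;
    }

    String runAsync(String agentName, String input) {
        // The tool no longer decides which Runner implementation is used;
        // the caller controls it via the injected factory.
        Runner runner = runnerFactory.create(agentName);
        return runner.run(input);
    }

    public static void main(String[] args) {
        // Callers can now inject a test double instead of InMemoryRunner.
        AgentToolSketch tool = new AgentToolSketch(name -> input -> "mock:" + input);
        System.out.println(tool.runAsync("my-agent", "hello")); // prints "mock:hello"
    }
}
```

With this shape, a production deployment could supply a factory that builds its distributed or persistent runner, while tests supply a one-line lambda, which is exactly the extensibility the hard-coded `new` currently blocks.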
B. If only a single atomic action is required, avoid the full `Runner` and call the LLM directly

If the intention is to execute a one-off LLM request without full agent orchestration, a full `Runner` is unnecessary overhead. A direct LLM call would be more predictable and cheaper, and would avoid unwanted runner logic (session creation, events, summarization pipelines, etc.).
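For contrast, option B reduces to a single call against the model client. The `LlmClient` interface and `generate(...)` method below are hypothetical placeholders standing in for whatever direct model API is available:

```java
// Contrast sketch for option B: a one-off model call with no runner machinery.
// LlmClient and generate(...) are hypothetical placeholders, not ADK types.
public class DirectCallSketch {

    @FunctionalInterface
    interface LlmClient {
        String generate(String prompt);
    }

    // One atomic request: no session creation, no event pipeline,
    // no summarization pass -- just prompt in, completion out.
    static String runOnce(LlmClient client, String prompt) {
        return client.generate(prompt);
    }

    public static void main(String[] args) {
        // A stub client makes the reduced surface area obvious.
        LlmClient stub = prompt -> "echo:" + prompt;
        System.out.println(runOnce(stub, "summarize this")); // prints "echo:summarize this"
    }
}
```

The point of the sketch is the surface area: everything the runner would otherwise set up (sessions, events, summarization) simply does not exist in this path.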
Please review
If the current implementation has hidden assumptions that require `InMemoryRunner` specifically, or constraints that justify not injecting it, please note them and this issue can be adjusted accordingly. Otherwise, replacing the direct constructor call with a proper injection mechanism will make the system more modular and avoid context inconsistencies in the future.