
Conversation

@BouquetAntoine

Add experimental configuration to enable DCP for specific sub-agents based on system prompt pattern matching.

Key changes:

  • Add experimental.subAgents configuration section
  • Allow per-sub-agent configuration of prunable tools
  • Support strategy and tool overrides per sub-agent
  • Detect sub-agent type from system prompt patterns
  • Apply DCP only when enabled and patterns match

This feature allows users to selectively enable context pruning for long-running sub-agents that would benefit from reduced context size.
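
For illustration, a minimal TypeScript sketch of the intended config shape. Only `experimental.subAgents`, per-sub-agent prunable tools, strategy/tool overrides, and system-prompt pattern matching are described above; the exact field names here are assumptions, not the final schema:

```ts
// Sketch only: illustrative shape for the experimental.subAgents section.
// Field names other than "enabled" are assumptions, not the actual schema.
interface SubAgentDcpConfig {
  /** Pattern matched against the sub-agent's system prompt. */
  systemPromptPattern: string;
  /** Optional per-sub-agent overrides. */
  strategy?: "conservative" | "aggressive";
  prunableTools?: string[];
}

interface ExperimentalConfig {
  subAgents: {
    /** DCP stays off in sub-agents unless explicitly enabled. */
    enabled: boolean;
    agents: SubAgentDcpConfig[];
  };
}

const example: ExperimentalConfig = {
  subAgents: {
    enabled: false, // default: DCP disabled in sub-agents
    agents: [
      {
        systemPromptPattern: "You are a research agent",
        strategy: "conservative",
        prunableTools: ["read", "grep", "glob"],
      },
    ],
  },
};
```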

Code not tried yet, but DCP for sub-agents would be a really useful feature; sometimes we don't really care if the sub-agent summary is perfect, and we need the precious extra context DCP offers. It can be disabled by default, but it should be configurable, even if experimental with a warning.

@Tarquinen
Collaborator

The main issue I have with allowing DCP into subagents is that if any DCP tools trigger at the end of the model's response (which they love to do), they will override the information the subagent sends back to its parent agent, making the entire subagent session useless and confusing the parent session. This happens because subagents send only the very last message back up to the parent session.
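
To make the failure mode concrete, a minimal TypeScript sketch of the behaviour described above (the types and function are stand-ins, not OpenCode's actual code):

```ts
// Stand-in types; not OpenCode's real message model.
type Part = { type: "text"; text: string } | { type: "tool-call"; name: string };
type Message = { role: "user" | "assistant"; parts: Part[] };

// Per the behaviour above: only the text of the last assistant message
// is sent up to the parent session.
function subagentResult(session: Message[]): string {
  const last = [...session].reverse().find((m) => m.role === "assistant");
  const text = last?.parts.find(
    (p): p is Extract<Part, { type: "text" }> => p.type === "text",
  );
  // If the model ended on a DCP tool call (extract/discard), the last
  // assistant message may carry no useful text part, so the parent
  // receives nothing useful instead of the summary.
  return text?.text ?? "";
}
```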

As far as I can tell, your code doesn't handle this core issue. What do you think?

@BouquetAntoine
Author

BouquetAntoine commented Jan 20, 2026

Interesting, I'm sure we can find a workaround for that.
What you're saying is that OpenCode subagents don't use a dedicated tool like "completetask" to send their summary to the parent, but instead just finish the conversation? We can probably find a way to prevent DCP tools after the completion message, or if that's impossible by design, I guess it wouldn't be that hard to push a fix for it into OpenCode. I'll also check that tomorrow 😊 Thanks for your answers!

@Tarquinen
Collaborator

Yeah, there's no tool or anything; it's a pretty simple system. As far as I know, subagents just send the text part of the last assistant message up to the parent session before going idle.

@BouquetAntoine
Author

Maybe the correct implementation would be to change the current behaviour of DCP: instead of prompting prunable-tools after each message, we could do it before each message.

That would prevent any tool from being called after the completion message. It would also fix the behaviour we sometimes get in the main agent: completion message -> DCP -> "...The message is a context management reminder...". (It would also avoid an extra completion request, with cache invalidation, at the end of each session = bucks saved.)

It might also be possible to just prompt the sub-agent (in context info) to avoid using extract/discard for the final completion?

Otherwise, the second option is to enforce a completion tool at the subagent level in OpenCode, but less capable models can have trouble doing that, even with an auto reminder...

What do you think?

@Tarquinen
Collaborator

pruneable-tools is just a list of tools the LLM has used earlier in the session that are available for pruning; it looks something like:

1. read: some-file.txt
2. glob: some-pattern
3. grep: something
4. read: another-file.txt

The model uses this list as reference material for the extract/discard tools, so it can pick what to prune. Changing where you inject that list will not prevent the model from using extract/discard tools. It also wouldn't affect the other issues you mentioned; the second of those was fixed recently when the injection system moved to assistant-role messages.

Prompting the subagent is not a good enough solution, models ignore prompts all the time and there's too big a risk here as it could ruin all the information in your parent session.

Honestly I'm not really sure how to solve this, let me know if you have any other ideas.

@BouquetAntoine
Author

Hey @Tarquinen! 👋
Dug deep into this and tried a DCP-only solution (no OpenCode changes needed): cb408fa

You were right. When the LLM calls discard/extract in the same message as its final summary, it creates a finish: "tool-calls" that keeps the loop running. The new message (or the lack of text in it) becomes the "result" sent to the parent instead of the actual summary.
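
Roughly what that loop looks like (a sketch under assumed names; step and runTools aren't actual OpenCode functions):

```ts
type StepResult =
  | { finish: "stop"; text: string }
  | { finish: "tool-calls"; text?: string; toolCalls: string[] };

// Sketch of the agent-loop behaviour described above.
async function runSubagent(
  step: () => Promise<StepResult>,
  runTools: (calls: string[]) => Promise<void>,
): Promise<string> {
  while (true) {
    const res = await step();
    if (res.finish === "tool-calls") {
      // A trailing discard/extract lands here: the loop runs one more turn,
      // and that later turn's text (possibly empty) replaces the summary.
      await runTools(res.toolCalls);
      continue;
    }
    return res.text; // this is what the parent session receives
  }
}
```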

I think the best we can do is:

  1. Opt-in config per sub-agent type - DCP stays disabled by default in sub-agents (experimental.subAgents.enabled = false). You can selectively enable it by matching system prompt patterns.
  2. Dedicated system prompt - Critical instructions explaining that the final message MUST be text only, never tool calls.
  3. Contextual warning - Add specific instructions to the <prunable-tools> wrapper for sub-agents, reminding on every injection: "If you're completing your task → NO pruning. If you still have work to do → OK." (see the sketch below)
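
Something like this for point 3 (illustrative only, not the actual DCP injection code):

```ts
// Sketch of wrapping the injected prunable-tools list with a
// sub-agent-specific warning on every injection. Names are illustrative.
function renderPrunableTools(entries: string[], isSubAgent: boolean): string {
  const warning = isSubAgent
    ? "If you're completing your task → NO pruning. If you still have work to do → OK.\n"
    : "";
  const list = entries.map((e, i) => `${i + 1}. ${e}`).join("\n");
  return `<prunable-tools>\n${warning}${list}\n</prunable-tools>`;
}
```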

If aggressive prompting turns out to be unreliable in practice, the real fix would be on OpenCode's side: add a task_complete tool that sub-agents MUST call to finish, with the summary as a parameter. No more ambiguity about "what's the final message". But that requires changes in OpenCode's task.ts.
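
For reference, such a tool could be declared along these lines (hypothetical: task_complete doesn't exist in OpenCode today, and this shape is only an illustration of the idea):

```ts
// Hypothetical sketch: task_complete does not exist in OpenCode today.
// Shape follows the idea above: the summary is a required parameter.
const taskComplete = {
  name: "task_complete",
  description:
    "Finish the sub-agent task and report the result to the parent session.",
  parameters: {
    type: "object" as const,
    properties: {
      summary: {
        type: "string" as const,
        description: "Final summary returned to the parent session.",
      },
    },
    required: ["summary"],
  },
};
```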

Makes sense to keep this under experimental since reliability really depends on model prompt adherence. Also worth noting that in some workflows the sub-agent summary isn't critical anyway, so having this as an option lets people choose based on their actual use case.

