Feat: Add external tool support to ChatAgent & Refactor #830
Conversation
Thanks @WHALEEYE! Does this agent need structured output if its target is just to return the tool calling requests from external tools?
Also, for now, if external tools are added (or both internal and external tools are added), this agent would not have any content in the response; does this make sense?
Could you also add some tests for this agent? Thanks!
I'm doing this mainly because we may further integrate external tool support directly into the ChatAgent.
Yes, if the external tools are called, then the response would not have any content.
Thanks @WHALEEYE, left some comments below; overall LGTM.
Thanks @WHALEEYE, left some comments.
```diff
- self.is_tools_added()
- and isinstance(response, ChatCompletion)
- and response.choices[0].message.tool_calls is not None
+ not self.is_tools_added()
+ or not isinstance(response, ChatCompletion)
+ or response.choices[0].message.tool_calls is None
```
If the user sets `tool_choice="required"` in the LLM config, then tools will always be added, which would lead to an infinite `while` loop.
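The failure mode described above can be sketched as follows. This is a hypothetical simplification, not the actual CAMEL source: a stub model stands in for an LLM backend configured with `tool_choice="required"`, which emits tool calls on every response, so the loop's exit condition can never hold.

```python
# Hypothetical sketch of the infinite-loop concern (not the real CAMEL code).
class ForcedToolModel:
    """Stub standing in for a backend with tool_choice="required"."""

    def run(self, messages):
        # Always emits a tool call, mimicking the forced-tool setting.
        return {"tool_calls": [{"name": "search", "args": {}}]}


def step(model, max_iters=5):
    """Simplified stepping loop; max_iters guards this demo against spinning."""
    messages, iterations = [], 0
    while iterations < max_iters:
        response = model.run(messages)
        if not response["tool_calls"]:
            break  # unreachable when tool calls are forced -> infinite loop
        messages.append(response)
        iterations += 1
    return iterations


print(step(ForcedToolModel()))  # exhausts the guard: prints 5
```

Without the `max_iters` guard, the real loop would never reach its `break`.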
I think it's the desired behavior in the current design. Actually, the `tool_choice` config should be managed by the agent rather than the user.
Hey @dandansamax, since we currently support `tool_choice` in the LLM parameters, we need to make sure this kind of infinite loop cannot happen in ChatAgent. I discussed this with @WHALEEYE; we can handle `tool_choice` separately when further refactoring ChatAgent.
Generally great, but there are many legacy problems in the old code. We should open new issues to tackle them.
```python
# Format messages and get the token number
openai_messages: Optional[List[OpenAIMessage]]

# Check if token has exceeded
try:
    openai_messages, num_tokens = self.memory.get_context()
except RuntimeError as e:
```
Not related to this PR: why treat every `RuntimeError` as a step-token-exceeded error?
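One way to address this concern is a dedicated exception subclass so the handler stops swallowing unrelated runtime errors. A minimal sketch, assuming a hypothetical `TokenLimitExceededError` (the name is an assumption, not the real CAMEL API):

```python
# Illustration only: a narrow exception type keeps "token budget exceeded"
# distinct from every other RuntimeError the call might raise.
class TokenLimitExceededError(RuntimeError):
    """Raised only when the context exceeds the token budget."""


def get_context(num_tokens, limit=4096):
    # Stand-in for memory.get_context(); raises the narrow error on overflow.
    if num_tokens > limit:
        raise TokenLimitExceededError(f"{num_tokens} tokens > limit {limit}")
    return "messages", num_tokens


try:
    get_context(10_000)
    outcome = "ok"
except TokenLimitExceededError:
    outcome = "token_exceeded"  # only the overflow failure mode lands here
print(outcome)  # prints: token_exceeded
```

Because the subclass still derives from `RuntimeError`, existing broad handlers keep working during a migration.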
Let's open up a new issue for this
```python
base_message_item.content = str(info['tool_calls'][0].result)

# Normal function calling
tool_call_records.append(
    await self._step_tool_call_and_update_async(response)
)
```
The `async` implementation is meaningless: we are only using the sequential behavior. Please open an issue to remove all the duplicated async code.
After discussing with @dandansamax, I think we'd better do some refactoring to the whole ChatAgent.
LGTM. Thanks
As discussed, some enhancements will be implemented in another refactor PR by @WHALEEYE.
Description
Add a set of tools named `external_tools` into ChatAgent, allowing users to directly get tool calling requests for a certain set of external tools. This will also refactor the `step()` and `step_async()` functions we currently have.
Motivation and Context
This agent can facilitate the integration of CAMEL into CRAB.
We'll see if there's any workaround on CRAB's side in the future.
This will also close #894.
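The split between internal and external tools described above can be sketched as follows. This is a hypothetical illustration; the function and key names (`route_tool_call`, `external_tool_request`) are assumptions, not the merged CAMEL API. Internal tools are executed by the agent itself, while calls to external tools are returned to the caller (e.g. CRAB) as unexecuted tool calling requests.

```python
# Hypothetical sketch of the internal/external tool routing this PR describes.
def route_tool_call(name, args, internal_tools, external_tool_names):
    if name in external_tool_names:
        # External: do not execute; surface the request to the caller.
        return {"external_tool_request": {"name": name, "args": args}}
    # Internal: execute directly and return the result.
    return {"result": internal_tools[name](**args)}


internal = {"add": lambda a, b: a + b}
external = {"web_search"}

print(route_tool_call("add", {"a": 1, "b": 2}, internal, external))
# -> {'result': 3}
print(route_tool_call("web_search", {"q": "camel"}, internal, external))
# -> {'external_tool_request': {'name': 'web_search', 'args': {'q': 'camel'}}}
```

This also shows why, as discussed above, a response consisting only of external tool calls carries no content: the agent hands back the request instead of a tool result.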