Replies: 1 comment
Oops, seems like it's already the case! https://tanstack.com/ai/latest/docs/tools/tool-architecture#parallel-tool-execution I'll have to find out why it's not doing it in my case... EDIT: Fixed, I just had to write it down in my system.md
For now, the LLM executes tools one at a time. But sometimes it needs to call multiple tools that could be executed in parallel. This would reduce token consumption and avoid back-and-forth requests.
Google describes this here: https://ai.google.dev/gemini-api/docs/function-calling?example=meeting#parallel_function_calling
Ollama describes this here: https://docs.ollama.com/capabilities/tool-calling#parallel-tool-calling
This seems to be supported on Vercel's AI SDK: https://ai-sdk.dev/cookbook/node/call-tools-in-parallel
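The idea can be sketched without any SDK: when the model returns several independent tool calls in one turn, start them all and await them together with `Promise.all` instead of awaiting each one sequentially. The tool names and handlers below are hypothetical placeholders, not part of any of the libraries linked above.

```typescript
// A tool call as the model might emit it: a tool name plus arguments.
type ToolCall = { name: string; args: Record<string, unknown> };

// Hypothetical tool registry; real tools would hit APIs or databases.
const tools: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
  getWeather: async (args) => `sunny in ${args.city}`,
  getTime: async (args) => `12:00 in ${args.city}`,
};

// Sequential version: each call waits for the previous one to finish.
async function executeSequentially(calls: ToolCall[]): Promise<string[]> {
  const results: string[] = [];
  for (const call of calls) {
    results.push(await tools[call.name](call.args));
  }
  return results;
}

// Parallel version: all calls start immediately; Promise.all preserves
// the original order of results, so they can be matched back to calls.
async function executeInParallel(calls: ToolCall[]): Promise<string[]> {
  return Promise.all(calls.map((call) => tools[call.name](call.args)));
}

// Example: one model turn requesting two independent tool calls.
executeInParallel([
  { name: "getWeather", args: { city: "Paris" } },
  { name: "getTime", args: { city: "Tokyo" } },
]).then((results) => console.log(results));
```

Note that this only helps when the tool calls are independent; if one tool's input depends on another's output, the model still has to request them in separate turns.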