The LLM decides WHICH tool to use and writes a JSON request. Your code actually runs the tool, and the LLM then turns the result into a human-readable answer.
⚡ Key idea (most beginners get this wrong)
The LLM CANNOT actually call APIs or browse the internet. It only outputs JSON telling your code what to run. Your backend runs the tool and passes the result back.
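A minimal sketch of that division of labor. The `llm_output` string and the `get_weather` stub are hypothetical stand-ins (a real model produces the JSON; a real backend calls a weather API), but the key point holds: the model only emits text, and your code does the running.

```python
import json

# Hypothetical raw text emitted by the LLM -- it is just JSON, not an API call.
llm_output = '{"tool": "get_weather", "arguments": {"city": "Mumbai"}}'

def get_weather(city: str) -> dict:
    # Stand-in for the real weather API call that YOUR backend makes.
    return {"city": city, "temp_c": 31, "condition": "Haze"}

# Your code maps tool names to real functions...
TOOLS = {"get_weather": get_weather}

# ...parses the LLM's JSON, and actually runs the tool.
call = json.loads(llm_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)  # → {'city': 'Mumbai', 'temp_c': 31, 'condition': 'Haze'}
```

The result dict is then passed back to the LLM as plain text so it can write the final answer.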
🧰 Agent's toolbox (always visible to the LLM)
- `get_weather` (city: string): Get current weather for a city
- `convert_currency` (amount: number, from: string, to: string): Convert between currencies
- `search_flights` (from: string, to: string, date: string): Search flights between two cities
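"Always visible" means these descriptions are serialized and sent to the model with every request. One common shape (field names vary by provider; this sketch follows the widespread JSON Schema convention) looks like:

```python
# Sketch of the toolbox as it might be sent to the LLM on every request.
# The exact envelope differs between providers; the JSON Schema "parameters"
# style shown here is a common convention, not any one vendor's exact API.
tools = [
    {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
    {
        "name": "convert_currency",
        "description": "Convert between currencies",
        "parameters": {
            "type": "object",
            "properties": {
                "amount": {"type": "number"},
                "from": {"type": "string"},
                "to": {"type": "string"},
            },
            "required": ["amount", "from", "to"],
        },
    },
    {
        "name": "search_flights",
        "description": "Search flights between two cities",
        "parameters": {
            "type": "object",
            "properties": {
                "from": {"type": "string"},
                "to": {"type": "string"},
                "date": {"type": "string"},
            },
            "required": ["from", "to", "date"],
        },
    },
]
print([t["name"] for t in tools])  # → ['get_weather', 'convert_currency', 'search_flights']
```

The `description` strings are what the model reads when deciding which tool fits the user's question, so writing them clearly matters as much as the schemas.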
🔄 Pipeline — current query: “What's the weather in Mumbai?”
1. 👤 User: asks a question
2. 🧠 LLM: reads the toolbox + the query
3. 📝 LLM: outputs a tool call (JSON)
4. ⚙️ Your code: runs the tool
5. 📦 External tool: returns a result
6. 💬 LLM: writes the final answer