Understanding AI agents can be challenging, but there's an elegant way to think about them using concepts from functional programming.
Let's explore how AI agents can be modelled as trampolines - a pattern that perfectly captures their iterative, decision-making nature.
This post was inspired by the idea that an AI agent is just a “fancy loop”.
What Makes AI Agents Different from Regular Loops?
At their core, AI agents follow a deceptively simple pattern:
Receive input (user message, tool result, environmental data)
Use accumulated context to decide what to do next
Take an action (respond directly, call a tool, ask for clarification)
Repeat until they have a satisfactory answer
This sounds like a regular loop, but there's a crucial difference: the agent doesn't know ahead of time how many iterations it will need. It might respond immediately to "What's 2+2?" or it might need to call several tools for "What's the weather like in the capital of the country where the tallest building is located?"
The sequence of actions emerges dynamically based on the agent's reasoning at each step. This is where traditional control flow falls short and functional programming offers a better model.
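To make the "fancy loop" idea concrete, here is a minimal sketch of an agent written as a plain loop in Clojure. The helpers llm-call and execute-tool are hypothetical stand-ins for a real LLM client and a tool dispatcher, and the context is assumed to be a vector of messages:
(defn run-agent-loop [initial-context query]
  (loop [context initial-context
         input   query]
    (let [response (llm-call context input)]             ; ask the model what to do next
      (if (:final-response response)
        response                                          ; the agent is satisfied - stop looping
        (recur (conj context response)                    ; remember this reasoning step
               (execute-tool (:tool-call response)))))))  ; feed the tool result back in
The trampoline pattern below keeps exactly this shape, but turns the "keep going" decision into a value that the step function returns.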
Enter the Trampoline Pattern
A trampoline is a functional programming technique where, instead of recursing directly, a function returns a "thunk" (a zero-argument function) representing the next step. The trampoline keeps calling thunks until it gets back a real value:
(defn factorial [n acc]
  (if (= n 0)
    acc                                ; "done" - return final value
    #(factorial (dec n) (* n acc))))   ; "continue" - return thunk

(defn trampoline [f]
  (loop [result f]
    (if (fn? result)
      (recur (result))   ; Call the thunk and continue
      result)))          ; Return the final value
The key insight is that each "bounce" represents a decision point where the computation can either continue (return a thunk) or complete (return a value).
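For example, running the factorial through the trampoline looks like this:
;; The first call returns a thunk; the trampoline keeps calling thunks
;; until the base case finally returns the accumulated value.
(trampoline (factorial 5 1))
;; => 120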
AI Agents as Trampolines
AI agents are remarkably similar to trampolines in several crucial ways:
Unknown Depth: Just as the factorial function doesn't know how deep the recursion will go, the agent doesn't know how many "bounces" (tool calls or reasoning steps) it will need (although in practice most frameworks have a “max iterations” guard; a sketch of that guard follows this list).
Self-Directed: Each bounce represents the LLM making a decision about whether to continue processing or provide a final response.
Stateful: Each bounce carries forward accumulated context from previous interactions - conversation history, tool results, and intermediate reasoning.
Iterative: The process continues until the agent determines it has enough information to provide a satisfactory response.
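That guard fits naturally into the pattern itself. Here is a minimal sketch of a bounded trampoline; the name bounded-trampoline, the max-bounces limit, and the error map are illustrative choices rather than part of the pattern:
;; Like trampoline, but give up after a fixed number of bounces -
;; the same role a "max iterations" setting plays in agent frameworks.
(defn bounded-trampoline [f max-bounces]
  (loop [result  f
         bounces 0]
    (cond
      (not (fn? result))       result                           ; finished normally
      (>= bounces max-bounces) {:error "max iterations reached"}
      :else                    (recur (result) (inc bounces)))))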
A Concrete Example
Let's see how this works with a real example. Imagine a user asks: "What's the weather like where the Eiffel Tower is located?"
(defn agent-step [context input]
  (let [response (llm-call context input)]
    (if (:final-response response)
      response                                           ; "done" - return the final answer
      #(let [updated-context (update-context context response)
             tool-result     (execute-tool (:tool-call response))]
         (agent-step updated-context tool-result)))))    ; "continue" - bounce again
;; The agent's reasoning process:
;; Bounce 1: "I need to find the location of the Eiffel Tower" -> Call location tool
;; Bounce 2: "Now I know it's in Paris. I need weather data" -> Call weather tool
;; Bounce 3: "I have the weather data for Paris" -> Return final response
The beautiful part is that the LLM's reasoning capabilities determine this sequence dynamically. For a simpler query like "What's 2+2?", it would return a final response immediately without any tool calls.
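To make those bounces tangible, the LLM's responses at each step might look roughly like the maps below. The :tool-call and :final-response keys match the sketch above, but the tool names, arguments, and weather text are purely illustrative:
;; Bounce 1: the model decides it needs a tool rather than answering directly.
{:tool-call {:name "find-location" :args {:landmark "Eiffel Tower"}}}

;; Bounce 2: with "Paris" now in context, it asks for the weather.
{:tool-call {:name "get-weather" :args {:city "Paris"}}}

;; Bounce 3: it has what it needs and returns a final value instead of a thunk.
{:final-response "It's currently 18°C and partly cloudy in Paris."}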
Multiple Tool Calls and Complex Reasoning
The trampoline pattern naturally handles complex multi-step reasoning. Consider this query: "Compare the weather in the capitals of France and Germany, and tell me which would be better for a picnic today."
The agent might reason through this sequence:
"I need to identify the capitals: Paris and Berlin"
"I need weather data for Paris" → tool call
"I need weather data for Berlin" → tool call
"Now I can compare and make a recommendation" → final response
Each step builds on the previous context, and the agent decides when it has enough information to provide a complete answer.
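As a rough picture of what that accumulated context might hold (the message shape and weather values here are assumptions of this sketch, not something the pattern dictates), just before the final answer it could look like:
;; Accumulated context after both weather lookups, before the final response.
[{:role "user"      :content "Compare the weather in the capitals of France and Germany..."}
 {:role "assistant" :tool-call {:name "get-weather" :args {:city "Paris"}}}
 {:role "tool"      :content "Paris: 18°C, partly cloudy"}
 {:role "assistant" :tool-call {:name "get-weather" :args {:city "Berlin"}}}
 {:role "tool"      :content "Berlin: 14°C, light rain"}]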
Building a Trampoline Agent
Here's how the core pattern works in practice:
(defn agent-step [context input]
  (let [response (llm-call context input)]
    (if (:final-response response)
      response
      #(agent-step (update-context context response)
                   (execute-tool (:tool-call response))))))

(defn run-agent [query]
  (trampoline (agent-step initial-context query)))
The agent keeps "bouncing" until it decides it has enough information to respond.
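For run-agent to actually run, the helpers it leans on need definitions. Here is a minimal sketch in which the context is just a vector of responses and the tools are stubbed out; llm-call would wrap whatever LLM client you use:
;; Hypothetical helpers assumed by agent-step and run-agent above.
(def initial-context [])                     ; start with an empty vector of messages

(defn update-context [context response]
  (conj context response))                   ; keep every response for later bounces

(defn execute-tool [tool-call]
  (case (:name tool-call)                    ; stubbed tools, purely for illustration
    "find-location" {:tool-result "Paris"}
    "get-weather"   {:tool-result "18°C, partly cloudy"}
    {:tool-result "unknown tool"}))

;; With llm-call wrapping a real model, running the agent is just:
;; (run-agent "What's the weather like where the Eiffel Tower is located?")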
Benefits of the Trampoline Model
This approach provides several key advantages:
Natural Control Flow: The agent decides when to continue and when to stop, just like a real conversation. There's no predetermined script or rigid decision tree.
Debuggable: You can easily trace through the agent's decision-making process by examining the context at each step, making it much easier to understand why an agent made particular choices.
Testable: Each step is conceptually just a function of the current context and input. This makes it straightforward to test individual bounces in isolation.
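Because llm-call is just a function the step invokes, you can stub it and exercise a single bounce deterministically. A minimal sketch using clojure.test, assuming llm-call, agent-step, and initial-context are defined in the current namespace:
(require '[clojure.test :refer [deftest is]])

;; If the stubbed LLM answers immediately, agent-step should return that
;; answer directly rather than another thunk.
(deftest final-response-ends-the-bounce
  (with-redefs [llm-call (fn [_context _input] {:final-response "4"})]
    (is (= {:final-response "4"}
           (agent-step initial-context "What's 2+2?")))))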
Conclusion
The trampoline pattern provides a clean, functional way to model AI agent behaviour that naturally captures their most important characteristics:
Iterative decision-making: Each "bounce" represents a moment where the agent chooses its next action
Dynamic sequences: The agent generates its own sequence of actions based on reasoning, not predetermined logic
Context accumulation: Each step builds on previous interactions and discoveries
Tool integration: Tools become natural waypoints in the agent's journey toward an answer
This approach not only helps in understanding how AI agents work, but also provides a solid foundation for implementing them.
The next time you interact with an AI agent, imagine it bouncing on a trampoline - each bounce a moment of decision, each landing a chance to gather more context, until finally it decides it has everything it needs to give you the perfect answer.