Understanding AI agents can be challenging, but there's an elegant way to think about them using concepts from functional programming.
Let's explore how AI agents can be modelled as trampolines - a pattern that perfectly captures their iterative, decision-making nature.
This post was inspired by the idea that an AI agent is just a “fancy loop”.
What Makes AI Agents Tick?
At their core, AI agents follow a simple but powerful pattern:
1. Receive input (user message, tool result, etc.)
2. Use accumulated context to decide what to do next
3. Take an action (respond, call a tool, ask for clarification)
4. Repeat until they have a final answer
This sounds like a loop, but there's a crucial twist: the agent doesn't know ahead of time how many iterations it will need. It might respond immediately, or it might need to call several tools first. The sequence of actions emerges dynamically based on the agent's decisions.
Enter The Trampoline
A trampoline is a functional programming technique originally designed to avoid stack overflow in recursive functions. Instead of calling functions directly, you return a "thunk" (a zero-argument function). The trampoline keeps calling thunks until it gets a real value:
(defn factorial [n acc]
  (if (= n 0)
    acc                              ; "done" - return a value
    #(factorial (dec n) (* n acc)))) ; "continue" - return a thunk

;; Note: this shadows clojure.core/trampoline, which works the same
;; way; we define our own to show how simple the mechanism is.
(defn trampoline [f]
  (loop [result f]
    (if (fn? result)
      (recur (result)) ; Call the thunk and continue
      result)))        ; Return the final value

(trampoline (factorial 5 1)) ; => 120
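To see the stack-safety benefit concretely, here's a quick sketch: a million-step countdown runs in constant stack space, because each thunk returns before the next one is called.

;; Plain recursion this deep would throw StackOverflowError;
;; the trampolined version just keeps bouncing.
(defn countdown [n]
  (if (zero? n)
    :done
    #(countdown (dec n))))

(trampoline (countdown 1000000)) ; => :done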
The Perfect Analogy
AI agents are remarkably similar to trampolines:
Unknown depth: The agent doesn't know how many "bounces" (tool calls) it needs
Self-directed: Each bounce (LLM call) decides whether to continue or stop
Stateful: Each bounce carries forward accumulated context
Iterative: The process continues until the agent decides it's done
Let's build a simple AI agent using this pattern.
A Basic AI Agent Implementation
Here's a minimal but complete AI agent implementation using the trampoline pattern:
;; Dependencies
(require '[clj-http.client :as http]
         '[cheshire.core :as json])

;; Agent state - simplified to just track messages
(defrecord AgentContext [messages])

(def initial-context (->AgentContext []))
;; Tool implementations - each one computes and returns a string result
(def tools
  {:weather
   (fn [params]
     (let [temp (+ 60 (rand-int 30))]
       (format "Weather in %s: %s, %d°F"
               (:location params)
               (rand-nth ["Sunny" "Cloudy" "Rainy"])
               temp)))

   :search
   (fn [params]
     (format "Found 3 results for '%s': Wikipedia article, Recent news, Expert guide"
             (:query params)))

   :calculator
   (fn [params]
     ;; NOTE: eval'ing model-supplied input is acceptable in a demo,
     ;; but never do this with untrusted input in production
     (try
       (let [result (eval (read-string (:expression params)))]
         (format "%s = %s" (:expression params) result))
       (catch Exception _ "Error: Invalid expression")))})
;; LLM integration using structured (JSON) output
(defn call-llm [messages]
  (let [api-key (System/getenv "OPENAI_API_KEY")
        ;; Ask for JSON so the response is reliable to parse
        system-msg {:role "system"
                    :content "You are an AI assistant. Respond with JSON:
For final answers: {\"type\": \"respond\", \"content\": \"your response\"}
For tool use: {\"type\": \"tool\", \"name\": \"weather\", \"args\": {\"location\": \"Tokyo\"}}
Available tools: weather, search, calculator"}
        response (http/post "https://api.openai.com/v1/chat/completions"
                            {:headers {"Authorization" (str "Bearer " api-key)}
                             :content-type :json
                             :body (json/generate-string
                                    {:model "gpt-3.5-turbo"
                                     :messages (concat [system-msg] messages)
                                     :temperature 0.1
                                     ;; Force JSON output
                                     :response_format {:type "json_object"}})})
        ;; parse-string needs `true` to keywordise keys (:choices, :message, ...)
        content (-> response :body (json/parse-string true)
                    :choices first :message :content)]
    (json/parse-string content true)))
;; The core agent logic - this is where the trampoline pattern shines
(defn agent-step
  ;; Entry point: add the user's input to the conversation, then decide
  ([context input]
   (agent-step (update context :messages conj
                       {:role "user" :content input})))
  ;; Decision step: ask the LLM what to do with the current context
  ([context]
   (let [decision (call-llm (:messages context))]
     (case (:type decision)
       ;; Final answer - return a result map (stops trampoline)
       "respond"
       {:response (:content decision)
        :context  (update context :messages conj
                          {:role "assistant"
                           :content (:content decision)})}

       ;; Tool needed - return a thunk (continues trampoline)
       "tool"
       (let [tool-fn (tools (keyword (:name decision)))
             result  (tool-fn (:args decision))
             ;; Record the tool result in the conversation
             updated-context (update context :messages conj
                                     {:role "function"
                                      :name (:name decision)
                                      :content result})]
         ;; Return a thunk - this is the key pattern!
         ;; The trampoline will call this function next
         #(agent-step updated-context))))))
;; Simple trampoline implementation (same as in the article intro)
(defn trampoline [f]
  (loop [current f]
    (if (fn? current)
      (recur (current)) ; Keep bouncing
      current)))        ; Done - return the final value
;; Runner that shows each bounce as it happens
(defn run-agent [input]
  (println (str "\nUser: " input))
  ;; An instrumented trampoline: the same bounce loop as `trampoline`,
  ;; but counting iterations so we can watch the agent work
  (loop [current (agent-step initial-context input)
         bounces 1]
    (if (fn? current)
      ;; Still a thunk: a tool was called, so bounce again
      (do (println (str "Bounce " bounces ": Calling tool..."))
          (recur (current) (inc bounces)))
      ;; A value: the agent has decided to respond
      (do (println (str "Agent: " (:response current)))
          (println (str "  (Completed in " bounces " steps)"))
          current))))
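With everything in place, a run looks like this (a hypothetical transcript; the weather tool is randomised, so the details will differ):

(run-agent "What's the weather in Tokyo?")
;; User: What's the weather in Tokyo?
;; Bounce 1: Calling tool...
;; Agent: It's currently Sunny and 72°F in Tokyo.
;;   (Completed in 2 steps)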
The key point is that the LLM itself drives the control flow, using its reasoning capabilities to:
Understand when tools are needed vs. when it can respond directly
Craft appropriate tool parameters (extracting location names, search queries, etc.)
Synthesise information from multiple tool calls into coherent responses
Maintain conversational context across multiple interaction steps
Multiple Tool Calls
The trampoline pattern naturally handles multiple tool calls. With a real LLM, the agent can intelligently chain tool calls based on its reasoning:
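For example, a request that touches two tools might play out like this (a hypothetical transcript; the exact wording comes from the model):

(run-agent "What's the weather in Paris, and find me some news about the city?")
;; User: What's the weather in Paris, and find me some news about the city?
;; Bounce 1: Calling tool...   <- weather
;; Bounce 2: Calling tool...   <- search
;; Agent: It's Cloudy and 68°F in Paris. Recent coverage includes a
;;        Wikipedia article, recent news, and an expert guide.
;;   (Completed in 3 steps)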
The beautiful part is that the LLM decides the sequence of actions based on its understanding of the request. It might:
Recognise that both weather and search information are needed
Call the weather tool first
Then call the search tool
Finally synthesise both results into a helpful response
Why This Pattern Works So Well
The trampoline pattern is perfect for AI agents because:
1. Natural Control Flow
The agent decides when to continue and when to stop, just like a real conversation. There's no predetermined script.
2. Composable and Extensible
Adding new tools is trivial - just add them to the tools map. The trampoline pattern handles the rest.
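For instance, a hypothetical :time tool is a one-entry addition (the system prompt's tool list would also need to mention it):

;; Add a :time tool to the existing map
(def tools
  (assoc tools
         :time (fn [_] (str "Current time: " (java.util.Date.)))))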
3. Debuggable
You can easily trace through the agent's decision-making process by examining the context at each step.
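For example, the whole conversation lives in the returned context, so the decision trail is just data (a sketch using run-agent from above):

(let [result (run-agent "What is 2 + 3?")]
  ;; Print every message the agent saw or produced
  (doseq [m (:messages (:context result))]
    (println (:role m) "->" (:content m))))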
4. Testable
With the LLM call stubbed out, each step is a deterministic function of its context and input, making testing straightforward, as the sketch below shows.
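A minimal sketch, stubbing call-llm with with-redefs so the step runs with no network access, no API key, and no randomness:

(require '[clojure.test :refer [deftest is]])

(deftest respond-directly-test
  (with-redefs [call-llm (fn [_] {:type "respond" :content "Hello!"})]
    (is (= "Hello!" (:response (agent-step initial-context "Hi"))))))

;; (respond-directly-test) ; run it at the REPL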
Connection to Reactive Programming
The trampoline pattern for AI agents naturally connects to reactive programming, where you model computation as streams of events. AI agents are essentially reactive systems that process streams of inputs (user messages, tool results) and produce streams of outputs (responses, tool calls).
(require '[clojure.core.async :as async])

;; AI agent as a reactive stream processor
;; (blocking HTTP inside a go block is fine for a sketch, but real
;;  code should move I/O onto async/thread)
(defn reactive-agent [input-chan output-chan]
  (async/go-loop [context initial-context]
    (when-let [input (async/<! input-chan)]
      (let [result (trampoline (agent-step context input))]
        (async/>! output-chan (:response result))
        ;; Carry the accumulated conversation into the next turn
        (recur (:context result))))))
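Wiring it up takes a pair of channels (a sketch; >!! and <!! block the calling thread, so this is REPL-style usage):

(def in-chan  (async/chan))
(def out-chan (async/chan))

(reactive-agent in-chan out-chan)

(async/>!! in-chan "What is 6 * 7?")
(println (async/<!! out-chan)) ; prints the agent's answer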
This combination gives us Functional Reactive AI Agents that use:
Reactive streams for event processing
Trampolines for reasoning steps
Functions for transformations
Immutable state for context
Conclusion
Thinking of AI agents as trampolines provides a clean, functional way to model their behaviour. The pattern naturally captures:
Iterative decision-making: Each "bounce" represents a decision point
Dynamic sequences: The agent generates its own sequence of actions
Context accumulation: Each step builds on previous interactions
Tool integration: Tools become natural waypoints in the conversation
This approach not only helps in understanding how AI agents work, but also provides a solid foundation for implementing them in functional programming languages. The trampoline pattern's emphasis on pure functions, explicit state, and composability makes it an excellent fit for building reliable, testable AI systems.
The next time you interact with an AI agent, imagine it bouncing on a trampoline - each bounce a moment of decision, each landing a chance to gather more context, until finally it decides to stop bouncing and give you an answer.