Wicked Python Trickery: Dynamically Patch a Python Function’s Source Code at Runtime
Alternatively titled: How I accidentally unlocked a new level of AI bot hackery (and why you probably shouldn’t do this at home... or in production)
Hello fellow datanistas!
Have you ever wondered if you could rewrite a Python function’s source code while your program is running? I stumbled into this rabbit hole recently, and what I found was both exhilarating and a little terrifying.
In this post, I share how I discovered a way to dynamically patch Python functions at runtime, why this matters for building more flexible AI bots, and the real-world lessons (and warnings) I picked up along the way. If you’re curious about the edges of Python’s runtime or want to see how this can supercharge LLM-powered agents, you’ll want to read on.
It all started with a simple experiment: could I swap out a function’s code on the fly? Turns out, Python’s .__code__ attribute lets you do exactly that. With a bit of compile and exec magic, you can inject new logic into a running function—no restart required.
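Here's a minimal sketch of the trick (the function and its replacement source are illustrative): compile a new definition from a string, then assign its code object onto the existing function.

```python
def greet():
    return "hello"

# New source for the same function, written as a string.
new_source = '''
def greet():
    return "goodbye"
'''

# Compile the string, exec it into a scratch namespace, then swap
# the fresh code object into the live function — no restart required.
namespace = {}
exec(compile(new_source, "<patched>", "exec"), namespace)
greet.__code__ = namespace["greet"].__code__

print(greet())  # now prints "goodbye"
```

Every existing reference to greet picks up the new behavior, because the function object itself is unchanged; only its code was replaced.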
At first, this felt like a party trick. But as I dug deeper, I realized it could solve a real pain point in my LLM project, LlamaBot. My old AgentBot mixed up too many responsibilities—function execution, tool selection, and user response—all tangled together. I wanted something more modular, where the bot could pick tools but let me control execution.
Enter ToolBot. By letting the LLM generate Python functions as strings, compiling them, and executing them in the current namespace (with access to all my globals!), I could build a much more powerful and flexible agent. This approach, inspired by Marimo’s generative UI ideas, means I don’t have to write a bespoke tool for every possible operation. Instead, the LLM can generate and run code that interacts with any variable in my environment.
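The core move looks roughly like this (the generated function and the data variable are made up for illustration — in ToolBot the string would come back from the LLM):

```python
# Pretend this string came back from the LLM.
llm_generated = '''
def summarize(numbers):
    return {"total": sum(numbers), "count": len(numbers)}
'''

data = [3, 1, 4, 1, 5]  # a variable already living in my environment

# Compile and exec into the current global namespace, so the new
# function can see and operate on existing variables.
exec(compile(llm_generated, "<llm>", "exec"), globals())

result = summarize(data)
print(result)  # {'total': 14, 'count': 5}
```

Because the exec targets globals(), the generated function lands in the same namespace as everything else, which is exactly what makes it powerful — and exactly what makes it dangerous.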
Of course, this is not without risk. Running arbitrary code is dangerous—one malicious output could wreak havoc. I’m exploring ways to sandbox or restrict execution (hello, RestrictedPython!), but for now, this is strictly an experiment, not production code.
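A crude sketch of the restriction idea, using only a stripped-down builtins dict — RestrictedPython does this far more thoroughly, so treat this as the concept, not a real sandbox:

```python
# Allow only a tiny whitelist of builtins for untrusted code.
safe_builtins = {"len": len, "sum": sum, "range": range}

untrusted = "result = sum(range(10))"
scope = {"__builtins__": safe_builtins}
exec(compile(untrusted, "<untrusted>", "exec"), scope)
print(scope["result"])  # 45

# Anything outside the whitelist fails: open() is not available.
try:
    exec("open('secrets.txt')", {"__builtins__": safe_builtins})
except NameError as e:
    print("blocked:", e)
```

Note that this alone is famously escapable (e.g. via object introspection), which is why purpose-built tools like RestrictedPython exist.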
The real lesson? Python’s runtime is more malleable than I ever imagined, and separating tool selection from execution in LLM agents opens up a world of possibilities. But with great power comes great responsibility (and a healthy dose of paranoia).
You can dynamically patch Python functions at runtime to build more flexible AI agents—but it’s a double-edged sword that demands caution and thoughtful design.
Have you ever used Python’s dynamic features for something unconventional (or risky)? What’s the wildest thing you’ve tried, and what did you learn?
Curious about the nitty-gritty details and code examples? Check out the full post here: Wicked Python Trickery: Dynamically Patch a Python Function’s Source Code at Runtime. If you find it useful (or cautionary!), feel free to share or subscribe for more experiments.
Happy Coding,
Eric
P.S. My former colleague Nathan Walsh is hiring for a Senior Applied AI/ML Engineer role at TetraScience. Consider applying if you feel you’re a good fit!

