AI Agents Are Making Software More Like Biology
Most software people are taught early on, practically in kindergarten, about the separation between the application layer and the database layer.
This paradigm says that data and "business logic" are like oil and water. Data lives in a database; logic lives in code. Hard-coding data into your code is frowned upon, although we all do a bit of it sometimes.
The two layers are decoupled and even interchangeable. You can connect a new application to an existing database. Likewise, you can run your application against a different database (e.g. a local database with fake data for testing).
An important aspect of this approach is that while the data in the database can influence how the application behaves and how it is configured, it cannot change what the application does at its core, its "business logic".
For example, changing your personal data in a note-taking app cannot turn that app into a 3D game, because the code of the app itself doesn't change.
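To make the separation concrete, here is a minimal Python sketch (my illustration, not taken from any real app): the counting logic never learns whether it is talking to a production database or to a throwaway in-memory one seeded with fake data.

```python
import sqlite3

# Application layer: business logic that doesn't care where the data lives.
def count_notes(conn) -> int:
    return conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0]

# Database layer: for a test run, swap the production connection for a
# throwaway in-memory database with fake data. The logic above is untouched.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES ('buy milk'), ('ship v2')")

print(count_notes(conn))  # 2 -- same application, different database
```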
Early LLM usage patterns didn't change this
When people started developing with LLM APIs such as the ChatGPT API, we started seeing prompts inside code. You would hear phrases like "English is the new programming language", but those prompts were just business logic living in the application layer.
Between 2023 and 2025, we developed a lot of AI applications and integrations within Zenva: AI closed-caption translation, the AI tutor in the lessons, and multiple internal automations. But all of them were just chains of API calls to various LLM APIs, all hard-coded in application code or configuration files. Nothing truly new under the sun in terms of paradigms.
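This is roughly what that pattern looks like in practice. Below is a minimal sketch using the OpenAI Python SDK; the caption-translation framing echoes the Zenva example above, but the function, prompt, and model name are my own illustrative choices:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt is "business logic", hard-coded into the application layer,
# no different in kind from an if-statement or a SQL query template.
PROMPT = "Translate the following subtitle line into Spanish:\n\n{line}"

def translate_caption(line: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[{"role": "user", "content": PROMPT.format(line=line)}],
    )
    return response.choices[0].message.content
```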
So what changed?
For the last year or so, I've been using Cursor to help me write code in both existing and new codebases. Nothing new there; it's what everyone in tech has been doing for a while now.
The new pattern, for me at least, emerged when I started using the agent for much more advanced, multi-hour tasks. This became possible because agents got better and because they can now be integrated with other tools (such as web browsers) via MCP servers.
I started using the AI agent more and more for work where we weren't actually writing any code: open-ended tasks without clear constraints, such as editing a book, or deep-diving into a research topic or our company data. In this scenario, the interaction itself and the artifacts it leaves behind become both the business logic and the database.
Note that while I personally use Cursor, this can be done with Claude Code (the most popular one), Codex, Open Code, and many other AI agents! They all have broadly similar capabilities: they can work for multiple hours at a time and use a wide set of tools via MCP servers on tasks that don't have clear boundaries or scopes.
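For the curious, hooking an agent up to a tool like a web browser usually amounts to registering an MCP server in the agent's configuration. The exact file and server package vary by agent; the snippet below is a sketch of the common shape, using Cursor's .cursor/mcp.json and the Playwright browser server as one typical combination:

```json
{
  "mcpServers": {
    "browser": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```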
When working on a project, I create an AGENT.MD file in the project folder (an emerging standard for AI agents) that explains what the project is about and lays out the main guidelines.
Importantly, I always write in there that whenever the project evolves, the agent needs to keep that file updated. So essentially, as the program (the agent) runs and things happen, the code (the instructions) changes over time, modifying future executions of that program.
Moreover, I ask it to keep track of the changes we've made, the things we've decided, insights, and data summaries, all in text files. This means the data is going to modify future instances of the software.
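Concretely, the top of such a file might look like the sketch below. The project details and paths are invented for illustration; the part that matters is the self-updating instruction at the end:

```markdown
# AGENT.MD

## What this project is
An editing pass over the second edition of our book. The manuscript
lives in /chapters; decisions and notes live in /notes.

## Guidelines
- Keep the author's voice; flag any rewrite longer than a paragraph.
- After each working session, append what we changed, what we decided,
  and any insights to /notes/decisions.md.
- Whenever the project evolves, update this file so that future
  sessions start from the current state, not from the original brief.
```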
We are getting closer to a biology-like software substrate
In biology, there is no separation between hardware, data and application.
The same molecule can be structural (hardware), carry signals (data flows, state), and catalyze reactions ("application", if we can even call it that).
None of those things can be separated in biology, not even the running instance itself: if execution stops, the organism is dead.
Software development with self-actualizing AI agents is starting to blur those traditional boundaries, and the result resembles a more biology-like system of interactions.