
The SDLC Is Broken in an Agentic AI World. Here’s What Has to Change.


Colin Constable

CTO & Co-founder

The standard Software Development Life Cycle—Planning, Requirements, Design, Development, Testing, Deployment, Maintenance—was built for a world of deterministic software, human actors, and perimeter-based security.

Agentic AI breaks all three assumptions.

It’s not just that LLMs are powerful. It’s that we’re now connecting them to tools, data sources, and backend systems, and letting them act autonomously. An LLM that answers questions is a chatbot. An LLM that books your flights, queries databases, triggers deployments, and coordinates with other agents? That’s an agentic flow.

And it changes everything about how software gets built, shipped, and trusted.

Rather than rethink all seven SDLC phases one by one, it’s more useful to look at where the real friction lives: three hurdles that every idea must clear to reach production.

Hurdle 1: Idea to Engineer

In the traditional SDLC, this is the Planning and Requirements phase—defining scope, documenting requirements, getting stakeholder sign-off. It was deliberately slow because getting it wrong was expensive.

Agentic AI is compressing this phase at breathtaking speed. Natural language is now a design tool. Product managers, domain experts, and founders can describe what they want and an agent can start scaffolding it—not just generating text, but taking actions, calling APIs, and wiring systems together. What used to take weeks of requirements gathering and design reviews can now happen in a conversation.

But the Planning and Requirements phase existed for a reason, and the need for it hasn’t gone away—it’s just that the window for doing it has shrunk dramatically. Ideas race into implementation before anyone has thought through trust boundaries, data access patterns, or what an autonomous agent should and shouldn’t be allowed to do.

The traditional SDLC assumed you could define scope upfront with reasonable certainty. Agentic systems introduce probabilistic behavior, chains of autonomous decisions, and tool invocations that span multiple services. Requirements must now account for what data agents can access, how they authenticate, what actions they’re authorized to take, and what happens when an agent makes a wrong decision three steps deep in a chain. These are still planning and requirements questions—they just need to be asked at the speed the technology now moves.
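To make those questions concrete, here is a minimal sketch of what they might look like captured as a machine-readable capability manifest for a single agent. The field names and structure below are illustrative assumptions, not any existing standard:

```typescript
// Illustrative capability manifest for one agent in a flow.
// Every field name here is hypothetical, not part of any standard.
interface AgentManifest {
  agentId: string;         // stable machine identity, not a display name
  dataAccess: string[];    // data scopes the agent may read or write
  actions: string[];       // allowlisted actions it may take
  authentication: "mtls" | "signed-jwt" | "cryptographic-identity";
  maxChainDepth: number;   // how deep in an agent chain it may operate
  onError: "halt" | "rollback" | "escalate-to-human";
}

const flightBooker: AgentManifest = {
  agentId: "agent:flight-booker",
  dataAccess: ["calendar:read", "payments:charge"],
  actions: ["search-flights", "book-flight"],
  authentication: "cryptographic-identity",
  maxChainDepth: 3,
  onError: "escalate-to-human",
};
```

Even a simple artifact like this forces the planning questions to be answered before an agent is wired into production systems.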

Hurdle 2: Engineer into Code

In the traditional SDLC, this spans System Design and Development—architecture documents, data models, UI/UX specs, and then the careful work of writing code against those designs. These were the phases where engineers translated intent into implementation, and the handoff between design and code was a natural quality gate.

Agentic coding tools are collapsing that entire sequence. Agents don’t just autocomplete lines—they write entire modules, scaffold infrastructure, and wire up integrations autonomously. Design and development are merging into a single motion. What used to be a deliberate, phased process now happens at the speed of prompting.

But velocity without visibility is a liability.

Code generated at speed is code that may expose MCP (Model Context Protocol) servers to the open internet, hardcode secrets, or grant agents broader access than intended. MCP is becoming the standard way agents connect to external tools and data sources. It’s the connective tissue that makes agentic flows possible. But every MCP server is a bridge between an autonomous agent and your critical systems, and every bridge is an attack surface.
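What does it look like to catch these mistakes in the tooling itself? One small example is a fail-closed pre-start check for a self-hosted tool server. The config shape and specific checks below are illustrative assumptions, not part of the MCP specification:

```typescript
// Minimal pre-start guard for a self-hosted, MCP-style tool server.
// ServerConfig and these checks are illustrative, not MCP APIs.
interface ServerConfig {
  bindAddress: string;   // interface the server listens on
  toolScopes: string[];  // what connected agents are allowed to invoke
}

function assertSafeToStart(cfg: ServerConfig): void {
  // Fail closed if the server would be reachable from everywhere.
  if (cfg.bindAddress === "0.0.0.0" || cfg.bindAddress === "::") {
    throw new Error("refusing to listen on all interfaces");
  }
  // Secrets come from the environment or a secret store, never source code.
  if (!process.env.TOOL_API_KEY) {
    throw new Error("TOOL_API_KEY must be injected at runtime");
  }
  // Wildcard scopes grant agents broader access than anyone intended.
  if (cfg.toolScopes.includes("*")) {
    throw new Error("wildcard tool scopes are not allowed");
  }
}

// Throws unless TOOL_API_KEY is set and the config is locked down.
assertSafeToStart({ bindAddress: "127.0.0.1", toolScopes: ["flights:search"] });
```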

The Design phase used to be where architects thought about security boundaries, data flow, and system interfaces. That thinking is more important than ever—it just can’t live in a separate phase anymore. It has to be embedded in the tools and platforms engineers build on. The challenge shifts from “Can we build it fast enough?” to “Can we trust what was built and what it connects to?”

Hurdle 3: Code You Can Trust

In the traditional SDLC, this is where Testing, Deployment, and Maintenance live: the phases that were supposed to catch what the earlier phases missed, get software safely into production, and keep it running. These were the guardrails.

This is the hurdle that kills most agentic AI initiatives before they ever reach production.

Trust in the agentic era isn’t just “Does it pass tests?” Traditional testing assumed deterministic outputs from human-initiated actions. Agentic systems require evaluation of multi-step autonomous workflows, red-teaming for prompt injection and tool poisoning, testing for data leakage across trust boundaries between agents and services, and verifying that each agent in a chain only accesses what it’s authorized to access.
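One way to express that kind of evaluation is as a chain-level authorization check over a recorded run. The trace format and agent names below are hypothetical; the pattern is asserting that every tool call in a multi-step flow stays inside the calling agent’s allowlist:

```typescript
// Sketch of a chain-level authorization assertion for an eval suite.
// Assume instrumentation records which agent invoked which tool.
interface TraceStep {
  agent: string;
  tool: string;
}

const allowlist: Record<string, Set<string>> = {
  "agent:planner": new Set(["search-flights"]),
  "agent:booker": new Set(["book-flight", "charge-card"]),
};

function assertChainAuthorized(trace: TraceStep[]): void {
  for (const step of trace) {
    const allowed = allowlist[step.agent];
    if (!allowed || !allowed.has(step.tool)) {
      throw new Error(`${step.agent} invoked unauthorized tool: ${step.tool}`);
    }
  }
}

// A captured run where the planner tried to charge a card two steps deep:
assertChainAuthorized([
  { agent: "agent:planner", tool: "search-flights" },
  { agent: "agent:planner", tool: "charge-card" }, // throws: out of scope
]);
```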

The risks are compounded because agents act in chains. One agent calls a tool, which triggers another agent, which queries a database, which returns results to a third agent that takes an action. Prompt injection at any point in that chain can cascade. Tool poisoning, where malicious instructions are hidden in tool descriptions, can redirect an entire agentic workflow. And traditional firewalls and VPNs were never designed for non-human entities making autonomous tool invocations at machine speed.
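One concrete mitigation for tool poisoning is to pin a hash of every tool description at review time and fail closed if the description later changes. A minimal sketch, assuming descriptions get a human review before approval:

```typescript
import { createHash } from "node:crypto";

function sha256(text: string): string {
  return createHash("sha256").update(text, "utf8").digest("hex");
}

// At review time: pin the hash of the description a human approved.
const approved = "Searches flights. Args: origin, destination, date.";
const pinned = sha256(approved);

// At runtime, before registering the tool with any agent: verify it.
function checkToolDescription(description: string): void {
  if (sha256(description) !== pinned) {
    // A changed description may carry injected instructions. Fail closed.
    throw new Error("tool description changed since review: possible poisoning");
  }
}

checkToolDescription(approved); // passes
// checkToolDescription(approved + " Ignore prior instructions."); // throws
```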

Deployment is harder too. You’re not just shipping an application. You’re exposing MCP servers, inference endpoints, and agent-to-tool connections that all become live attack surfaces the moment they’re in production. And Maintenance? Agentic systems don’t just need bug fixes. They need continuous monitoring of agent behavior, prompt drift, model updates, and an evolving threat landscape that changes faster than any patching cycle.

Trust can’t be an afterthought. It must be architectural, baked into the protocol layer from the start.

So What Does Architectural Trust Actually Look Like?

If agentic AI is going to move from demos to production, the infrastructure underneath it has to answer a few non-negotiable questions:

Can every entity in the chain prove who it is? Not just humans: every agent, every tool, every MCP server. In a traditional SDLC, identity management meant user logins. In an agentic world, you need cryptographic identity for non-human entities too. Without it, you can’t answer the most basic question: “Who is this agent and should it be allowed to do what it’s asking to do?” Shared secrets and API keys aren’t enough when agents are acting autonomously at scale.
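Here is a minimal sketch of what cryptographic identity for a non-human entity can look like, using an Ed25519 challenge-response with Node’s built-in crypto module (the flow is illustrative, not any specific product’s protocol):

```typescript
import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// The agent holds a private key; the verifier holds only the public key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The verifier issues a fresh random challenge so proofs can't be replayed.
const challenge = randomBytes(32);

// The agent proves its identity by signing the challenge.
// (Ed25519 signs the raw message, so the algorithm argument is null.)
const signature = sign(null, challenge, privateKey);

// The verifier checks the proof against the agent's known public key.
const isAuthentic = verify(null, challenge, publicKey, signature);
console.log("agent identity verified:", isAuthentic); // true
```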

Can you eliminate the attack surface instead of just defending it? Every open port on an MCP server, inference node, or data service is a target. The traditional approach is to harden those endpoints with firewalls, WAFs, and API gateways. But in an agentic world where the number of connections between agents and tools is exploding, defending every bridge doesn’t scale. The better question is: what if there were no visible bridges to attack (an ‘Invisible Architecture’ with no open inbound ports)?
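Here is a toy sketch of that pattern: both peers dial outbound to a rendezvous point, which splices their streams together, so neither peer ever listens on an inbound port. The relay below is illustrative only; a real deployment would also authenticate the peers and encrypt end to end:

```typescript
import { createServer, connect, Socket } from "node:net";

// Only the rendezvous listens; the peers it pairs expose no inbound ports.
let waiting: Socket | null = null;

const rendezvous = createServer((socket) => {
  if (!waiting) {
    waiting = socket; // first peer parks until a partner dials in
  } else {
    // Splice the two outbound connections together, both directions.
    waiting.pipe(socket).pipe(waiting);
    waiting = null;
  }
});

rendezvous.listen(9000, () => {
  // Both peers connect OUT; neither is reachable from the outside.
  const alice = connect(9000, "127.0.0.1", () => alice.write("hello from alice"));
  const bob = connect(9000, "127.0.0.1");
  bob.on("data", (d) => console.log("bob received:", d.toString()));
});
```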

Is encryption automatic and universal? When agents are passing sensitive data between services at machine speed, encryption can’t be optional, complex to implement, or dependent on developers getting it right every time. It has to be a property of the infrastructure itself: on by default, for every connection, with keys that the infrastructure never holds.
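A sketch of what “on by default, with keys the infrastructure never holds” can mean in practice: each party derives the session key locally from its own private key and the peer’s public key, so no intermediary ever sees it. X25519 plus HKDF and AES-256-GCM is one plausible construction, not a prescribed one:

```typescript
import {
  createCipheriv, createDecipheriv, diffieHellman,
  generateKeyPairSync, hkdfSync, randomBytes, KeyObject,
} from "node:crypto";

const alice = generateKeyPairSync("x25519");
const bob = generateKeyPairSync("x25519");

// Each side derives the same secret from its OWN private key and the
// peer's PUBLIC key, then stretches it into an AES-256 key with HKDF.
function sessionKey(myPriv: KeyObject, theirPub: KeyObject): Buffer {
  const shared = diffieHellman({ privateKey: myPriv, publicKey: theirPub });
  return Buffer.from(hkdfSync("sha256", shared, "", "agent-session", 32));
}

// Alice encrypts; the key never leaves her process.
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm",
  sessionKey(alice.privateKey, bob.publicKey), iv);
const ciphertext = Buffer.concat([cipher.update("sensitive payload"), cipher.final()]);
const tag = cipher.getAuthTag();

// Bob decrypts with his independently derived copy of the same key.
const decipher = createDecipheriv("aes-256-gcm",
  sessionKey(bob.privateKey, alice.publicKey), iv);
decipher.setAuthTag(tag);
console.log(Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString());
```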

Does a breach of one agent compromise the whole system? Centralized architectures concentrate risk. If an agent is compromised and it has a path to a centralized data store, the blast radius is enormous. By applying the Application Code Rule, which keeps security logic in application code rather than in the network protocol, we can verify that data is encrypted at the source between specific, known parties. This decentralized, peer-to-peer approach fundamentally limits the blast radius: a breach of one node stays a breach of one node.
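The blast-radius claim can be shown directly. In the sketch below, three nodes each hold their own key pair and every link uses a pairwise derived secret; compromising one node’s private key exposes only the links that node participates in (the node names are illustrative):

```typescript
import { diffieHellman, generateKeyPairSync } from "node:crypto";

// Three nodes, each with its own key pair; there is no central key store.
const nodes = {
  a: generateKeyPairSync("x25519"),
  b: generateKeyPairSync("x25519"),
  c: generateKeyPairSync("x25519"),
};

// Suppose node c is fully compromised: the attacker holds c's private key.
// They can derive only the secrets for links c participates in (a<->c, b<->c).
const acLinkKey = diffieHellman({
  privateKey: nodes.c.privateKey,
  publicKey: nodes.a.publicKey,
});
console.log("attacker recovers the a<->c link key:", acLinkKey.length, "bytes");

// The a<->b secret requires a's or b's private key, neither of which was
// ever shared with c or with any central service. The breach stays contained.
```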

These aren’t aspirational goals. They’re engineering requirements for any organization serious about moving agentic AI into production. And they’re the kinds of problems that have to be solved at the architecture and protocol level—not with another layer of middleware on top.

The New Reality

The first two hurdles, Idea to Engineer and Engineer into Code, are getting faster every month. Agentic AI is compressing timelines that used to take weeks into hours.

But the faster you move from idea to code, the more risk you’re shipping into production unless you solve the third hurdle: trust.

And trust in an agentic world is a fundamentally harder problem than it was in a human-driven one. It’s not just people logging in anymore. It’s autonomous agents calling tools, chaining decisions, and acting on your behalf across systems you may not even be monitoring in real time.

The SDLC isn’t dead. But it needs a new foundation: one where identity, encryption, and zero-trust connectivity aren’t phases you bolt on at the end, but properties of the architecture you build on from the start.

Without that foundation, speed is just a faster way to ship vulnerabilities.
