Epic as Code: Why AI Is Finally Forcing the Requirements Problem Into the Open

For decades, the bottleneck in software delivery was building the thing. You had the idea, you had the specification - however vague - and you fed it into a machine made of developers, sprints, and standups. The machine was slow and expensive, so the organisation built a whole bureaucracy to manage it. Portfolio layers, program increments, PI planning ceremonies that consume entire weeks. Architecture review boards. Agile coaches coaching other agile coaches. All of it, at its root, was load-bearing scaffolding for a slow build process.

AI-assisted development is pulling that scaffolding away. Not gradually. Quickly.

When a competent developer with the right tooling can produce in a day what used to take a sprint, the entire governance apparatus built around slow delivery starts to look absurd. The cost model breaks. The timeline assumptions collapse. And suddenly the constraint is not where anyone expected to find it.

The pressure is reversing direction

This is the dynamic that most organisations are not prepared for, and it is already happening in pockets everywhere. Development used to be the long tail - the part where ideas went to get slowly, painfully realised. Management's job was to funnel requirements in and wait for output. That direction of flow made sense when building was the hard part.

Now building is becoming cheap. Fast. Almost mechanical when the inputs are good. And here is the thing about AI coding assistants that nobody in portfolio management wants to hear: they are absolutely ruthless about bad inputs. You hand an AI a vague epic with half-formed acceptance criteria and it will cheerfully generate something that looks like software, compiles without complaint, and solves the wrong problem entirely. It fills the gaps you left with its best guess. Its best guess is often spectacular nonsense.

So developers - freed up from the grinding execution work, finally able to lift their heads and think about what they are actually building - start asking questions. Sharp questions. Questions about data consistency rules that nobody has written down. About edge cases in asynchronous business processes that have never been modelled. About what actually happens when two events arrive out of sequence. These are not unreasonable questions. They are the questions any serious engineer should be able to ask. The difference is that now they have the time and the cognitive space to ask them.

The questions land on the desks of Product Managers, Business Analysts, Scrum Masters, Portfolio Managers. And most of these people are not equipped to answer them. Not because they are not smart - many of them are very smart - but because the skill set that got them into those roles was a different skill set. Managing stakeholders. Facilitating alignment. Writing epics that are ambitious enough to get approved and vague enough not to be wrong. Being wrong at the requirement level used to be invisible. It only showed up as delivery problems, and delivery problems were the development team's fault.

That comfortable arrangement is over.

The requirement is now load-bearing in a new way

When the build cycle compresses from months to weeks, the cost of a bad requirement is not absorbed across a long delivery timeline. It hits immediately. You get working software for the wrong thing, fast. And then you have to explain that to someone.

This is where the concept I want to introduce becomes necessary: Epic as Code.

I mean this literally, not metaphorically. The development of early requirement artefacts needs the same alertness, the same precision, and the same willingness to crash and revise that we expect in executable software. The epic is not a narrative artefact. It is a specification. And specifications should be treated with the discipline of code.

Code has properties that PowerPoint does not. It is unambiguous in a way that natural language cannot be. It forces you to define your data structures, which forces you to understand your domain. It fails loudly when the logic does not hold. If you have not thought through the consistency rules, the code will not run. There is no "we will figure this out in a later sprint" moment. The gaps show up immediately.
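To make "fails loudly" concrete at the requirement level, here is a minimal sketch. The domain and every field name in it are invented for illustration; the point is only that an acceptance criterion expressed as a data structure with an explicit invariant breaks immediately, where the same criterion on a slide would sit unchallenged for months.

```python
from dataclasses import dataclass

# Hypothetical example: one acceptance criterion from an epic, written as a
# structure with invariants instead of a prose bullet point.
@dataclass(frozen=True)
class RefundPolicy:
    max_refund_pct: int      # share of the order value that may be refunded
    approval_threshold: int  # amount above which manual approval is required

    def __post_init__(self):
        # The gaps a deck would hide surface here as immediate failures.
        if not 0 <= self.max_refund_pct <= 100:
            raise ValueError("max_refund_pct must be a percentage (0-100)")
        if self.approval_threshold < 0:
            raise ValueError("approval_threshold cannot be negative")

# A thought-through policy constructs cleanly...
ok = RefundPolicy(max_refund_pct=80, approval_threshold=500)

# ...while the half-formed one from the workshop fails loudly, right now,
# not three sprints into delivery.
try:
    RefundPolicy(max_refund_pct=150, approval_threshold=-1)
    failed_loudly = False
except ValueError:
    failed_loudly = True
```

Nothing about this requires a production stack; a few dozen lines of typed structure already force the conversation that the vague epic was designed to defer.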

Early vibe coding attempts - just prompting an AI with a rough idea and seeing what comes out - failed for exactly this reason. The AI confidently and completely fills whatever shape you gave it, including all the unexplored corners. The result looks finished and is hollow. The same failure mode applies to the front end of the lifecycle when requirements are treated as narrative artefacts to be discussed rather than structures to be validated.

An epic that is developed with code-level rigour does not have to be a real application. It does not need to be production-ready. It needs to be a working expression of the core idea, exercised in the real environment, checked against actual constraints, and built up as a strategic backbone for what comes next. It is a probe. And like any good probe, its second most important function is failing when the idea does not hold. Kill it early if it does not fit. That is not failure. That is the system working correctly.
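A probe can be as small as one function that pins down an unwritten rule. Picking up the earlier question about events arriving out of sequence, here is a hedged sketch (the sequence-numbered order events are an invented domain) of the kind of artefact that forces a decision: either the ordering rule exists and this runs, or it does not and the gap is visible today.

```python
# Hypothetical probe: what is our rule when events arrive out of order?
# Writing it as code forces the rule to exist before delivery starts.
def apply_in_order(events):
    """Apply (sequence_number, payload) events in sequence order,
    buffering any event that arrives before its predecessors."""
    applied = []
    buffer = {}
    next_seq = 1
    for seq, payload in events:
        buffer[seq] = payload
        # Drain every event that is now contiguous with what was applied.
        while next_seq in buffer:
            applied.append(buffer.pop(next_seq))
            next_seq += 1
    return applied

# Events 2 and 3 arrive before event 1; the probe demonstrates the chosen
# consistency rule instead of leaving it to "a later sprint".
result = apply_in_order([(2, "pack"), (3, "ship"), (1, "pay")])
```

Whether buffering is even the right answer is exactly the strategic question the probe exists to surface; a version that rejects out-of-order events would be an equally valid probe of a different rule.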

Who does this work?

Not the current product management population, at least not without significant retooling. The people who can do this - who can bridge the strategic intent and the technical expression, who can pick up tooling without getting lost in it, who understand both the organisational constraints and the data model - these are Enterprise Architects and the more technically grounded Analysts. Scarce resources. But exactly positioned for this moment.

These people need access to what I would call the AI twins of the enterprise silos: the actual code repositories, the wikis, the knowledge stores, the architecture models - whatever form the organisational memory takes, however imperfect. Nobody should need to call a meeting to understand how data flows between two systems. If the meeting is still necessary, that is a symptom of an information architecture problem that has been deferred for years, probably decades. AI tooling can help pull it out of the Confluence graves and SharePoint archives into something usable, but someone has to do that work before the requirement shaping starts.

The truly advanced organisations will have a knowledge graph, a capabilities model, a RAG store that makes this information queryable. Most will not be there. Most will be working with the digital mess they accumulated through twenty years of tool proliferation and undocumented decisions. That is fine. You start where you are. But you have to start.

What changes is the output of this early phase. Instead of workshop outputs and whiteboard photos and strategy decks, you have working artefacts. Runnable probes. Demonstrated assumptions. Code that either fits into the environment or breaks against it. When the conversation about whether to continue shifts from opinions to findings, the quality of that conversation changes fundamentally. The politics do not disappear, but they become harder to sustain against a working proof of concept.

Once a project moves into proper delivery, these early artefacts become the most valuable input the team receives. Not because they are complete specifications, but because the hard questions have already been asked, the key assumptions have been tested, and the shape of the problem is clear. The AI-assisted delivery cycle can then run fast and clean, which is what it was designed to do.

The organisations that figure this out first will not get there by hiring more Scrum Masters. They will get there by taking the requirement discipline seriously - by treating the front end of the lifecycle with the same rigour they finally learned to apply to delivery. Epic as Code is not a tool recommendation or a process framework. It is a reorientation of where the hard work happens.

It happens earlier than you think. It starts now.