Designing Software in the Age of AI: What Could Go Right?

Cory Doctorow gave us "enshittification" — the slow, deliberate degradation of software for profit. This post is about the opposite. How AI is shifting incentives, restoring makers to their highest purpose, and giving us a genuine shot at building software that actually serves the people using it.

Let's start with a word. In 2022, writer and activist Cory Doctorow coined the term "enshittification" to describe something many of us had been living through but couldn't quite name. His definition was precise: the process by which online platforms and products decline in quality. They start off serving users well to attract them, then degrade the experience to serve business customers, and finally degrade it for everyone to extract maximum short-term profit for shareholders.

The American Dialect Society made it their word of the year in 2023. It felt earned.

But this post is about the opposite. De-shittification. What happens when the incentives shift, the tools change, and the people building software get the chance to actually build for the people using it? What could genuinely go right?

How we got here

Enshittification isn't an accident. It's a business model. Doctorow describes the mechanism as "twiddling": the continual, incremental adjustment of a platform's parameters in search of marginal profit improvements, without regard for any other goal. A little more friction here. A dark pattern there. A once-free feature buried behind a paywall. No single change is dramatic enough to trigger a revolt, but each one quietly makes the experience worse.

The people buying enterprise software were rarely the people suffering it. Procurement optimised for features and price. The actual humans who had to use the thing every day weren't in the room. And once users were locked in, with switching costs high enough and alternatives inconvenient enough, there was no longer any real incentive to maintain quality. The value had been extracted. The product could coast.

Design tried to fix this from the inside with user-centred design, design thinking, and discovery sprints. The methodology was right, but the execution was often theatre. The "user" in UX was frequently a product manager's assumptions, laundered through enough Post-its to feel like research. Low-fidelity wireframes produced low-fidelity feedback. We were asking people to suspend disbelief, to imagine the grey boxes were a real product, and then wondering why the insights, while useful, often felt thin.

Time pressure made everything worse. There was never time to simplify, because the complexity ate up the time. A self-perpetuating cycle.

Let's start with an admission

A lot of us who build software don't enjoy what we've become.

A developer colleague put it perfectly recently. He doesn't like writing code. He likes making things. I'm a designer and I feel exactly the same way. I didn't get into this to draw boxes and arrows through endless rounds of stakeholder review. I got into it to make things that didn't exist before. Things that worked. Things that helped people.

Somewhere along the way, the making got buried under the process. And quietly, gradually, a lot of very talented people stopped feeling like makers and started feeling like people who generate artefacts for other people's approval.

AI changes that. But not in the way most of the hype suggests.

What AI actually unlocks

The first thing AI changes is fidelity. We can now create realistic, high-quality prototypes in hours rather than weeks. This matters more than it sounds.

Real reactions come from real-feeling things. The slight hesitation before clicking something unfamiliar. The moment someone looks for something in the wrong place. The expression when an error message says something unexpected. You cannot get that from a wireframe with grey boxes and placeholder text. Show someone something that feels genuinely real and the quality of what they tell you changes completely. They stop critiquing the prototype and start reacting to the experience.

We can also augment real user research with synthetic users: AI-generated personas that stress-test logic, surface obvious gaps and eliminate cheap mistakes before we spend real people's time on them. It is worth being honest about what synthetic users can and cannot do. They work well for early-stage testing and for catching what is clearly broken. They do not replace the depth and unpredictability of real human behaviour: the workaround someone invented because the actual workflow doesn't match the designed one, the thing a participant does in a session that makes the whole room go quiet. But used well, they mean we arrive at the real research session having already found the obvious mistakes. The human conversation becomes richer because we are not wasting it on things we could have caught earlier for free.

More rolls of the dice

Here is where the business case gets interesting. And it is not the argument most people are making.

The goal is not to deliver the same thing faster. It is to have more shots on goal.

Under the old model, you got one or two big swings at getting it right before the budget ran out or the appetite disappeared. So everyone became conservative. Attached to their assumptions. Because being wrong was expensive. Sunk cost dressed up as a roadmap.

When each attempt costs less, you can afford to be wrong more often. Which paradoxically means you reach the right answer much faster. "Fail fast" became a cliché because it got detached from what it actually means. It is not about celebrating failure. It is about reducing the cost of being wrong to the point where you can afford to find out sooner. Every wrong turn you identify early is a right turn you get to take instead.

Consider this. Sometimes, we can build a first version of something faster than we can schedule the meeting to decide whether we should build it. When that becomes true, the whole conversation about risk, investment and sign-off starts to look very different.

The build, measure, learn loop was always the right idea. It just ran at the wrong speed to work in most organisations. Tighten that loop enough and something qualitatively different happens. You stop guessing and start knowing. The bottleneck is no longer the making. Which means the question becomes an organisational and cultural one. Can your processes and workflows move at the same speed as your tools? That is worth sitting with.

Restoring people to their highest purpose

This is what gets lost in the efficiency conversation.

AI does not just make coding faster. It fundamentally rebalances where human attention goes. The scarce, valuable resource, which is thoughtful human judgment about what people actually need, stops being crowded out by the mechanical work of expressing it.

The designer stops being a production machine and starts spending their time understanding how people actually behave. The business analyst stops writing user stories and starts asking whether we are solving the right problem for the right person. The developer stops writing boilerplate and starts getting creative and pushing what can be done.

Every role gets restored to what it was supposed to be. The makers get to make again.

That matters beyond job satisfaction, though job satisfaction matters enormously. Disengaged makers make disengaged things. The enshittification of software and the quiet frustration of the people building it are not unrelated. When the incentive is extraction rather than quality, the work feels like it. When creative energy gets unblocked and people are genuinely building for users, the quality of what gets built changes with it.

People who love what they do make things worth loving.

The question to ask

Enshittification persists where switching costs are high, users are locked in and there is no meaningful pressure to do better. AI lowers switching costs. It compresses the effort required to build alternatives. It makes the threat of a better product more credible, more quickly.

That changes the incentive structure. Not automatically. Not magically. But meaningfully.

If you commission software design and development, this is the moment to raise your expectations.

Do not ask suppliers how fast they can build it. Ask how fast they can find out they are wrong. Ask what their process removes, not just what it adds. Ask to see something real in the first two weeks, not a presentation about what they are planning to make. Ask what your users will actually do in their first five minutes, and how your supplier intends to find out.

The organisations that embrace this shift do not just build better software. They build the right software. For the people who actually have to use it. Faster, with less waste, with tighter feedback loops, and with teams who remember why they got into this work in the first place.

The technology becomes invisible. The human work becomes visible again.

That is de-shittification. And it was always the point.

