Websites That Work Perfectly - Until They Don't

By Janus Boye

Tom Cranstoun as seen on stage at CMS Summit 25

Conversations about AI and the web increasingly arrive with an implicit countdown attached.

In a recent member call with Tom Cranstoun, there was certainly a sense of acceleration.

Tom has spent many years working at the intersection of CMS platforms, accessibility and emerging AI-driven architectures, and is currently writing a book on what he describes as the web’s ‘invisible users’. AI agents are already interacting with websites. Discovery and comparison behaviours are shifting. Assumptions that have underpinned web design for years are starting to creak under new forms of use.

At the same time, much of what was discussed felt oddly familiar.

The failure modes were not new. The design shortcuts were not new. Even the users being failed were not new. What has changed is the scale at which these failures now surface, and the leverage they have when they do.

As the conversation unfolded, it became clear that several different timelines are colliding. The problems themselves are old. The agents exposing them are already here. The organisational response, however, moves far more slowly. Understanding that mismatch is more useful than any single prediction about how quickly “the web will change”.

That framing shaped much of the discussion.

Invisible users are not a new category

Tom introduced the notion of invisible users to describe groups that are systematically overlooked in web design. The term landed not because it was provocative, but because it was accurate.

These users are invisible in two ways. They are invisible to site owners, blending into analytics or being filtered out entirely. And the interfaces we design are often invisible to them, relying on signals they cannot perceive.

Blind users have lived with this for decades. Animations, colour cues, spinners and toast notifications convey meaning visually, but often disappear entirely when translated through assistive technologies. What looks like helpful feedback to one user is silence to another.

AI agents are now encountering exactly the same problems.

Old failures, newly exposed

One of the most striking aspects of the call was how directly the discussion echoed long-standing accessibility concerns.

Non-semantic HTML. Content that only appears after JavaScript execution. Application state that exists entirely in client-side code. Feedback that flashes briefly and then vanishes. These are not emerging anti-patterns. They have been well understood for years.

What has changed is who is now affected by them.

When an interaction is confirmed only via a transient toast notification, a screen reader user may never hear it. An AI agent may never register that anything happened at all.

A brief moment in the call illustrated this neatly. Not everyone was familiar with the term “toast notification”, the small confirmation message that pops up and then disappears after a few seconds. That gap was telling. The technical debt here is not only in code, but in shared vocabulary: if teams lack a common language to recognise and question such patterns, they are unlikely to challenge them consistently in practice. And if experienced practitioners do not share a consistent understanding of the signal being sent, machines have little chance of interpreting it correctly. From an agent’s perspective, the action has failed: the flow appears broken, so it abandons the task.

Several people in the room noted how uncomfortable this symmetry is. The web has known how to do better for a long time. We simply did not feel enough pressure to act.

Agents are already in the flow

Tom outlined several categories of AI agents already interacting with live websites.

Some operate server-side and never execute JavaScript. Others run in the browser as extensions. Some automate full browsers at scale. Others operate locally on users’ devices with limited context windows.

Their technical capabilities differ, but their requirements converge around the same foundations: semantic structure, explicit state and unambiguous data.

A key point that prompted discussion was that site owners can no longer reliably distinguish between humans and machines. Modern agents identify as humans, execute JavaScript, fill forms and complete transactions. Forking experiences for “bots” and “real users” may feel reassuring, but it is increasingly fragile.

This is where the timelines begin to diverge. Agent capability is advancing quickly. Commercial influence arrives early, through recommendation and comparison, long before full automation of checkout becomes routine. Organisational change, meanwhile, lags behind both.

When inference replaces data

A concrete example discussed in the call illustrated the risk of leaving machines to infer meaning from presentation rather than giving them explicit data.

An AI agent researching river cruises returned prices in excess of £200,000 per person. The actual prices were closer to £2,000–£4,000. The root cause was mundane: European number formatting combined with missing guardrails.

What mattered was not the mistake itself, but the chain of absence around it. There was no range validation to flag an implausible value. No comparison against similar offers. No structured pricing data to anchor interpretation. The result was presented with the same confidence as verified information.
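
As a rough sketch, not taken from the call, of how such a misreading can happen and how cheaply it can be caught: the example below assumes a price written in European notation (“2.000,00” for 2000.00) and hypothetical plausibility bounds for the product category.

```typescript
// Illustrative only: one plausible way a European-formatted price gets misread,
// plus the kind of range guardrail that would have flagged the result.

// Naive approach: strip every non-digit character and parse what is left.
// "2.000,00" (European notation for 2000.00) collapses to "200000".
function parsePriceNaively(text: string): number {
  return Number(text.replace(/[^\d]/g, ""));
}

// Locale-aware approach: "." is a thousands separator, "," the decimal mark.
function parsePriceEuropean(text: string): number {
  return Number(text.replace(/\./g, "").replace(",", "."));
}

// Plausibility check with hypothetical bounds for this product category:
// flag values far outside the expected band instead of presenting them as fact.
function isPlausibleCruisePrice(pricePerPerson: number): boolean {
  const MIN_EXPECTED = 500;
  const MAX_EXPECTED = 20000;
  return pricePerPerson >= MIN_EXPECTED && pricePerPerson <= MAX_EXPECTED;
}

const listed = "2.000,00";
console.log(parsePriceNaively(listed));                          // 200000
console.log(parsePriceEuropean(listed));                         // 2000
console.log(isPlausibleCruisePrice(parsePriceNaively(listed)));  // false: hold for review
```

Stripping every separator turns “2.000,00” into 200000; a locale-aware parse and a one-line range check are enough to stop that figure being presented with confidence.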

Several members observed that this is how trust erodes. Not through spectacular crashes, but through small, plausible errors delivered authoritatively.

No trade-off to resolve

A recurring question from the room was whether designing for AI agents implies a trade-off with performance or human experience.

The answer was consistently no.

Semantic HTML improves parsing speed for browsers. Clear structure benefits assistive technologies. Persistent feedback reduces abandonment for all users. Explicit pricing builds trust regardless of who is reading it.

Rather than competing concerns, AI readiness and accessibility increasingly look like the same set of practices viewed from different angles.

What machines need is what disabled users have needed for decades.

Changes that make sense anyway

The conversation deliberately avoided calls for wholesale rewrites or new stacks.

Instead, it focused on patterns that are well understood and disproportionately effective:

  • Replacing disappearing toast notifications with persistent alerts

  • Ensuring core content is present in the served HTML, not injected later

  • Making application state visible in the DOM rather than hidden in JavaScript variables

  • Using basic schema.org markup to remove ambiguity around products and prices (both this and the persistent-alert pattern are sketched just below)
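
As a rough illustration rather than a recipe from the call, the sketch below pairs two of those patterns: a status message that stays in the DOM, where screen readers and agents alike can find it, instead of a disappearing toast, and a schema.org JSON-LD block that states a product’s price and currency explicitly. The element id, product name and values are hypothetical.

```typescript
// Hedged sketch: make feedback and pricing data explicit in the document
// itself, so humans, assistive technologies and agents all see the same state.

// 1. Persistent, DOM-visible feedback instead of a transient toast.
//    The message stays in the page and is announced via the ARIA live region;
//    an agent reading the DOM later still finds it.
function confirmAction(message: string): void {
  let region = document.getElementById("order-status");
  if (!region) {
    region = document.createElement("div");
    region.id = "order-status";
    region.setAttribute("role", "status"); // polite live region
    document.body.appendChild(region);
  }
  region.textContent = message;            // persists until replaced
}

// 2. Explicit product and price data via schema.org JSON-LD, so the price
//    never has to be inferred from styled text.
function addProductMarkup(name: string, price: string, currency: string): void {
  const script = document.createElement("script");
  script.type = "application/ld+json";
  script.textContent = JSON.stringify({
    "@context": "https://schema.org",
    "@type": "Product",
    name,
    offers: {
      "@type": "Offer",
      price,                  // e.g. "2000.00", unambiguous
      priceCurrency: currency // e.g. "GBP"
    }
  });
  document.head.appendChild(script);
}

confirmAction("Your booking request has been received.");
addProductMarkup("Danube river cruise, 8 nights", "2000.00", "GBP");
```

The design point is the same in both halves: the signal lives in the document itself rather than in a visual effect or a JavaScript variable, so every reader, human or machine, sees the same state.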

None of these ideas are new. What is new is the context in which they now operate.

Several members noted that improvements initially justified on accessibility grounds increasingly show measurable impact on conversion, resilience and debuggability.

As the discussion turned towards implementation, Justin Cook, President of Internet Marketing & Development at 9thCO and a Boye & Co member based in Toronto, offered a pragmatic perspective. Rather than focusing on novelty, he pointed out that many of the foundations already exist, and that the real work lies in using them deliberately to serve both people and machines.

As he put it, “Use Astro, Remix or Next.js on the front-end, use SSG for performance. MCP-UI will allow us to then build and control the experience for agentic interactions beyond a website.”

The point was not to prescribe a specific stack, but to underline that many of the patterns being discussed are already well supported by modern tooling, if organisations choose to prioritise them.

Beyond UX: machine experience

One proposal that generated significant discussion was treating machine experience (MX) as a first-class concern alongside UX.

The reasoning felt familiar. Accessibility only began to improve at scale when it moved from being “everyone’s responsibility” to having clear ownership, standards and review processes. When accountability is diffuse, progress stalls.

MX, in this framing, is not a new silo. It builds on existing competencies, often in accessibility and quality assurance teams. What is frequently missing is mandate rather than skill.

The harder challenge here is organisational, not technical.

Websites that keep working

The session closed on a simple but telling observation.

Websites that work for invisible users tend to work better for everyone. They are clearer, more robust and easier to trust. They are also easier to maintain over time.

Seen this way, the question is not whether change is coming, nor how quickly it will arrive. The more useful question is how deliberately organisations choose to respond to what is already visible.

The web has solved many of these problems before. The difference now is that we are being asked to solve them again, without the luxury of delay.

The conversation continues

If reading about the discussion, or even being part of the call itself, is not quite enough, you are very welcome to engage more actively.

Our community is built around learning together. We compare notes, challenge assumptions, and explore how theory meets practice across roles, industries and regions.

You can download the slides (PPT) or even lean back and enjoy the entire recording.

At the close of the call, Tom also invited members who are curious to help shape the thinking to act as early reviewers of his forthcoming book, continuing the conversation in a more reflective, iterative way.

However you choose to engage, we are glad you are here and part of the journey.