Design for Freedom or Design for Safety? What Motorways Can Teach Us About Service Systems
Feb 6, 2026
I recently witnessed a driver on the motorway "undertaking" (passing on the left), weaving through traffic to get ahead. It was dangerous, yet they seemed completely calm. They weren't just breaking a rule; they were breaking the logic of the entire road system.
This incident sparked a conversation about how we design systems, whether for roads or services. In the UK (and much of the West), we design for User Agency (freedom). In parts of Asia, systems often prioritise Error Prevention (safety).
But here is the catch: You cannot give users freedom if you don't teach them "Why" the rules exist.
The Service Gap: A Licence to Guess
If we view the UK Driving Licence as a "Service Onboarding," it has a critical flaw. Since 2018, learner drivers have been able to take lessons on motorways, but the lessons are voluntary and motorways are not part of the test.
This means a user can "pass" the onboarding process without ever encountering the system's most dangerous feature (the motorway). Then, on day one as a fully licensed user, they are given full, unsupervised access to that feature.
Imagine a software platform that unlocks "Admin Level" controls for a new user who has never read the manual. That is the UK motorway system. The driver I saw undertaking likely didn't know why it was wrong; they just knew it was faster.
The "Why" vs. The "What"
Service design often focuses on the "What" (the instruction):
Instruction: "Do not pass on the left."
User Reaction: "But the left lane is empty, and I am in a hurry. This rule is inefficient."
This is where education fails. It teaches compliance, not understanding. To design a safe system based on freedom, we must teach the "Why" (the System Logic):
The Logic: "Drivers have a larger blind spot on their passenger side. The entire system relies on the prediction that fast cars appear on the right. If you appear on the left, you are invisible."
User Reaction: "If I pass on the left, the other driver physically cannot see me and might crash into me."
When you explain the "Why," the rule stops being an arbitrary restriction and becomes a tool for self-preservation.
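Extending the post's own software analogy, one way to build this into a service is to refuse to ship a bare instruction at all. The sketch below is hypothetical; the `Rule` type and `explain` helper are invented for illustration, not a real API:

```python
# Hypothetical sketch: pairing every "What" (instruction) with its
# "Why" (system logic), so the service can never issue a bare "don't".

from dataclasses import dataclass


@dataclass
class Rule:
    what: str  # the instruction shown to the user
    why: str   # the system logic that makes the instruction matter


UNDERTAKING = Rule(
    what="Do not pass on the left.",
    why=("Drivers have a larger blind spot on the passenger side; "
         "the system assumes fast traffic appears on the right, so a "
         "car passing on the left is effectively invisible."),
)


def explain(rule: Rule) -> str:
    # The message teaches understanding, not just compliance.
    return f"{rule.what} Why: {rule.why}"


print(explain(UNDERTAKING))
```

The design choice is small but deliberate: because `why` is a required field, a rule without a rationale cannot even be constructed.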
Prevention vs. Permission
This tension mirrors a cultural difference in system design.
Prevention Model (Common in East Asia):
The system treats the user as a variable to be managed rather than trusted. Design might use physical barriers or strict procedural blocks (Poka-yoke, or mistake-proofing) to make undertaking physically impossible.
The advantage is that it all but eliminates the error. However, it can feel patronising to expert users who are blocked from making efficient deviations, and it risks fostering "passive compliance": users stop thinking about the logic of the situation and rely entirely on "behaving" correctly within the system's rigid constraints.
Permission Model (Common in the West/UK):
The system treats the user as an adult. Nothing physically stops you from undertaking (there is no barrier), but you have a duty not to.
This model is appealing because it respects user freedom. It is also fragile, because it relies on Shared Mental Models: if a user doesn't share the mental model (e.g., "undertaking disrupts the flow"), the freedom becomes chaotic.
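Translated into the post's software analogy, the two models might look like this for a single risky action. This is a hypothetical sketch; the function names and messages are invented for illustration:

```python
# Hypothetical sketch contrasting the two design models for one
# risky action. Not a real API.

from typing import NoReturn

RATIONALE = ("Undertaking disrupts the flow: the other driver cannot "
             "see you on their passenger side.")


def attempt_undertake_prevention() -> NoReturn:
    # Prevention model (Poka-yoke): the system makes the error
    # impossible. The user never gets to decide, and so never has
    # to understand why.
    raise PermissionError("Blocked by design: undertaking is not possible.")


def attempt_undertake_permission(understands_why: bool) -> str:
    # Permission model: nothing physically stops the action; the
    # system explains the logic and trusts the user's judgement.
    if not understands_why:
        return f"Warning: {RATIONALE}"
    return "Proceeding: the user shares the system's mental model."
```

Notice where the burden sits: the prevention version needs no education at all, while the permission version is only as safe as the rationale the user has absorbed.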
Conclusion: Reflection for Service Designers
The dangerous driver wasn't necessarily malicious; they were simply poorly onboarded. They were operating a high-stakes service with a "User Manual" that had missing pages.
As service designers, this leaves us with a critical balancing act. Should we lean toward the Prevention Model, building "Asian-style" guardrails that physically stop errors? That guarantees safety, but risks treating users like children who stop thinking for themselves. Or should we stick to the Permission Model, offering the "Western" freedom to make choices, which respects user agency but demands a much higher standard of education?
I believe the answer lies in bridging the gap. If we choose to design systems that allow freedom, we have a moral obligation to improve the onboarding. We cannot just say "Follow the rules." We must reveal the system's logic: the "Why" behind the "What."
