John Miedema

Essays on mindfulness meditation, cognitive technology, and climate politics 🐌


    Make Me Think

    Posted on May 23, 2025

    Designing for Reflection in the Age of AI

    Don’t Make Me Think by Steve Krug is a foundational book on web usability that emphasizes designing websites and apps so intuitive that users barely have to think to use them. Krug argues that good design should be self-evident, relying on clear visual hierarchy, familiar conventions, and minimal distractions to help users achieve their goals quickly. He stresses that users skim rather than read, make quick decisions, and often muddle through rather than follow instructions, so interfaces should be simple, forgiving, and focused on usability testing rather than perfection.

    Krug’s principles proved foundational to the user-centered design ethos that shaped Web 2.0: simplicity, clarity, and minimal cognitive effort. These ideas influenced the rise of social media platforms that prioritized ease of use, instant feedback, and addictive interfaces. As a result, users could engage effortlessly, often mindlessly. While this made the web more accessible, it also ushered in an era where critical engagement was displaced by frictionless scrolling and superficial interactions. Arguably, the mantra of “don’t make me think” became a double-edged sword: it enhanced usability while encouraging passive consumption over thoughtful participation.

    As the unintended consequences of seamless digital experiences become more apparent—addiction, misinformation, and disconnection—designers are increasingly recognizing the value of friction in user experience. These intentional pauses or interruptions in automation re-engage the user’s attention and bring their reflective mind back into the loop. Rather than optimizing every interaction for speed and ease, friction-based design introduces moments for choice, context, or reconsideration. Examples include double-checking before posting, taking a mindful pause before continuing a scroll, or offering deeper context behind a notification. By making users think—not in the obstructive way Krug warned against, but in a conscious and intentional way—friction becomes a tool for ethical, human-centered design in an age that too often rewards mindless engagement.
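    The “pause before posting” pattern above can be sketched in a few lines. This is a minimal illustration, not any platform’s actual implementation; the `Post` class and `confirm_share` function are hypothetical names, and the `ask` callback stands in for whatever dialog a real UI would present.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    article_opened: bool  # did the user actually open the linked article?

def confirm_share(post: Post, ask) -> bool:
    """Introduce a reflective pause before sharing.

    `ask` is a callable that shows a prompt and returns True/False,
    so the friction logic can be exercised without a real UI.
    """
    if not post.article_opened:
        # Friction point: the user is about to share something unread.
        return ask("You haven't opened this article. Share anyway?")
    return ask("Ready to post?")

# A real interface would pass a dialog; here a stub declines the share.
shared = confirm_share(Post("Hot take!", article_opened=False),
                       ask=lambda prompt: False)
```

    The point of routing every share through `ask` is that the moment of reflection is built into the flow itself, rather than bolted on afterward.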

    Good friction in payment experiences introduces intentional pauses that enhance user safety, trust, and decision-making. For example, confirmation prompts before finalizing a purchase help prevent accidental or impulsive spending, while address and card verification steps add a layer of security that reassures users. Multi-factor authentication, especially for large or unusual transactions, introduces a brief delay that significantly reduces fraud risk. Review screens that summarize items, costs, and terms give users a final chance to catch errors or reconsider. Even budget alerts or spending warnings can nudge users toward more mindful financial behavior. These design choices slow the process just enough to bring the user’s conscious mind back into the loop, turning friction into a feature, not a flaw.
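    Two of the payment-friction patterns above, a review screen and step-up authentication for large transactions, can be sketched as follows. The function names and the $500 threshold are assumptions for illustration only, not a reference to any real payment system.

```python
# Illustrative threshold above which extra verification kicks in (assumed).
MFA_THRESHOLD = 500.00

def checkout(items, verify_identity, confirm) -> bool:
    """Run a purchase through two deliberate friction points.

    `items` maps item names to prices; `confirm` and `verify_identity`
    are callables standing in for UI prompts and an MFA check.
    """
    total = sum(items.values())
    # Review screen: summarize items and total before committing.
    summary = ", ".join(f"{name}: ${price:.2f}" for name, price in items.items())
    if not confirm(f"Review order ({summary}). Total: ${total:.2f}. Proceed?"):
        return False
    # Good friction: large transactions require multi-factor authentication.
    if total >= MFA_THRESHOLD and not verify_identity():
        return False
    return True
```

    Note that both checks sit in the purchase path itself, so there is no way to complete a large transaction without passing through the pauses.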

    Some critics claim that AI will do our thinking for us and ultimately make us dumber, pointing out that AI systems often hallucinate, confidently producing false or fabricated information. For example, an AI might generate a plausible-sounding academic citation that doesn’t actually exist. Minimizing such errors is crucial, but it’s also worth noting that error is not a disqualifier of intelligence—it’s part of it. In Knowledge and the Flow of Information, philosopher Fred Dretske argues that the capacity to misrepresent is essential to genuine representation. A mental or informational state can only count as representing something if it can get it wrong. We accept that humans err and build systems (legal, scientific, educational) that account for this. So why not extend the same adaptive approach to machines?

    The issue arises from holding AI to an outdated “command” model of automation, where computers are expected to execute perfectly defined tasks with precision. But AI belongs to a “collaborate” model: it processes and proposes, while the human remains in the loop, interpreting, validating, and deciding. AI can do much of the heavy lifting of information processing, but ultimate accountability still rests with people. In this light, the challenge of designing with AI isn’t to eliminate thinking, but to prompt it at the right moments. A fitting design ethos for our time might flip Krug’s classic title on its head: Make Me Think.
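    The collaborate model can be expressed as a small control-flow sketch: the AI proposes, and nothing is committed until a human approves, edits, or rejects. `draft_reply` and `human_review` are hypothetical stand-ins; a real system would call a language model and present a review interface.

```python
def draft_reply(prompt: str) -> str:
    # Stand-in for a model call; a real system would query an LLM here.
    return f"Draft answer to: {prompt}"

def collaborate(prompt: str, human_review):
    """AI drafts a proposal; the human in the loop decides its fate.

    `human_review` returns 'approve', 'reject', or an edited replacement.
    """
    proposal = draft_reply(prompt)
    verdict = human_review(proposal)
    if verdict == "approve":
        return proposal       # human accepts the AI's proposal as-is
    if verdict == "reject":
        return None           # nothing leaves the loop without consent
    return verdict            # the human's edited version takes precedence
```

    The design choice worth noticing is that the human review is not optional middleware; it is the only path by which a proposal becomes an outcome, which is where accountability stays with people.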

    Last Updated on May 23, 2025 | Published: May 23, 2025

    Art & Technology
