Notes from a Cognitive Solutions Architect

Software is no longer just something we build and run—it’s something we increasingly teach, guide, and collaborate with. As intelligent systems grow more human-like in language, learning, and adaptation, our design choices must evolve beyond traditional architecture. This essay collects principles I’ve gathered over decades in software and AI, distilled from practice into a working model I call Cognitive Application Architecture.
From Cognitive Science to Cognitive Solutions
Cognitive architecture began as an academic discipline. At Western University in the late eighties, I studied psychology during a time of debate between Allan Paivio and Zenon Pylyshyn. Paivio argued that the mind processes information through both visual and verbal channels (Dual Coding Theory). Pylyshyn, by contrast, rejected mental imagery as explanatory, advocating instead for a symbolic, rule-based view of cognition. That computational view went on to shape how many intelligent systems are built today.
Over time, new models emerged. Neural networks reframed cognition as distributed and probabilistic. Embodied cognition emphasized the importance of physical interaction and environmental context. While these theories have mostly remained in academia, I’ve worked to bring them into the practice of software design. With a background in psychology, AI, UX, and more recently EEG neurotechnology, I now refer to my role as a Cognitive Solutions Architect. It’s a title that reflects not only what I build, but how I think.
What Makes Cognitive Applications Different
Cognitive applications pursue different goals than traditional software. They must appear to understand, remember, adapt, and learn. They must handle ambiguity, track context, and evolve over time. Designing such systems requires a departure from conventional architecture—one shaped by the constraints and dynamics of human cognition.
This kind of design isn’t just about picking the right tools. It’s about making deliberate architectural decisions that reflect how intelligence works—messy, adaptive, partial. Over the years, I’ve learned that building cognitive systems demands a different kind of thinking, one that draws from psychology, design, and systems theory. What follows are the core principles I use when creating software that isn’t just functional, but cognitive.
Beyond Code: What Cognitive Applications Require
The rise of cognitive applications marks a shift in how we build software. Traditional development emphasized explicit control—structure, efficiency, and predictability. But intelligent systems must do more. They must understand, adapt, and sometimes surprise us. Designing for cognition means rethinking foundational principles, starting with how we relate to machines.
Teaching vs Programming
The old model was programming: tell the machine exactly what to do, step by step. But with today’s generative models, we’re teaching systems through examples, corrections, and intent. A single prompt can generate working code. Our role becomes less about control and more about guidance—setting expectations, offering context, shaping behavior. In this new mode, architecture isn’t just logic; it’s pedagogy. We’re not programming systems. We’re mentoring them.
Stateful vs Stateless
Once you stop commanding and start teaching, memory becomes essential. Traditional apps are stateless: each request stands alone. That’s great for scalability, but terrible for cognition. A chatbot, for instance, can’t seem intelligent if it forgets what you said two minutes ago. Cognitive systems need short-term memory to track context—who “he” refers to, or why the user is anxious. Some of this lives in databases, but much is kept in-session, enabling nuance and responsiveness. Memory doesn’t just make the system smarter. It makes it feel more human.
Good Friction vs Bad Friction
Classic usability tells us to eliminate friction—make things seamless. But seamlessness can flatten thought. In Make Me Think, I argued that a little friction, used well, prompts reflection. In cognitive apps, friction becomes a cue: “Pay attention. This matters.” A writing assistant that asks if you’re sure about a phrasing nudges clarity. A smart form that questions a contradiction invites rethinking. Good friction slows the user just enough to create intention. It’s the point where interaction becomes cognition.
Good Error vs Bad Error
In conventional software, error is failure. The system didn’t follow instructions. But cognitive systems work in a fuzzier space. Language models make mistakes—sometimes misleading, sometimes inspired. In the right context, a small hallucination can be a creative leap. A smart assistant that misunderstands might still reveal a useful angle. The goal is not error-free execution but graceful failure. Good errors spark ideas. They open doors. In cognitive design, some errors are not bugs—they’re opportunities.
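One way to make "graceful failure" concrete is to change what happens when a model's output fails validation. A conventional pipeline discards it; a cognitive one can relabel it. The sketch below is a hypothetical wrapper (the function names and result shape are my own assumptions), where `generate` and `validate` stand in for any model call and any checker:

```python
def graceful(generate, validate, prompt: str) -> dict:
    """Return validated output as an answer, and invalid output as a
    clearly labeled suggestion rather than a silent failure.

    generate: callable taking a prompt and returning text (e.g. a model call).
    validate: callable returning True if the text passes whatever check applies.
    """
    text = generate(prompt)
    if validate(text):
        return {"status": "answer", "text": text}
    # The "good error" path: keep the output, but flag its epistemic status.
    return {
        "status": "suggestion",
        "text": text,
        "note": "Unverified output; it may still contain a useful angle.",
    }
```

The point is not that every hallucination is valuable, but that the architecture should distinguish "wrong and discarded" from "unverified and offered", so the creative leaps described above survive long enough to be judged.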
Asymmetry vs Symmetry
Engineers prize symmetry—clean graphs, stable curves, predictable flows. But cognition thrives on asymmetry. A pause in speech, a spike in data, a strange word choice—these are where insight lives. In one neurotech project, EEG data from meditative states showed sudden bursts. Initially dismissed as noise, they turned out to signal breakthroughs in concentration. Cognitive systems must be tuned to notice the anomaly, not smooth it away. Intelligence isn’t always found in the norm. It often hides in the deviation.
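Tuning a system to notice the anomaly rather than smooth it away can be as simple as a rolling z-score detector. This is an illustrative sketch, not the EEG pipeline from the project above: it flags samples that jump well clear of the trailing baseline instead of averaging them into it.

```python
from statistics import mean, stdev

def find_bursts(samples: list[float], window: int = 8,
                threshold: float = 3.0) -> list[int]:
    """Return indices of samples that spike far above the trailing baseline.

    A smoothing filter would suppress exactly these points; here they are
    surfaced as the signal, not the noise.
    """
    bursts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Flag the point if it sits more than `threshold` deviations above
        # the recent past (sigma > 0 guards against a perfectly flat window).
        if sigma > 0 and (samples[i] - mu) / sigma > threshold:
            bursts.append(i)
    return bursts
```

A detector like this embodies the asymmetry principle directly: the deviation is the output, and the "clean" samples are merely context for judging it.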
Toward a More Reflective Intelligence
These patterns aren’t just theoretical. They come from decades of navigating the tension between what machines do well and what makes human thinking rich. As we enter an age where cognition is distributed—part human, part machine—the architectures we build will shape not just our tools, but our ways of thinking.
Cognitive application architecture sits at the frontier between computation and conversation, between data and meaning. It’s a new craft—and still in flux—but its promise is profound: not just software that works, but software that learns, listens, and thinks with us.
Published: June 6, 2025