Cognitive Architecture
While I find attempts to anthropomorphize AI highly problematic, Cognitive Architecture is a popular (albeit misleading) term applied to principles and practices for improving how GenAI models recognize patterns and organize/classify data, to improve the generation of complex outputs like text, images, music, etc.
Introduction
Cognitive architectures, particularly symbolic models like ACT-R or SOAR, use structured weighting and classification processes to inform decision-making and problem-solving. There are two main types of cognitive architectures:
Symbolic Cognitive Architectures: These are based on the idea that human cognition operates on symbols and rules, much like how a computer processes information.
Connectionist Cognitive Architectures: These are inspired by the biological networks of chemically connected neurons in the brain, and by extension the nervous system, that operate in parallel. This model uses distributed representations rather than symbolic rules.
Some architectures, like CLARION (Connectionist Learning with Adaptive Rule Induction ON-line), attempt to integrate both the symbolic and connectionist approaches.
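To make the contrast concrete, here is a toy sketch in Python. It is purely illustrative (the rule sets, feature names, and weights are invented; this is not the API of ACT-R, SOAR, or CLARION): the symbolic approach matches explicit if-then rules over symbols, while the connectionist approach stores its "knowledge" in numeric weights and produces graded activations.

```python
import math

# Symbolic: explicit production rules operating on discrete symbols.
def symbolic_classify(features: set) -> str:
    rules = [
        ({"has_feathers", "lays_eggs"}, "bird"),
        ({"has_fur", "gives_milk"}, "mammal"),
    ]
    for conditions, label in rules:
        if conditions <= features:  # fire the rule if all conditions hold
            return label
    return "unknown"

# Connectionist: no explicit rules; knowledge lives in learned weights,
# and the output is a graded activation rather than a symbolic label.
def connectionist_classify(features: dict, weights: dict) -> float:
    activation = sum(weights.get(f, 0.0) * v for f, v in features.items())
    return 1 / (1 + math.exp(-activation))  # squash to (0, 1)
```

A hybrid system in the spirit of CLARION would run both: the subsymbolic score suggests an answer from learned patterns, while the symbolic layer can explain or override it with explicit rules.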
To put this into context, let’s look at some of the aspects of human cognition these architectures are patterned after:
Memory: Cognitive architecture explores different means of retrieving past information as part of generating new content to improve consistency and coherence over longer narratives or dialogues.
Attention: GPT and BERT already use “attention” mechanisms to prioritize certain input tokens over others during text generation. Cognitive architecture extends that by mimicking human attention by focusing on more relevant information during each step in a multi-step process.
Learning: AI systems can adjust their algorithms and update outputs based on real-time feedback or evolving situations. Likewise, based on user preferences or expressed goals, generative models can improve their outputs.
Symbolic & Subsymbolic reasoning: Symbolic reasoning helps GenAI models represent abstract concepts and logic, while subsymbolic processes (like neural networks) focus on pattern recognition. Systems like CLARION integrate both symbolic and subsymbolic (connectionist) processes with the goal of generating narratives that are not only grammatically correct but also have rich content and deeper meaning.
“Creative” problem solving: By enhancing AI’s ability to deviate from established patterns while stopping short of hallucination, AI systems can potentially generate more innovative content by applying some of the common frameworks people use when attempting to think outside the box.
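The memory and attention aspects above can be sketched together in one toy retrieval step (plain Python, with invented vectors; not any real model's internals): each stored "memory" is weighted by its similarity to the current query, and the weighted blend is what informs the next generation step.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, memories):
    """Weight each stored memory by its similarity to the current query,
    mimicking how attention focuses on the most relevant past context."""
    scores = [sum(q * k for q, k in zip(query, mem["key"])) for mem in memories]
    weights = softmax(scores)
    # Blend the memories' values in proportion to their attention weight.
    blended = [0.0] * len(memories[0]["value"])
    for w, mem in zip(weights, memories):
        for i, v in enumerate(mem["value"]):
            blended[i] += w * v
    return weights, blended
```

The weights always sum to one, so the most relevant memory dominates the blend without the others being discarded entirely, which is one simple way consistency over longer dialogues can be approached.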
While these models are inspired by human mental processes, such as perception, memory, attention, problem-solving, decision-making, and learning, human cognition is more than symbols and rules. And the brain is far more complex than a network of neurons and neurotransmitters.
Enter design…
Design plays a crucial role in aligning these cognitive architectures with human thought processes, goals, and behaviors to provide meaningful and efficient interactions. Here are some examples where design can have an impact:
1. User-Centered Design Approach
This should come as no surprise, but it's worth repeating: making AI-driven systems align with natural human behavior will be critical for reducing user frustration and enhancing engagement. Design plays a vital role in capturing and codifying the interplay between people and the systems they are using. Designers will need to identify user goals, behaviors, and pain points in order to optimize, in situ, the cognitive system that results from these collaborative interactions, reducing cognitive load, minimizing errors, improving usability, and increasing the predictability and accuracy of the AI’s responses.
2. Clarifying Complexity
Design will be required to visualize the abstract complexities of these cognitive architectures, while providing users with the means to control and configure the underlying models and their outputs. In many cases it will be necessary to provide clear, actionable explanations of how the system generated its output or reached its decisions, making the inner workings of the system transparent, understandable, and adjustable by people.
3. Determining Memory and Attention Mechanisms
It will be necessary to allow users to determine what types of information the system should retain or forget, and when and which past interactions should be referenced in the current situation. Likewise, to ensure AI-generated outputs are aligned with the user’s intent and priorities, users will need to identify what information the system should take into consideration relevant to their task. For example, in a conversational agent, users could define when the system should recall past interactions or user data to make the interaction more seamless, or when it should reset, giving users direct control over the system's memory.
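One way such user-facing memory controls could be expressed is as an explicit policy object. This is a hypothetical sketch (the class and field names are invented for illustration): the user declares topics that must never be stored, a rolling retention window, and a reset action, and the agent's memory honors all three.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryPolicy:
    """Hypothetical user-configurable memory settings for a conversational agent."""
    forget_topics: set = field(default_factory=set)  # topics to never store
    max_turns: int = 20                              # rolling window of past turns

class AgentMemory:
    def __init__(self, policy: MemoryPolicy):
        self.policy = policy
        self.turns = []

    def remember(self, topic: str, text: str):
        if topic in self.policy.forget_topics:
            return  # honor the user's "never store" list
        self.turns.append((topic, text))
        self.turns = self.turns[-self.policy.max_turns:]  # enforce retention window

    def reset(self):
        self.turns.clear()  # user-triggered "start fresh"
```

The point is less the mechanism than the contract: what is retained, for how long, and what is excluded are all decisions the user can see and change, rather than opaque system behavior.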
4. Managing the cognitive load
While GenAI promises to simplify complex tasks, it will be necessary to return to first principles, like progressive disclosure, guided procedures, contextual help, etc., to ensure the system doesn’t overwhelm users with too much information or complexity. Cognitive architectures can replicate complex reasoning and decision-making processes, but they will only be accepted if they are also capable of clearly explaining how they made their determinations. Sometimes this will require longer descriptions; other times it could be addressed with shorthand notations, depending on the user’s level of trust and understanding of how the AI is functioning.
5. Feedback that feels organic
Cognitive architectures rely heavily on feedback loops for learning and improving interactions. Designers know that at times even a simple thumbs-up/down asks too much of users. Design will need to help develop new methods of collecting user feedback, perhaps based on user actions (or inaction), gestures, manual corrections to generated responses, and so on, to improve the long-term success of these systems.
At the same time, new preventative models for error handling will need to be designed to minimize user frustration while allowing users to provide feedback in alternative ways that the AI can use to learn and improve. These could include visual or auditory cues that signal the system’s confidence level or uncertainty, focusing the user’s attention and helping them gauge whether they can rely on the AI’s output or need to make adjustments before the system commits an erroneous action.
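A minimal version of that confidence signaling might be a mapping from a model's confidence score to a UI cue. The thresholds and cue names here are assumptions chosen for illustration, not values from any real product; in practice they would be tuned with users.

```python
def confidence_cue(confidence: float) -> str:
    """Map a model confidence score (0.0 to 1.0) to a hypothetical UI cue,
    so users can gauge whether to trust the output or review it first."""
    if confidence >= 0.9:
        return "none"       # high confidence: don't interrupt the flow
    if confidence >= 0.6:
        return "badge"      # subtle marker: "generated with some uncertainty"
    if confidence >= 0.3:
        return "highlight"  # draw attention to the uncertain span
    return "confirm"        # low confidence: require explicit user confirmation
```

The escalation matters: the system stays quiet when it is sure, and demands a confirmation step only before it would commit a low-confidence action on the user's behalf.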
6. Multi-tasking Interaction
Unlike people, AI systems can genuinely multi-task, performing multiple autonomous actions at the same time and across different systems. This will necessitate changes in how users interact with both the AI and the underlying systems it acts on. The challenge is compounded by AI systems that can handle multi-modal input and output (i.e., text, speech, images, gestures, etc.). Design will need to ensure seamless transitions between these different interaction modes. For instance, when a user switches from voice input to text, the system should "remember" the context and offer a fluid experience: adjusting its output to suit words written rather than spoken, and going further by inserting visuals to expedite communication and facilitate clarity and understanding.
Using Generative UX, design will also be able to create experiences that combine multiple systems and adapt to the user’s preferred mode of interaction (i.e., voice vs. text), taking into account the different levels of memory retention and contextual awareness each interaction model affords (i.e., voice vs. graphical).
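The mode-switching behavior described above reduces to one design decision: conversation context belongs to the session, not to the input mode. A rough sketch (class and field names are invented for illustration):

```python
class SessionContext:
    """Hypothetical cross-modal session: conversation history survives
    when the user switches between voice and text input."""

    def __init__(self):
        self.mode = "text"
        self.history = []

    def switch_mode(self, new_mode: str):
        # Only the input mode changes; history is intentionally preserved,
        # so the conversation continues seamlessly in the new mode.
        self.mode = new_mode

    def add_turn(self, content: str):
        # Tag each turn with its mode so output can be adapted later
        # (e.g., terser phrasing for voice, visuals for graphical UIs).
        self.history.append({"mode": self.mode, "content": content})
```

Recording which mode each turn arrived in also lets the system adapt its replies, since voice and graphical interactions tolerate different amounts of detail.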
7. Ethical Considerations
As cognitive architectures become more advanced, the practice of ethical design will be crucial to avoiding negative consequences, such as bias in decision-making, and to designing for inclusivity. This means ensuring the cognitive architecture supports diverse user needs and is free from biases that might affect different demographic groups, especially in sensitive domains like healthcare or finance. It also means providing users with control over the use of their personal information, ensuring transparency, accountability, and privacy. It is worth noting that “personal information” will expand to include things like personalized algorithms, agents, and even private LLM augmentations.
Conclusion
Cognitive architectures are simply computational models, described with familiar (albeit misapplied) human characteristics, giving researchers studying their mechanisms a common point of reference. These architectures don’t include things like emotions (i.e. love, fear, joy, anger, etc.), or influences such as the environment, past life events, or events taking place outside the system. They lack morality and values. And as yet, they make no attempt to model neurological differences, gender, or cultural diversity. Most importantly they exclude the role of our intuition, or “gut feeling”, and its impact on our decision making. Basically, all the things that influence our sense of place in the world, and that shape our ability to understand and react to events around us. But then they’re also not impacted by the lack of sleep, diet, stress, or the occasional squirrel.
It will be design’s job to bridge those gaps; to include the humanity in these systems. Ensuring these architectures are aligned with real human cognition, the mental models, behaviors, needs, and emotions that come with being human, so that AI systems can enhance our creativity, productivity, and decision-making rather than attempt to compete with them.