14 Questions Design Leaders Need to Ask About GenAI
As we begin what I am sure will be a very interesting year, one filled with socio-economic, environmental, geo-political, and cultural shifts, Generative AI continues to capture everyone’s attention. Given it will no doubt play a central role in many of those changes, design leaders need to navigate the complex interactions between people and AI, ensuring that the technology is useful, ethical, and intuitive. To help gain some clarity, here are 14 critical questions I feel every design leader should be asking. I have broken them out into two buckets: People and Ethics.
Understanding the Human Context
The current situation brings to mind two of my favorite quotes. The first is from Steve Jobs: "It's not the customer’s job to know what they want." The other is from Jeff Goldblum's character Ian Malcolm in Jurassic Park: "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should." As the people responsible for defining Generative AI solutions, design leaders need to have a clear point of view on both. The people we are designing for really have no idea what to realistically ask for from Generative AI, while the scientists and engineers we work with are eager to see what this technology can do.
Who are you building this for? What are their aspirations, needs, expectations, and comfort levels (i.e. trust) with AI technologies?
GenAI’s potential varies directly with the user’s proficiency. Understand whether the people you are building for are developers, content creators, or business professionals; how comfortable they are using AI and managing generated outputs; and how strong their critical-thinking skills are for assessing the veracity of that output.
What are their most common frustrations or concerns about interacting with GenAI?
Issues like trust, accuracy, and reliability are common. But dig deeper: do these people fear losing creative control, or finding themselves in a bubble? Understanding these frustrations is the first step in ensuring your designs alleviate these human concerns.
What is the most tangible value GenAI can provide for these people?
Is it solving a pain point? Is it empowering them with new capabilities? Can you provide them with clear provenance for the information they are given?
What level of transparency is needed in the user interface for people to trust the GenAI-generated content your design will provide?
Go beyond explaining how the AI works: provide provenance for the information, a pedigree for the original authors, and clarity about copyrights and IP protections.
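To make that concrete, here is a minimal sketch of what attaching provenance to generated content could look like. The field names (author pedigree, license, source URL) are illustrative assumptions, not a standard or any specific product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class SourceReference:
    """One source behind a generated answer (illustrative fields)."""
    title: str
    author: str    # pedigree: who originally created the content
    url: str       # where the user can verify the original
    license: str   # copyright / IP terms governing reuse

@dataclass
class GeneratedContent:
    """A generated response paired with the provenance a UI could surface."""
    text: str
    model_version: str
    sources: list[SourceReference] = field(default_factory=list)

    def provenance_summary(self) -> str:
        """Human-readable attribution line to display beneath the response."""
        if not self.sources:
            return "No sources available for this response."
        return "Sources: " + "; ".join(
            f"{s.title} by {s.author} ({s.license})" for s in self.sources
        )
```

Even a simple attribution line like this gives people something they can verify, which is where trust starts.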
How can you maintain their agency while enhancing their productivity with AI-driven features?
How do your designs help users guide, correct, and improve the AI's outputs? How are you giving people control over the final outcome, ensuring they feel empowered by the technology rather than displaced by it? How can you create seamless handoffs between human input and AI-generated content? Explore ways to allow users to easily integrate, edit, or expand on AI-generated suggestions without disrupting their workflow. Are you striking a balance between enabling people to achieve their aspirations, augmenting their abilities so they can do their jobs better, and automating the tasks they dislike doing?
Are these people looking for personalized outcomes? If so, how are you balancing that with simplicity?
Personalizing AI outputs to an individual’s preferences can improve the experience but adds complexity in areas like provenance, agency, and productivity. How can you make customization intuitive to reduce the cognitive burden it introduces?
Are you designing your GenAI experiences to make them predictable and consistent across different devices and platforms?
Whether on desktop, mobile, or other devices, are you providing seamless, predictable experiences, appropriate to each device, that ensure you are meeting your users’ expectations across platforms?
Ethical and Responsible Use of AI
GenAI requires careful monitoring to prevent misuse, bias, and exclusion, and to ensure privacy, fair use and attribution of content, and of course adherence to intellectual property laws. At the same time, important consideration needs to be given to the environmental impact of GenAI’s power consumption, as well as to the inequalities it creates, both in the capital required to build GenAI and in the cost to access these solutions.
How are you ensuring your GenAI produces ethical, unbiased, and socially responsible outcomes?
As a design leader, you should be asking how the system mitigates biases in its generated content and ensures inclusivity, fairness, and cultural sensitivity, while also clarifying the underlying business model behind the service. Anticipate situations where GenAI may generate problematic responses. How will your design respond in those instances where the AI might produce harmful or inaccurate content? How will users report these, and what safety nets (content moderation, disclaimers, etc.) are you putting into place?
What guardrails are you designing to prevent the misuse of GenAI applications?
Have you applied “red team” thinking to explore how best to prevent malicious use, such as generating misleading information or harmful content? How does your design ensure it is only used ethically and responsibly?
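As a rough illustration of the kind of safety net described above, the sketch below pairs a content check with a disclaimer and a user-report hook. The `generate` and `moderate` functions and the blocklist are hypothetical stand-ins for a real model call and a real moderation service.

```python
BLOCKED_TERMS = {"example-harmful-term"}   # placeholder for a real content policy

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a GenAI model."""
    return f"Generated response to: {prompt}"

def moderate(text: str) -> bool:
    """Return True if the text passes the (toy) content policy."""
    return not any(term in text.lower() for term in BLOCKED_TERMS)

def respond(prompt: str) -> str:
    """Generate a response, but never show content that fails moderation."""
    text = generate(prompt)
    if not moderate(text):
        # Fall back rather than displaying harmful or policy-violating output.
        return "This response was withheld by our content policy."
    # A visible disclaimer keeps expectations honest.
    return text + "\n\nAI-generated content; it may contain errors."

def report_issue(response_id: str, reason: str, report_log: list[dict]) -> None:
    """User-facing report hook: record flagged output for human review."""
    report_log.append({"response_id": response_id, "reason": reason})
```

The important design decision is not the blocklist itself but the fallback behavior and the reporting path, both of which your users will experience directly.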
How will we ensure users’ data privacy while utilizing GenAI?
GenAI applications often rely on people sharing personal data to create tailored results. Is your design transparent about both the data being collected and how it’s used? Does your design allow people to control their data, and even remove it from the system?
What elements are you adding to your design to help people feel secure in sharing their data with your AI system?
Trust is built through transparency, so it's important to ask what steps you are taking to give people clear and comprehensive insight into exactly how their data will be used, stored, or shared before they share it.
How can we design effective onboarding experiences for users unfamiliar with GenAI?
Provide clear instructions, tutorials, or in-context help that guide users in understanding how to leverage the AI’s capabilities without feeling overwhelmed.
What educational tools or tutorials should we build to help users understand the limits and potential of GenAI?
Help users grasp what the AI can and can’t do, reducing frustration and setting appropriate expectations.
Metrics and Success Criteria
Design leaders should ask these questions to prepare for the unique challenges of creating GenAI-driven user experiences. But these questions are not “one and done”; they should be revisited continually throughout the development process. Transparency, trust, ethical design, empowerment, augmentation, and automation will all continue to evolve as both the capabilities and the pervasiveness of the technology expand. It’s important that you put key performance indicators (KPIs) in place to allow you and your team to track your progress against these questions. It is also important to track both the functional impact (productivity, efficiency) and the emotional impact (trust, satisfaction, enjoyment, and anxiety) your designs have on the people using your GenAI solution. So the final question is simply this: how are you tracking your progress in addressing these questions?
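As one possible sketch of how a team might track those indicators release over release, assuming illustrative metric names drawn from the functional and emotional categories above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GenAIExperienceKPIs:
    """Snapshot of functional and emotional indicators for one release (illustrative)."""
    snapshot_date: date
    # Functional impact
    task_completion_rate: float     # share of tasks finished with AI assistance
    median_minutes_saved: float     # time saved per task versus a baseline
    # Emotional impact (e.g. survey scores on a 0-100 scale)
    trust_score: float
    satisfaction_score: float
    reported_anxiety_score: float

def kpi_deltas(earlier: GenAIExperienceKPIs, later: GenAIExperienceKPIs) -> dict[str, float]:
    """Simple deltas a design team could review at each checkpoint."""
    return {
        "task_completion_rate": later.task_completion_rate - earlier.task_completion_rate,
        "median_minutes_saved": later.median_minutes_saved - earlier.median_minutes_saved,
        "trust_score": later.trust_score - earlier.trust_score,
        "satisfaction_score": later.satisfaction_score - earlier.satisfaction_score,
        "reported_anxiety_score": later.reported_anxiety_score - earlier.reported_anxiety_score,
    }
```

However you structure it, the point is the same: pair the functional numbers with the emotional ones, and review both every time the product or the underlying models change.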