From: aidotengineer
Alex Les, VP of Data Science and AI at Huge, discusses the “invisible users, invisible interfaces” concept, focusing on how AI simulation can accelerate design and address the prevailing AI trust gap [00:00:00]. The discussion covers the current state of UX and AI, where it could be, and a proposed method to get there [00:00:19].
The Current State: A Trust Gap
Currently, AI faces a significant trust gap [00:00:35]. Research from December 2024 by Edelman indicates that only 32% of US adults trust AI, and merely 44% of adults globally feel comfortable with how businesses are using AI [00:00:38]. This lack of trust is largely attributed to “AI slop” [00:00:56].
Understanding “AI Slop”
AI slop refers to instances where users encounter Generative AI (GenAI) failures in websites, products, or interfaces that provide incorrect or nonsensical information [00:00:59]. Examples include web searches suggesting one should “eat rocks every day” or websites offering cars for $1, claims users instinctively know cannot be true [00:01:13]. This often results from creators indiscriminately “stuffing AI chatbots” into everything and misleadingly telling users it’s “magic” [00:01:25].
The True “Magic” of GenAI for UX
Despite these issues, Cassie Kozyrkov points out that the real “magic” of GenAI lies in its potential as a UX revolution: users can now communicate with a machine learning model in natural language, a capability previously unavailable [00:01:31]. This opens opportunities to revisit UX first principles [00:01:53].
Leveraging AI Simulation to Bridge the Trust Gap
Don Norman’s concept of “invisible interfaces” — software so seamless and intuitive that users forget they are using it — serves as an ideal for building user experiences with AI [00:02:18]. The opportunity lies in using AI to design truly magical interfaces, not by superficial chatbot integration, but by accelerating need finding [00:02:31].
The New Design Life Cycle with AI Acceleration
Traditional data-driven design processes, which collect qualitative data, quantitative data, and ethnographic observations to guide prototyping, predate large language models like ChatGPT [00:03:33]. The proposed approach instead empowers designers to work with “invisible users” through AI simulation [00:03:54]. This turns passive data artifacts into active participants in the design process, giving designers a mini-feedback cycle that sharpens need finding and yields better designs [00:04:01].
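To make that mini-feedback cycle concrete, here is a minimal sketch of one loop: a simulated user critiques a design description, and the critique is folded back into the next iteration. The talk does not prescribe tooling, so the OpenAI Python SDK, the model name, and the prompt wording below are all illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any chat-capable backend would do

def twin_feedback(persona: str, design: str) -> str:
    """Ask a simulated user (an "intelligent twin") to critique a design description."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": f"You are this user: {persona}. Critique interfaces strictly from their point of view."},
            {"role": "user",
             "content": f"Evaluate this design and list your top pain points:\n{design}"},
        ],
    )
    return resp.choices[0].message.content

def revise(design: str, feedback: str) -> str:
    """Fold simulated feedback back into the design description (one turn of the cycle)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Revise this design to address the feedback.\n\nDesign:\n{design}\n\nFeedback:\n{feedback}"}],
    )
    return resp.choices[0].message.content

design = "Homepage wireframe: hero banner, global nav, live-scores widget."
for _ in range(3):  # the mini-feedback cycle: simulate, critique, revise
    design = revise(design, twin_feedback("a casual sports fan, new to the league", design))
print(design)
```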
This new process resembles existing need finding but introduces key differences in components and workflow (a code sketch follows the list) [00:04:32]:
- Defining Audiences: Start with data representing target audiences, such as demographic, psychographic, and contextual data, to simulate their real-world behaviors [00:05:06].
- Intent Mapping with Intelligent Twins: This data is transformed into active simulations called “intelligent twins,” which represent user behaviors, desired outcomes, needs, and motivations [00:05:34]. These twins become active participants in the design simulation [00:05:52].
- Task-Focused Evaluation: Intelligent twins can be briefed to evaluate interfaces based on specific tasks, similar to how human designers conduct heuristic analysis [00:05:56].
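The three components above can be read as a small pipeline: audience data in, briefed twin out, task verdict back. Below is a minimal sketch under the same assumptions as before; the dataclass fields and prompts are illustrative, since the talk describes the workflow rather than an implementation.

```python
from dataclasses import dataclass

from openai import OpenAI

client = OpenAI()

@dataclass
class IntelligentTwin:
    # Step 1: the audience data the twin is built from
    demographics: str
    psychographics: str
    context: str

    def brief(self) -> str:
        # Step 2: intent mapping, turning the data into a role the model can play
        return (
            f"You are a simulated user. Demographics: {self.demographics}. "
            f"Motivations and attitudes: {self.psychographics}. "
            f"Context of use: {self.context}."
        )

def evaluate_task(twin: IntelligentTwin, interface: str, task: str) -> str:
    # Step 3: task-focused evaluation, analogous to a heuristic analysis
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": twin.brief()},
            {"role": "user",
             "content": (f"On this interface:\n{interface}\n"
                         f"Attempt the task: {task}. "
                         "Report COMPLETED or FAILED, then describe any friction you hit.")},
        ],
    )
    return resp.choices[0].message.content

casual_fan = IntelligentTwin(
    demographics="28, urban, new to basketball",
    psychographics="curious but easily overwhelmed; watches highlights only",
    context="browsing on a phone during a commute",
)
print(evaluate_task(casual_fan,
                    "league homepage with a mega-menu and a live-scores carousel",
                    "find tonight's game schedule"))
```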
Case Study: Global Sports Website Audit
To demonstrate the scale and speed advantages of this simulated methodology, a global audit of sports websites was conducted [00:06:23].
- Personas: Two distinct audiences were simulated: a “casual fan” who is new to sports and a lifelong, savvy “super fan” [00:07:06].
- Tasks: Across three sports websites (basketball, the Olympics, the English Premier League), 72 AI-simulated task runs were performed, covering three categories (navigation, information architecture, and fan engagement) with four tasks per category; two personas × three sites × three categories × four tasks yields the 72 runs [00:07:18].
- Findings:
- Initial navigation tasks generally performed well across all sites [00:08:12].
- However, performance dropped significantly as fans delved deeper into content browsing, information architecture, and engagement pathways [00:08:29].
This audit highlights user pain points, especially in areas beyond the initial interaction, and can help repair the AI trust gap by identifying and solving genuine user problems rather than shipping superficial AI features [00:08:53]. The methodology allows insights to be rolled up at a high level or explored granularly, enabling the creation of focused and comprehensive design briefs [00:09:06].
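At this scale, the audit reduces to a run matrix plus aggregation. The sketch below enumerates the 72 persona/site/category/task combinations and rolls completion rates up by category; the run_simulated_task stub stands in for the twin evaluation sketched earlier, since the talk reports findings rather than code.

```python
from collections import defaultdict
from itertools import product

personas = ["casual fan", "super fan"]
sites = ["basketball", "olympics", "premier_league"]
categories = ["navigation", "information_architecture", "fan_engagement"]
TASKS_PER_CATEGORY = 4

def run_simulated_task(persona: str, site: str, category: str, task_id: int) -> str:
    """Stand-in for the intelligent-twin evaluation sketched earlier."""
    return "COMPLETED"  # a real run would brief a twin and parse its verdict

# two personas x three sites x three categories x four tasks = 72 simulated actions
runs = list(product(personas, sites, categories, range(TASKS_PER_CATEGORY)))
assert len(runs) == 72

by_category = defaultdict(list)
for persona, site, category, task_id in runs:
    verdict = run_simulated_task(persona, site, category, task_id)
    by_category[category].append(verdict == "COMPLETED")

# High-level roll-up: completion rate per task category...
for category, outcomes in by_category.items():
    print(f"{category}: {sum(outcomes) / len(outcomes):.0%} completed")
# ...while granular exploration is a matter of filtering `runs` by persona or site.
```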
Future Directions and Limitations
Design tooling is accelerating: Anthropic’s Model Context Protocol (MCP), for example, shows potential integrations that turn Figma prototypes into code components (React, Node.js), making new designs easier to create than ever [00:10:06]. That ease makes it all the more important to focus on the “why” (the strategy and the problem to solve for users) rather than just the “how” of design [00:10:36].
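For readers curious what such an integration looks like at the protocol level, here is a minimal client sketch using the official MCP Python SDK. The figma-mcp-server package and the export_component tool are assumed names, since the talk only gestures at the Figma-to-code integration.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# "figma-mcp-server" is an illustrative package name; the talk names no concrete server.
params = StdioServerParameters(command="npx", args=["-y", "figma-mcp-server"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server exposes
            print([tool.name for tool in tools.tools])
            # Ask the (hypothetical) server to emit a React component from a Figma node
            result = await session.call_tool(
                "export_component", arguments={"node_id": "1:23", "target": "react"}
            )
            print(result.content)

asyncio.run(main())
```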
Limitations and Improvements
This methodology is currently experimental and in its early stages [00:10:53]. Key areas for future improvement include (a config sketch follows the list):
- Reproducibility: Standardizing parameters like briefing instructions, simulated audience dimensions, audit runs, and task completion/failure metrics in a code repository [00:11:00].
- Validation: Applying a test-and-control methodology to isolate the strengths of intelligent twins for design need finding [00:11:24]. This will clarify how intelligent twins can complement human teams across various industries, geographies, and domains [00:11:35].
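One way to standardize those parameters in a repository is a single versioned config object. Below is a sketch in which every field name and default is an assumption about what a team might pin down.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditConfig:
    """Versioned parameters for a reproducible simulated audit (all names illustrative)."""
    briefing_template: str = (
        "You are a simulated user. Demographics: {demographics}. "
        "Motivations: {psychographics}. Context: {context}."
    )
    audience_dimensions: tuple = ("demographics", "psychographics", "context")
    runs_per_task: int = 3                # repeat runs to average out model variance
    completion_marker: str = "COMPLETED"  # how a pass is detected in twin output
    failure_marker: str = "FAILED"        # how a failure is detected
    hold_out_control: bool = True         # reserve a human-evaluated control set

CONFIG_V1 = AuditConfig()  # version-controlled alongside the audit code
```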
Conclusion
To repair the current AI trust gap, it’s crucial to understand that users don’t need more ineffective GenAI chatbots [00:11:52]. Instead, they need better websites, mobile apps, and surfaces that offer clarity and simplicity [00:12:06]. AI simulation serves as a powerful tool to empower design teams to gather insights smarter, faster, and better [00:12:17]. Ultimately, this approach can lead to the creation of interfaces that restore user trust [00:12:27].