2022 SolvingSeparationofConcernsProb
- (Subramonyam et al., 2022) ⇒ Hariharan Subramonyam, Jane Im, Colleen Seifert, and Eytan Adar. (2022). “Solving Separation-of-Concerns Problems in Collaborative Design of Human-AI Systems through Leaky Abstractions.” In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. ISBN:9781450391573 doi:10.1145/3491102.3517537
Subject Headings: AI Team
Notes
- Team Structure Boundaries: The paper clearly illustrates how organizational structures are typically divided into core research teams, technology transfer teams, and product teams, highlighting how these divisions can hinder effective cross-team collaboration between UX designers and AI developers.
- Knowledge Blindness: The research documents how team separation creates two kinds of knowledge gaps: UX designers lack understanding of AI capabilities, and AI developers lack insight into user needs, leading to design failures.
- Data-Centric Development: The paper excellently describes how AI systems' need for training data transforms UX workflows, requiring UX designers to think about data points rather than just design requirements.
- Leaky Abstractions: The authors develop their central concept of leaky abstractions, ad-hoc representations that expose low-level design and implementation details across team boundaries, and provide concrete examples of how this sharing improves cross-team collaboration in AI development.
- Progressive Specification: The research presents a clear model of how successful teams delay concrete technical specifications, allowing both AI components and UX design to evolve together through iterative development.
- Collaborative Artifacts: The paper provides comprehensive documentation of various boundary objects teams use to bridge expertise gaps, from UX designers sharing user personas to AI developers creating AI prototypes.
- Evaluation Methods: The authors effectively detail how successful teams perform continuous usability testing throughout development cycles, including methods for testing AI systems with end users and iterating based on user feedback.
- AI Team: A core research team within an organization that typically focuses on researching and developing fundamental AI capabilities. According to the paper, these teams often operate independently from product teams initially, exploring AI innovations without direct connection to user needs. In large organizations, they're usually staffed by computer scientists and ML researchers prioritizing technical advancement over immediate product applications.
- AI Engineer: A technical professional who implements AI systems, focusing on model development, training, and optimization. The paper shows they often struggle to translate UX requirements into technical specifications and may not understand how their technical decisions affect the user experience. They typically work with data points and performance metrics rather than user needs.
- AI-First Workflow: An approach to product development where AI capabilities are developed before considering user experience design. The paper critiques this workflow, showing how it often leads to solutions that don't align with user needs. In this approach, AI technology drives the product direction rather than human needs shaping the AI development.
Cited By
Quotes
Abstract
In conventional software development, user experience (UX) designers and engineers collaborate through separation of concerns (SoC): designers create human interface specifications, and engineers build to those specifications. However, we argue that Human-AI systems thwart SoC because human needs must shape the design of the AI interface, the underlying AI sub-components, and training data. How do designers and engineers currently collaborate on AI and UX design? To find out, we interviewed 21 industry professionals (UX researchers, AI engineers, data scientists, and managers) across 14 organizations about their collaborative work practices and associated challenges. We find that hidden information encapsulated by SoC challenges collaboration across design and engineering concerns. Practitioners describe inventing ad-hoc representations exposing low-level design and implementation details (which we characterize as leaky abstractions) to “puncture” SoC and share information across expertise boundaries. We identify how leaky abstractions are employed to collaborate at the AI-UX boundary and formalize a process of creating and using leaky abstractions.
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Hariharan Subramonyam; Jane Im; Colleen Seifert; Eytan Adar | | 2022 | Solving Separation-of-Concerns Problems in Collaborative Design of Human-AI Systems through Leaky Abstractions | | Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems | | 10.1145/3491102.3517537 | | 2022