Emily Sappington

Product Design Manager: Voice & Conversational UI

Babylon Health

About Emily Sappington

Emily Sappington is the Product Design Manager for Voice & Conversational UI at Babylon Health. Previously, she served as VP of Product at the London-based AI startup Context Scout.

Emily has spent the bulk of her career in the United States designing Cortana for Microsoft across devices, particularly natural language and UI interactions with the assistant. Emily is a lecturer, a US patent-holder, a career coach for Ada School (the National College for Digital Skills in the UK), and a recipient of an Exceptional Talent Visa from the UK Government and Tech Nation.

On the web @sappingtonemily


Designing for AI

Creating Minimum Viable Intelligence & Setting User Expectations

We are entering the age of intelligence—a time when technologists embed artificially intelligent components in many products without a clear framework for delivering that intelligence to users consistently. It is a product designer’s job to make AI feel human-like and magical, not overwhelming and scary. When designing for artificial intelligence scenarios, whether at a large enterprise or a small startup, setting user expectations is critical to delivering a reliable product. I will share some best practices from designing for AI in both large and small organizations. No matter the company size, a minimum viable product should be deliberately designed, not the result of unplanned feature cuts. I will explain what Minimum Viable Intelligence means for an AI product, and how designers can deliver a clear UX while solving problems efficiently.

When designing intelligent products, the first requirement is that the product seem competent. Users must trust the AI agent or service with their information and believe that it can achieve their goal. The bar for this depends on the expectations the designer sets. The most difficult thing about breaking out of scenario-focused AI is the lack of clear boundaries. Are you aspiring to create an entire conversational AI agent? Then the bar is high: the agent must seem human-like in every way, including what it can respond to. A less intelligent bot, however, will teach users the rails of its conversation early on to avoid disappointment. In this talk we’ll dig deeper into setting appropriate expectations when designing for AI across large and small applications.

Designing AI touchpoints, from conversational interfaces to more traditional UI, requires a designer to work out how best to explain the capabilities of AI without overwhelming or frightening the user. This can be achieved by drawing on human interaction models. Responsiveness when users expect it is only one part of the equation. Apps that describe their processes in human terms—thinking, seeing, reading—can benefit from showing users where they are in a process while explaining it in natural ways. Emulating true intelligence, though, takes more than seeming alive and basically competent. Surpassing users’ expectations can create a delightful moment in which the product seems truly and independently intelligent.


Talking Design: Make Your First Voice Interface

If machine intelligence is taking over the world, voice interfaces (VUIs) are a close second. Fortunately, the evolution of human speech makes most of us terrific listeners and speakers, especially when compared to machines. Unfortunately, while our brains are wired for speech, designing a VUI still takes work.
Modern voice interfaces are an opportunity to create better products in new contexts. Smart adoption of VUIs in the right environment makes interaction simple, pleasurable, and accessible, and lowers cognitive load. The potential for voice to help is even greater in a healthcare context: when a disease limits mobility or touch-screen usability, unlocking a new modality means opening up your experience to a wider population.
In this hands-on workshop, participants will choose from suggested healthcare scenarios to explore voice’s potential. While designing for a health scenario, participants will hone their listening and speaking skills by critiquing dialogues, and use improvisational techniques to design and structure conversations.
Each workshop team will test the effectiveness and enjoyability of their prototyped voice interfaces using the Wizard of Oz methods practiced by teams at Amazon, Google, and Microsoft. And each participant will take home resources useful for future VUI work.
Join us to expand your design skills with a new interaction tool that’s as old as humanity: voice.
This workshop is for anyone curious about working in voice, including designers, decision-makers, and engineers who are interested in ubiquitous computing, calm technology, bots, NO-UI, and thinking outside of the screen.

You will learn:

  • How to design a VUI
  • How designing ‘conversation’ differs from screen interactions
  • How to prototype voice with Wizard of Oz methods

Please note: one Mac laptop will be needed per team in this session, so if you have one, please bring it with you.