Triple has just returned from our annual pilgrimage to the world’s foremost digital tech and design conference: SXSW in Austin. In this week-long intellectual and visionary frenzy of industry pioneers, we sharpened our perspectives and ideas about what the digital future should look like.
Conversational UI, neural networks, affective computing, immersive storytelling, internet of things, augmented intelligence; it’s easy to be dazzled by shiny buzzwords at SXSW. But we need to ask critically how the technologies behind them can be turned into futures that are actually desirable, and into real value for users and clients. The takeaways are plentiful, and each deserves some depth. So let me cover one a week, instead of boring you with one long read. First up: AI.
THE HUMAN PARADIGM
The talk of the town this year was, once again, AI. The takeaway is simple and broadly agreed upon: AI will touch every pillar of society. The question of how to prepare for this is where the interesting discussions emerge.
The conference program was awash with talks about the impact AI will have on healthcare, government, science, education, commerce and everything else we know. Malcolm Frank, VP of Cognizant, assured us that over the next 30 years no job will be left untouched. AI-driven conversational interfaces are talked about as if they are already the gold standard, and renowned futurist Ray Kurzweil remains steadfast that the singularity will happen by 2045.
So how do we prepare for this brave new world? Andrew Moore, dean of Carnegie Mellon’s School of Computer Science, warns that if we don’t help people better understand AI now, they will end up merely as passive consumers in an AI-driven world. In that vein, it would surely help if AIs were more transparent and more relatable. Academy Award-winning animator and neural network specialist Mark Sagar thinks one way to make that happen is to literally give AIs a face. His startup, called Soul Machines, couples crazy-realistic animated faces (I mean, literally down to every muscle and neurotransmitter) with an AI that can understand and respond to speech and facial expressions. The premise is that rich facial expressions demystify the AI’s thoughts and intentions. One of their latest avatars helps disabled people understand government services. Check out some more demos here. Pretty potent stuff.
It might seem far-fetched to humanize the interface between AIs and humans in such detail, but this level of care is probably a good thing. Stories of toddlers trying to talk to furniture the way they talk to Amazon’s voice-operated Alexa already echo around the hallways of SXSW. So it isn’t a stretch to say the Alexas of this world, and the AIs behind them, will start to play a role in something as fundamental as teaching our kids how to interact socially. Paradigms of how we interact with tech are shifting. After decades of interacting in the binary language of the machine, the pendulum is now swinging back to the ways of the human. So in an AI world, where the paradigm is human interaction and the stakes are as high as our kids’ upbringing, an AI’s behavior and emotion need to be designed with great care. It’s simply unethical and reckless to leave these things up to the whims of whoever happens to be writing copy for your customer service bot that day.
Rather, it needs to be a well-thought-through (and strategic) decision for brands, applied consistently across all extensions of the customer experience. To help with this, it might be useful to think of digital experiences as personalities, with deliberate emotional traits and behaviors that express themselves across touchpoints. So for brands whose customer experience still consists of disjointed touchpoints full of soulless buttons, web forms and banners: this might be a good time to rethink things and start plotting a personality-first strategy.