Triple has just returned from our annual pilgrimage to the world’s foremost digital tech and design conference: SXSW in Austin. In this week-long intellectual and visionary frenzy of industry pioneers, we sharpened our perspectives and ideas about what the digital future should look like.
Conversational UI, neural networks, affective computing, immersive storytelling, the internet of things, augmented intelligence: it’s easy to be dazzled by shiny buzzwords at SXSW. But we need to ask critically how the technologies behind them can be turned into futures that are actually desirable, and into real value for users and clients. The takeaways are plentiful, and each deserves a bit of depth, so let me cover one a week instead of boring you with one long read. First up: AI.
The human paradigm
The talk of the town this year was, once again, AI. The takeaway is simple and broadly agreed upon: AI will touch every pillar of society. The question of how to prepare for this is where the interesting discussions emerge.
The conference program was awash with talks about the impact AI will have on healthcare, government, science, education, commerce and everything else we know. Malcolm Frank, VP of Cognizant, assured us that over the next 30 years no job will be left untouched. AI-driven conversational interfaces are talked about as if they were already the gold standard, and renowned futurist Ray Kurzweil remains steadfast that the singularity will happen by 2045.
So how do we prepare for this brave new world? Andrew Moore, dean of Carnegie Mellon’s School of Computer Science, warns that if we don’t help people better understand AI now, they will end up as mere passive consumers in an AI-driven world. In that vein, it would surely help if AIs were more transparent and more relatable. Academy Award-winning animator and neural network specialist Mark Sagar thinks one way to make that happen is to literally give AIs a face. His startup couples crazy-realistic animated faces (I mean, literally down to every muscle and neurotransmitter) with an AI that can understand and respond to speech and facial expressions. The premise is that through rich facial expressions the AI’s thoughts and intentions are demystified. One of their latest projects helps disabled people understand government services, and there are more demos online. Pretty potent stuff.