As artificial intelligence and machine learning move into the mainstream, they are fundamentally changing the way humans work.
AI is already excellent at specific tasks, such as facial and voice recognition, object tracking, or even transposing your face onto someone else's body. Thanks to advances in deep learning, computers are starting to perceive and interpret the visual world in much the same way as humans do.
AI is typically deep rather than broad, so the trick is combining these skills effectively. As a creative toolkit, it can augment an existing creative idea or technique, or make mundane tasks more efficient.
Fascinating examples of what is possible include DeepMind's work on using AI to learn to write programs that generate images, and Microsoft's Starship Commander, which uses voice and intent to control the narrative. SingularityNET is attempting to democratise AI through a decentralised, blockchain-based marketplace, and Intel's sensational light shows place drones in the sky like pixels on a screen.
More controversial is the use of TensorFlow, Google's AI technology open-sourced in 2015, to analyse a person's face from multiple images (say, from a social feed) and map the likeness onto another video, frame by frame. A similar technique was used in Star Wars to bring back the 1977-era likenesses of the late Carrie Fisher and Peter Cushing, a young Sean Young for Blade Runner 2049, and a young Kurt Russell in Guardians of the Galaxy Vol. 2.
With an individual's and content owner's explicit permission, this could be a powerful creative tool for live experiences. Imagine being able to appear in a classic Bullitt scene, with or as Steve McQueen, as Neo or Agent Smith in The Matrix, or performing a death-defying stunt in Mission: Impossible or as a superhero in Guardians of the Galaxy. It doesn't have to be as personal as your face: the same technique could map your own designs or characteristics onto a physical product, such as a pair of trainers or clothing.
The physical experience itself doesn't have to include any visible technology at all. For example, Google's Arts & Culture app helps museum visitors find their lookalike in nearby paintings, sculptures and artworks.
Another example is using the iPhone X's depth-sensing camera for live facial capture, coupled, say, with an Xsens suit for full-body motion capture. This strange but entertaining demo shows the potential. Suddenly, technology previously available only to blockbuster movie directors like Peter Jackson or James Cameron is available to everyone.
Imagination is already exploring these possibilities with personalised experiences such as "Go faster" for Ford, combining the visceral thrill of being a stunt driver with the fun and fame of starring in your own movie trailer. We're experimenting with voice to personalise experiences, natural language to improve engagement through chatbots, AI to prototype and build physical experiences in virtual and augmented reality as part of the ideation and sign-off process, and even AI in VR for scientific research.
The discussion of AI for events has so far focussed on marketing and customer engagement. Even more fascinating is augmenting the creative process itself: building a powerful, accessible creative toolkit for live experiences and content that is truly immersive and personalised, while also entertaining a wider audience.
We are on the cusp of a revolution in what we can achieve in the field of amazing, immersive, personalised experiences. In the future, the ‘intelligence’ may be artificial and the ‘reality’ virtual, but the impact on creativity is very real indeed.
Anton C E Christodoulou is the group chief technology officer at Imagination