Disney Research used neural networks to model audiences’ reactions to its movies, and said the approach provided data superior to “conventional methods”.
The AI was apparently able to tell how a viewer would react to the rest of a movie after watching their face for just a few minutes. Dubbed factorised variational autoencoders (or FVAEs for people with less time in their day), the method was able to learn expressions like smiling and laughing on its own.
“What’s more, they were able to show how these facial expressions correlated with humorous scenes,” Ph.D. student Zhiwei Deng told Phys.org.
Using this technology, Disney could then find out which of its scenes were hitting the emotional mark.
The research team tested the FVAEs during 150 showings of nine movies, including Big Hero 6 (through which people surely sobbed) and Star Wars: The Force Awakens (during which viewers had hearts for eyes).
The experiment studied 3,179 audience members and collected 16 million facial landmarks for evaluation. The FVAEs then mined this data for patterns shared between audience members. These patterns allowed the system to form a stereotypical audience reaction, which in turn taught it how people “should” be reacting.
It is this baseline that then lets the system predict reactions for the rest of the film.
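A loose way to picture that prediction step: if a few shared reaction patterns over time explain most viewers, then a new viewer's first few minutes pin down their personal mix of those patterns, and that mix forecasts the rest of the film. The sketch below does this with a plain low-rank factorisation on synthetic data. It is intuition only, not Disney's actual FVAE (which is a neural variational model), and every variable and number in it is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumption: each viewer's per-time-step "reaction strength"
# (say, smile intensity) is a weighted mix of a few shared patterns.
n_train, n_steps, rank = 200, 60, 3
true_patterns = rng.normal(size=(rank, n_steps))  # hidden ground truth

def simulate(n):
    """Generate n viewers' reaction curves as noisy pattern mixes."""
    weights = rng.normal(size=(n, rank))
    noise = 0.05 * rng.normal(size=(n, n_steps))
    return weights @ true_patterns + noise

train = simulate(n_train)  # a reference audience that watched it all

# Recover the shared time patterns from the training audience via SVD.
_, _, Vt = np.linalg.svd(train, full_matrices=False)
patterns = Vt[:rank]  # estimated (rank, n_steps) reaction patterns

# A new viewer has only been watched for the first `seen` time steps.
seen = 10
viewer = simulate(1)[0]
w, *_ = np.linalg.lstsq(patterns[:, :seen].T, viewer[:seen], rcond=None)

predicted = w @ patterns  # forecast reactions for the whole film
err = np.abs(predicted[seen:] - viewer[seen:]).mean()
print(f"mean absolute prediction error: {err:.3f}")
```

The useful point of the toy: once the shared patterns are learned from many viewers, predicting one new viewer needs only a small least-squares fit on their opening minutes.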
But using facial recognition is not necessarily a foolproof way of telling if someone is enjoying a film.
Human beings are naturally inclined to mimic the facial expressions of people with whom we’re interacting, and this extends to characters on a screen.
In his book Flicker: Your Brain on Movies, Jeffrey M. Zacks describes the mirror rule as an involuntary function we’ve adopted for survival. Over time, humans have learned it is safer to smile when someone is smiling at them, and to snarl if someone is snarling.
This is one of the reasons we often find ourselves laughing in theatres but not at home — and it’s why this research won’t deliver anything groundbreaking. Disney won’t be able to discern if facial expressions are mimicry or if people are smiling because of the story.
Other uses for the tech?
It seems the researchers are aware of this, though, and have mentioned that FVAEs do have other purposes.
The pattern recognition they developed can be applied to any group of objects — a forest of trees all leaning to one side in the wind, for instance. Models of such group behaviour could then aid animation.
So while FVAEs may not give Disney much to work with in the way of marketing, they certainly may save its animators a good deal of time.