In aesthetics, the uncanny valley is a hypothesized relationship between the degree of an object's resemblance to a human being and the emotional response to such an object. The concept suggests that humanoid objects which imperfectly resemble actual human beings provoke uncanny or strangely familiar feelings of eeriness and revulsion in observers.
For most of us, the first of the two performances in Bitef-Prologue was also our first visit to the theatre since March. Despite the nuisance of all the necessary precautions, masks, hand-sanitizers, blocked-off seats, joy and anticipation prevailed, especially since we knew the performance itself would be unusual, and consistent with the measures requiring physical distance. Namely, the actors on stage are even more physically distanced than the audience: there are none whatsoever. Only one robot.
When we entered, the animatronic copy of the playwright Thomas Melle was already on stage, sitting in the dark. When a “normal” performance begins like that, I always wonder whether the actors are looking at us, analysing us, whether they even notice us. Yesterday, I was sure that was not the case.
Talking about his own psychological condition, about Alan Turing, his invention and the suffering society put him through, and about the various machines and robots that people accept or reject, Melle’s voice speaks through a robot that looks just like him, addressing and testing the phenomenon named in the title. And it is true: it does make you feel uncomfortable and a bit repelled, though I remained undecided whether that was because of the robot itself (I’ve almost just written “himself”!) or because of the stories we are being told. Can robots be useful? No doubt, they already are. Are they becoming more perfect as we speak? Definitely. Are they becoming harder to tell apart from people? Well…
Eventually, you end up wondering what comes next: in terms of robotics, in terms of humanity, in terms of whether the uncanny valley is getting wider and deeper or, in fact, narrower and less divisive. Will reCAPTCHA start asking us not only to decipher text or match images but to perform far more sophisticated actions in order to tell us apart from computers? And how sophisticated would those actions have to be? Nowadays, when human communication is being reduced to the language of emoji, will machines, computers, robots become more sophisticated than us? Are we going to start trying to catch up with them? Will a robot sitting on a stage one day actually be able to look at us and analyse us? Who is going to win the imitation game in the end?