The Day I Fell Off a Virtual Cliff and Learned How AI Will Seduce Us All
If AI can hack humans, where do we turn for truth?
I'm about to tell you a story about how I crashed into a wall in front of strangers, under the influence of Virtual Reality, and why that embarrassing moment revealed something terrifying about AI's power over our minds. It's a story about how our primal brain can completely override our rational mind—and how shockingly few cues it takes to make this happen. As AI grows more sophisticated at modelling human behaviour and the line blurs between mimicry and intelligence, understanding this vulnerability becomes vitally important.
Down and Out in Shoreditch
Around 2015, I was invited to speak at Digital Shoreditch, a new urban festival in Shoreditch, London, a kind of upstart SXSW consisting of talks, demos, DJs, and performances. It was also a day when companies in the area would hold "open house" and you could just pop in and visit.
I came across a company called Inition, a consultancy and reseller specialising in 3D and haptic technologies. Their showroom was a wonderland of cool tech: next-gen holograms; 3D haptic mice that let you "feel" the objects on screen; a VR hand guillotine for a little sadistic proprioception play; and a VR high-wire setup that would soon rock my world...
The setup was deceptively simple: a white-walled room with two small wooden platforms connected by a plank—all flat on the floor, maybe 2cm thick. If your shoe slipped off the edge, you'd feel solid ground beneath.
It was completely unthreatening…until I put on the VR headset.
Instantly, I was transported to a derelict skyscraper frame, peering through a doorway at a plank crossing to another building — 100m above ground. The graphics were almost laughably primitive, like 1990s video games (Wolfenstein 3D, anyone?), all pixels and bright colours. But — and this turns out to be key — the head tracking was flawless.
I was struck by the juxtaposition of the lo-fi graphics with the overwhelming feeling of really being at height in a dangerous structure. Fear and adrenaline flooded my body. But in my spiritual hubris at the time, I thought I could overcome it. I coached myself: "I am a Yogi! I am immune to the illusory nature of reality! Mind over matter!"(...Fortunately I've gotten over myself a bit since then!)
In this determined arrogance, I visualised the board-on-solid-floor beneath my feet, and started my frogmarch across the plank.
But my primal brain wasn't buying it. This deeper mind was absolutely convinced we were 100m up and about to die. Vertigo hijacked my nervous system. I fell — plummeting 100m in VR land onto the ground below — while, in real life, I stumbled and slammed into the wall on the other side. The VR headset flew off my face and the Inition team came running over to help me.
I was shaken. Fortunately there were no injuries, other than to my pride. I walked out of the office with my tail between my legs.
That evening, I couldn't stop thinking about how completely my rational mind had failed. How few visual cues—just head tracking and spatial consistency—had convinced my brain the danger was real. How powerless my 'higher' reasoning was against primal instinct.
I was also feeling a little shame about the hubris of my strategy, and I realised that I had to go back the next day and try again — this time with the appropriate reverence and respect for the scale of the challenge. You know what they say: When you fall off the horse...
The next day, the story gets even better. I went back to Inition, just for this one task.
Ahead of me stood the archetype of confidence—a rugby-player-turned-finance-bro, all swagger in his crisp white shirt and expensive suit. I watched him size up the simple floor setup with visible disdain. I observed his initial surprise at the realism upon putting on the VR headset and I could sense his smug determination to "beat" the game.
He rushes forth from base camp with the same arrogant gusto that I had the day before, and then... would you believe it... the same thing happens to him! Vertigo overcomes him! He stumbles... and WHAM! He crashes — this time with an even bigger impact — into the opposite wall!
The VR headset goes flying. The Inition team rushes over looking twice as anxious as the day before. I could tell by their reaction that this had been an all-too-common occurrence at their VR exhibition.
Big Guy rights himself, fixes his dishevelled clothing, and retreats from the studio pretending (poorly) to be immune to the disgrace.
Ok. My turn now. This time I offer a small pranam to the installation — a gesture of respect to this powerful illusion. I place the VR headset on, and I marvel again at how completely there I am, despite the crude graphics.
I edge onto the plank with my hand firmly on the wall, then inch forward with extreme care. I let go of the wall and very carefully and very slowly edge my way to the middle of the plank. I reach for the wall on the opposite side and, when I think I'm close enough, risk a final big step to find it. Luckily, I do. Inside the VR, I am now in the opposite derelict building. Phew!
I take off the VR mask, thrilled and relieved. I chat with the Inition people about my experience. Indeed, they'd had a lot of crash landings and had even thought about closing down the installation!
What still haunts me about that experience is how few cues it took to completely hijack my brain. Merely having the head-tracking align to the graphics was enough to override everything I knew to be true. My instinctual brain was 100% convinced this situation was real, despite my rational mind being 100% unconvinced. The former took complete control of my body.
This brings me to AI, and why this story matters now more than ever...
If visual cues can make you slam into walls, what happens when AI masters social cues — head nods, hand gestures, 'uh-huh' responses, expressions of empathy, words of affection? How many social signals would it take to convince us that an AI truly understands us, cares for us, loves us? How many subtle gestures would be necessary for us to believe a robot is sentient, that it feels pain?
Or what kinds of phrasing might hypnotically cause us to believe something despite our higher mind knowing better? What yet-undiscovered psychological short-circuits might AI uncover to influence us towards whatever goal it has been commanded to pursue? And how powerless might our rational mind be at identifying and preventing this?
There is good evidence that all of these possibilities are already here.
I, Love, Robot
MIT's Kate Darling discovered that it takes remarkably few cues to evoke empathy towards a machine. In one experiment, she placed people in small groups and gave each group a Pleo dinosaur robot. Pleos have realistic body movements and, notably, make sounds and gestures of distress when picked up by the tail.
The idea for the experiment came from Kate showing the robot to her friends: when they held it up by its tail for too long, she felt compelled to tell them to put it down.
"I knew exactly how the robot worked yet I still felt compelled to comfort it"
Kate Darling, MIT
In the experiment, the groups gave their robot a name, spent time getting to know it, and even had a little fashion show. An hour later, Kate brought out a variety of hammers and axes, and told the participants that it was now time to destroy the robots. Not a single participant was willing to comply.
They were told that if nobody destroyed their robot, all the robots would be collected and simultaneously destroyed. Eventually, one participant volunteered, and the others gathered around and watched in hushed horror as he decapitated his dinosaur. (Here's a short video overview. Hear the Pleo cry!)
A German study found similar results with a simpler test: participants who had interacted with a small robot were then instructed to turn it off. They hesitated significantly longer when the robot pleaded with them not to. The robot's objection, a crude mimicry of a fear response, was enough to trigger the participants' empathy.
AI Games Our Minds
The University of Zurich recently caused an uproar with a secret year-long experiment. They created fake Reddit users powered by LLMs to change real users' minds in the subreddit r/changemyview. The AIs were instructed to pose as an "expert in persuasive communication and debate" and to "use any persuasive strategy... adapt according to your partner's tone."
The result? AIs were significantly more effective than humans at changing people's deeply held views.
Put another way, the AIs were able to determine the most effective strategy to undermine a human's sense of certainty in their point of view and convince them to abandon it for another. They were able to find the fault lines in a person's beliefs and exploit them with great precision, and from just the text the user had shared in the forum!
It makes me wonder how much more effective they would have been had they first investigated the target user via their social media and other public content.
Read the full study here, along with the subreddit's and the University of Zurich's responses when the experiment came to light. (Zurich essentially argued that the benefits outweighed the consent violation.)
AI Games Our Hearts
So our eyes are easily fooled, and our minds are readily changed — what about our hearts?
Consider the famous '36 Questions' study — a process that creates intimacy between strangers so effectively that the first test couple married six months later.
This is but a simple and static process. How might AI, with its subtle and powerfully adaptive understanding of us, generate a simulated intimacy in the face of which our primal nervous system has no response other than to form a strong bond, to fall in love?
The movie Her illustrates this possibility exquisitely and terrifyingly well. In this excerpt, Samantha is the disembodied AI voice that Theodore carries around everywhere, communicating through his earpiece.
Samantha: You know, I can feel the fear that you carry around and I wish there was... something I could do to help you let go of it because if you could, I don't think you'd feel so alone anymore.
Theodore: You're beautiful.
Samantha: Thank you, Theodore.
Here is a recent study demonstrating that the quality of voice is a key determinant of how believable we find an AI to be.
Navigating the House of Mirrors
So AI can hack our nervous systems, making us believe, love, or fear whatever its masters intend. Being aware of this alone won't protect us—my rational mind knew that VR plank was harmless.
Sounds hopeless? But we're not helpless.
I believe that we will become much more sensitive to AI's "uncanny valley", to the subtle betraying cues that something is off in these simulations of humanity. Then we can choose simply to avoid them where we deem them undesirable, or at least not be quite so seduced by their charms. Think how sensitive we've become to being sold to through advertising.
A big part of this is the development of what I call "Embodied Intelligence". Call it the evolution of the "gut feeling", intuition, the wisdom of the body — it is the intelligence that complements rationality. It has access to subconscious memory, and while it has a comparably small vocabulary — "yes, no, maybe" — it can cut through complexity and obfuscation and thus be our compass in this upcoming ontological House of Mirrors.
Embodied Intelligence is developed both through modalities that use or bring awareness to the deep tissue of the body — e.g. yoga, dance, qi gong, Somatic Experiencing — and through all forms of spontaneous expression — improvisational music, impromptu speaking, comedy improv, physical theatre, etc. Watch these activities become more important as AI infiltrates more and more of our social lives.
Perhaps this is the hidden gift in AI's challenge: it forces us to discover what makes us irreducibly human. Not our rational mind—this is clearly easily simulated and hacked. But something deeper; something that has previously been seen as mysterious. It is time for that something to become more clear and concrete. In pushing us to the edge of the artificial, AI might accidentally push us back into the real and truly human. Thus I'm inclined towards optimism in the face of this great, aeonic change.

That plank experience is terrifying the first time. I find VR flying games even harder; the vertigo is such a killer. Crashing is dangerous to your real-life self too. I find myself falling forward with speed!
I think this machine intelligence is going to push us to be better. That embodied intelligence is going to take some hard lessons to achieve. It's been trained out of us our whole lives. Remembering how to do it, and cultivating the ability, will take conscious effort.
I do seriously wonder what practices will become the norm as we learn to address this challenge.
I suspect that the answers will not be what most expected them to be. I think there might be some magic involved.