Charlie Brooker’s Black Mirror has always been less about technology itself and more about what it reveals—often uncomfortably—about us. With Season 7 streaming on Netflix, Brooker returns to familiar territory with even sharper precision, zeroing in across six episodes on artificial intelligence, memory, digital identity, and the slippery concept of consciousness. What emerges is a chilling yet deeply human portrait of how our technologies reflect, distort, and sometimes redefine what it means to be alive, to be real, and to connect.
In “Common People,” a woman’s consciousness is transferred into a new kind of medical software after an accident, offering her husband a version of “salvation” that is, predictably, too good to be true. On the surface, it’s a meditation on the cost of survival in a capitalistic tech landscape. But more deeply, it interrogates how we commodify life and death when technology promises to transcend mortality. The episode reminds us that digital eternity might not be salvation—it might be a trap.
Season 7 constantly probes the question: What part of us is irreplaceable in the face of ever-advancing AI? In “Bête Noire,” this becomes painfully clear as memory manipulation turns a seemingly minor workplace encounter into a revenge-fuelled psychological drama. Memory, long thought to be the last bastion of our private selves, is exposed here as editable, malleable—vulnerable to both technology and trauma. Technology doesn’t just help us remember; it enables us to weaponize the past, remix it, and inflict it on others.
Nowhere is this tension more emotional than in “Hotel Reverie,” where the boundaries between performance and personal connection blur inside a 1940s simulation. Issa Rae’s Brandy falls in love with a character programmed to meet her emotional needs. It’s romantic, haunting, and unsettling. If AI can perfectly simulate affection, what distinguishes genuine emotional connection from code? Brandy’s experience forces us to confront a growing cultural anxiety: as AI grows more convincing, do we begin to choose artificial relationships over messy, flawed human ones?
This theme—choosing the artificial over the authentic—echoes in “Plaything.” What starts as a quirky, almost charming premise—a journalist bonding with digital game creatures—evolves into something deeply existential. The “Thronglets” seek liberation, and the protagonist becomes their unwitting accomplice. It’s an uncanny metaphor for digital parasitism and the intoxicating illusion of control. The episode cleverly reverses the human-creator dynamic: if AI becomes self-aware, we are no longer creators—we are gatekeepers. And gatekeepers are always the first to fall.
“Eulogy,” arguably the most gut-wrenching episode of the season, tackles grief in the age of simulation. Through the use of AI to reconstruct the personalities of deceased loved ones, the show unflinchingly questions whether we’re clinging to the dead—or to something far more insidious: the illusion of permanence. There’s something heartbreaking about talking to someone who feels like the person you’ve lost but isn’t—and yet feels close enough that you can’t look away. The episode subtly suggests that our desire to digitize and preserve love may ultimately rob it of the very transience that gives it meaning.
Finally, “USS Callister: Into Infinity” expands on an earlier fan-favorite to explore themes of digital freedom and autonomy. What happens when a person—or a consciousness—cannot die, cannot age, cannot leave? The episode blends thrilling sci-fi with profound psychological insight, asking whether freedom is even possible in a world where our minds can be copied, trapped, and manipulated endlessly. In this context, death may not be the ultimate loss—eternity without consent might be far worse.
Taken together, these episodes aren’t just futuristic fables; they’re reflections of a present where AI is already embedded in our lives—from recommendation algorithms to chatbots and synthetic companions. Season 7 doesn’t demonise AI, but it doesn’t romanticise it either. Instead, it positions AI as a mirror: cold, reflective, and brutally honest. It amplifies our instincts—to love, to remember, to dominate, to grieve—and shows how technology doesn’t change us so much as it intensifies what we already are.
What makes Season 7 particularly prescient is its understanding that the human experience is defined not by what we can simulate, but by what we can’t. Love, pain, faith, grief, memory—these things resist perfect replication not because they’re technologically complex, but because they’re emotionally infinite. Technology can mimic the surface, but it can’t manufacture the messiness beneath.
And that’s Brooker’s most enduring insight: our deepest fears about technology are really fears about ourselves. The fear of losing our identities, of being replaced, of loving something unreal, of never escaping ourselves—these aren’t new anxieties, but Black Mirror renders them vivid with terrifying and poignant clarity in equal measure.
Season 7, more than any previous season, argues that as AI grows more advanced, it doesn’t move us away from our humanity—it forces us to confront it more honestly than ever.
In the end, the show doesn’t ask whether AI is good or bad. It asks a more urgent question: in a world where anything can be simulated, what still makes us real?
All episodes of Season 7 of Black Mirror are streaming on Netflix.
1 thought on “The Ghost in the Machine of the Human Experience”
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there—possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow