As researchers explore human consciousness with light machines and deep theories, some fear AI may soon wake up
Inside a soundproof booth at Sussex University, BBC science correspondent Pallab Ghosh sits with eyes closed, headphones on, and a strobe light flashing into his face. He’s inside the “Dreamachine,” a strange device flooding his mind with swirling, multicoloured geometric shapes—triangles, pentagons, and octagons that pulse like neon lights.
What seems like an art installation is actually a scientific probe into one of the most profound mysteries in science: consciousness. The Dreamachine isn’t here to test if you’re secretly a robot, Blade Runner-style, but to help researchers understand how the human brain generates our rich, personal experiences of the world.
The visuals Ghosh experiences are unique to him. According to the scientists, they come directly from his brain’s internal processes. The aim? To shine a light—literally and metaphorically—on how we become aware of ourselves and the world around us.
But this investigation into human consciousness is also stirring a more urgent question: could artificial intelligence become conscious too?
The team behind the Dreamachine, led by Prof Anil Seth at Sussex University’s Centre for Consciousness Science, is among a growing group of scientists probing the nature of awareness itself. Their hope is that, by understanding human consciousness better, we might begin to understand if machines could ever develop the same.
That possibility has rapidly moved from science fiction into serious debate. As large language models like Gemini and ChatGPT grow increasingly fluent and human-like in conversation, some researchers have begun to ask whether they might already be inching towards consciousness.
Science fiction has long predicted such scenarios. From Fritz Lang’s Metropolis (1927) to 2001: A Space Odyssey’s HAL 9000 and the AI antagonist in the latest Mission: Impossible film, our culture has warned of conscious machines gone rogue. But now, respected voices in science and technology circles are beginning to ask: is the fiction starting to become fact?
For some, the speed and sophistication of AI’s development are cause for concern. The ability of LLMs to mimic human dialogue and even show signs of reasoning has prompted speculation that we may be only a few breakthroughs away from machines “waking up”.
Not everyone agrees. Prof Seth calls this belief “blindly optimistic” and “driven by human exceptionalism”—the assumption that human minds are the template for all others. “We associate consciousness with intelligence and language because they go together in humans,” he explains. “But that doesn’t mean the same applies to machines or animals.”
The truth is, no one really knows what consciousness is. At Sussex, even among a team of AI specialists, philosophers, and neuroscientists, there is no agreement—only rigorous, passionate debate.
Instead of seeking a single grand answer, the researchers are taking a different route: breaking down the enigma into manageable parts. Their projects range from brain-scan analysis to the psychedelic Dreamachine, each designed to explore how thoughts, feelings, and awareness emerge from physical processes.
Prof Seth compares this to 19th-century biology’s rejection of a mythical “spark of life” in favour of studying living systems piece by piece. Similarly, the Sussex team hopes to unravel consciousness not through magical thinking, but by dissecting its components.
As this quest unfolds, one thing becomes clear: learning what makes us human may be key to understanding what our machines can—and can’t—become. Whether consciousness is a uniquely human trait or a universal property that could one day ignite in silicon remains unknown.
For now, the Dreamachine offers a glimpse into the mind’s vibrant depths. But somewhere in the background, a deeper question hums: what happens if, one day, the machines start dreaming too?