Reviews: The Internet Is Not What You Think It Is, Justin E H Smith, Princeton
Reality+, David Chalmers, Allen Lane
If you haven’t read the blurb for Justin Smith’s The Internet Is Not What You Think It Is, its argument is probably not what you think it is. You may imagine the book is about the dark net, or about how the technology behind the internet differs from what we assume. You might think it is about the dominance of the big tech companies – and Smith certainly bemoans that dominance.

You might be surprised by his conclusion that the internet has turned bad – dumbing down language and attention spans. Indeed, just as we have too many physical, manufactured products (partly the cause of our environmental woes), it could be argued that we have too many internet-based cultural products, and that they are proving a distraction. Smith is not arguing that we are heading for the scenario envisaged in the film The Matrix, but he finds our immersion in the internet, and social media in particular, distracting and addictive – misleading in that it pretends to enhance relationships while actually reducing them. It is also controlling and undemocratic: we have gone from the Arab Spring to the state control of China and Russia, and to the corporate control that entangles politics and commerce.
The problem is less the amount of information on the internet than how and where we find it. Most websites are designed to keep us moving, their information intended not so much for enrichment as for the entrenchment of over-simplified opinions and of tribalism. We are led from one thing we like to another, and not challenged to consider other opinions.
Smith is not wholly critical. He includes a paean to Wikipedia, which, he says, has risen in status: Wikipedia is not what we used to think it was – unreliable – but what the internet should be. His most surprising argument, though, is that as an information provider and a means of communication the internet is not so much a revolution as a continuation. He spends a surprising number of pages discussing thought experiments and inventions from hundreds of years ago. He also discusses the pervasive networks in nature – from whale communities to spider webs to fungi – to argue that the internet is not as unnatural as we might think.
But he notes how rather than the internet reflecting reality, reality is being shaped to look like the internet, in particular social media. We are increasingly told we are brands, including in academia, where popularity can be more important than depth of teaching. In a way, we are already living in virtual reality, a world mediated by computers.
Which brings us to David Chalmers’ book about the philosophy of virtual reality. The book is something of a response to Nick Bostrom, who famously argues that, statistically, we are likely to be living in a simulation. The argument goes that once the technology is sophisticated enough, computer users will be running millions of simulations of our universe, populated by ‘people’ who are conscious but don’t know they are in a simulation. In that case, it is overwhelmingly likely that we are among the simulated, not in the original universe. (A variation is that some alien civilisation has created the simulation.)
This might seem highly unlikely, but it is hard to come up with a counter-argument, and as a thought experiment – despite sounding like something out of The Hitchhiker’s Guide to the Galaxy – it has ethical and even theological implications, such as the proposition that our simulator herself might be a simulation, leading to ‘who created God?’-type questions. You may find this fascinating, or dismiss it as the kind of nonsense philosophers get up to when left to their own devices. But on the internet and in virtual worlds there are already ethical questions about, for example, what constitutes sexual harassment.

Chalmers explains that there are different types of simulated worlds. Minecraft is one, where we (the users) are outside the sim world and know it. The Matrix is a sim world where our physical bodies remain outside, but we operate within it unknowingly. Then there are sim worlds populated by purely sim beings. These would be a form of AI, which leads to the question of whether AIs can become conscious. They have to be for Bostrom’s argument to work; after all, we are conscious.
There are some big ‘ifs’ here. If we are in a simulation, purely sim creatures, and the simulation is indistinguishable from reality – from what it is simulating – and we don’t know it is a simulation, it is hard to see what the point of calling it a simulation is. If we will never know, nothing much changes. And could someone in a simulation create a simulation? We are getting into the realm of the world resting on an endless succession of turtles.
There are also big assumptions here that reminded me of the old joke that on current rates of growth we can project that in ten years’ time two out of three Americans will be Elvis impersonators. Similarly, and importantly, are there erroneous assumptions here that technology will just keep improving exponentially until simulations of entire universes are possible?
Smith also notes that all this AI talk hangs on the not-yet-provable assumptions that consciousness is possible within a simulated world, and that consciousness must be possible within computers – assumptions that indicate how we have come to think of the world as reflecting computer technology, and not the other way around. We don’t really know what consciousness is, or how it switches on, so to speak, at a certain level of complexity. I think it is fair to say that consciousness is gradual – think of the development of pre-schoolers – and that society programs us – we are not simply self-automated – but I agree with Smith that none of this is settled. The internet is relevant here, says Smith, because AI will likely need the internet’s vast store of information to become conscious – that is, if consciousness requires merely a certain level of complexity in information processing, rather than something more intangible that we are currently missing.
While the notion of computers becoming conscious seems to carry an anti-anthropocentric tendency – a desire to prove that humans are nothing special and can be replicated by technology – it is interesting that in the idea of AI consciousness, often termed ‘the great leap’, there is also a longing for transcendence. And there is something solipsistic in the way we extrapolate from our ability to create computers to some future ‘us’ advanced enough to create whole universes, including intelligent beings, as if we were gods.
Nick Mattiske blogs on books at coburgreviewofbooks.wordpress.com and is the illustrator of Thoughts That Feel So Big.