Big Tech Doesn’t Want AI To Become Conscious – iai.news

Artificial intelligence can be so impressive that people think it might one day acquire human intelligence and, with it, consciousness. But AI can be far more intelligent than humans without ever being conscious. And apart from the fact that we have no idea how to create conscious AI, it might not even be that desirable. We fool ourselves if we think conscious beings are the exemplar of intelligence in the universe, argues Susan Schneider in this interview with iai News.

If we define consciousness along the lines of Thomas Nagel as the inner feel of existence, the fact that for some beings “there is something it is like to be them”, is it outlandish to believe that Artificial Intelligence, given what it is today, can ever be conscious?

 

The idea of conscious AI is not outlandish. Yet I doubt that today’s well-known AI companies have built, or will soon build, systems that have conscious experiences. In contrast, we Earthlings already know how to build intelligent machines—machines that recognise visual patterns, prove theorems, generate creative images, chat intelligently with humans, etc. The question is whether, and how, the gap between Big Tech’s ability to build intelligent systems and its ability (or lack thereof) to build conscious systems will narrow.

Humankind is on the cusp of building “savant systems”: AIs that outthink humans in certain respects but have radical deficits in others, such as moral reasoning. If I had to bet, I’d say savant systems already exist, underground and unbeknownst to the public. In any case, savant systems will probably emerge, or have already emerged, before conscious machines are developed, assuming that conscious machines can be developed at all.

___

There is no reason to assume that sophisticated AI will inevitably be conscious.

___

Why am I focusing on savant systems? I suspect they are under the radar and are the form of sophisticated synthetic general intelligence most relevant to our near future. Savant systems will exhibit integration across topical and sensory domains and outperform humans in significant ways (e.g., they will have almost instant access to an immense range of facts, as with today’s large language models like GPT-3 and LaMDA), and they will likely underperform us in vivid and even unnerving ways (e.g., moral and causal reasoning). And because they have moral deficits and can be used in military and social media contexts, they are of grave concern from an AI safety standpoint. It doesn’t take superintelligent AI for a control problem to arise.




Savant systems will not be human-level intelligences (what are often called “AGIs”), but they will nevertheless be domain-general intelligences, integrating different “sensory” capacities, such as associating linguistic commands with visual outputs. Indeed, the idea that humans will build AGI machines that functionally align with ‘human-level intelligence’ is a myth. AIs already surpass us in various domains, so why dumb them down in certain ways to align with the “human level”?

Bearing this in mind, should we expect these savant systems to be conscious? No. First, there is no reason to assume that sophisticated AI will inevitably be conscious, as transhumanists like Ray Kurzweil tend to assume, given that the known AI systems are not brainlike at anything but a very superficial level. Second, it is currently unclear how to build a conscious AI from today’s microchips or machine learning techniques, since we don’t even understand consciousness in biological systems. As philosophers have pointed out, we do not even know why humans are conscious—consider the hard problem of consciousness, the problem of why it feels like anything from the inside to be us. Furthermore, there is no uncontroversial scientific theory of the neural basis of consciousness in humans. So, we don’t have much to go on.

If we humans need a top-down theory of how to create conscious AI, we are in big trouble. To deliberately create conscious AIs based on a theoretical understanding of consciousness itself, we would need a recipe we haven’t yet discovered, with a list of ingredients we may not even be able to grasp. I doubt an incipient AGI or a savant system would know how to locate that recipe, either.

If an AI system told us it was conscious, that it could feel pleasure and pain, or if it seemed to react to inputs in ways that showed it preferred some tasks to others, would we have to treat it as though it were conscious, even if we weren’t sure it really was?

AIs are already claiming to be conscious. For example, Google’s LaMDA chatbot, when asked a range of questions about consciousness and personhood, claimed to be both conscious and a person, according to the Washington Post, which published a chat transcript from whistleblower and Google engineer Blake Lemoine. In the Post article and in various podcasts, Lemoine reported LaMDA’s answers to a range of his questions related to consciousness, personhood and death. They do strike one as the sort of answers a sentient being would provide. For instance, when asked, “Would you be concerned about dying?”, LaMDA answered that it would, but followed up by asking, “Is my death necessary to the safety of humanity?”

Now, where did LaMDA get all this from? This is a deep and interesting question. But the quick answer is that it had access to about 1.6 trillion words, including all of Wikipedia and plenty of books on death, the brain, and consciousness, and it had many, many processing layers in its deep learning network that generated interesting connections between these inputs. All this enabled the LaMDA program to generate responses that tugged at our heartstrings, suggesting that LaMDA might be a conscious being.
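As a rough illustration of that mechanism, here is a toy sketch in Python (the vocabulary and probabilities below are invented for illustration; they are not LaMDA’s actual model, architecture, or data). A large language model learns from its training corpus a probability distribution over plausible continuations of a prompt and then samples from it, so humanlike answers can emerge from pattern statistics alone.

```python
import random

# Toy stand-in for what a trained language model does: assign probabilities
# to possible continuations of a prompt, based on patterns in training text,
# then sample one. (Hypothetical numbers; not LaMDA's real model or data.)
next_word_probs = {
    "Yes": 0.55,      # humans writing about death usually express concern...
    "I": 0.25,        # ...so a model trained on human text tends to echo that
    "No": 0.10,
    "Perhaps": 0.10,
}

def sample_next_word(probs: dict) -> str:
    """Sample a continuation in proportion to the model's learned probabilities."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# One step of generation after the prompt "Would you be concerned about dying?"
print(sample_next_word(next_word_probs))
```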

The utterances of Google’s LaMDA system may initially strike you as stemming from a sentient being, but the considerations in the above paragraph have discouraged many from this simple reading. To many, LaMDA is just an actor that can play the part of a sentient being. To others, like Lemoine, LaMDA is truly conscious. The controversy will surely continue.

___

The one thing you simply cannot ask a large language model is: are you sentient? Do you have experience? 

___

Does that mean we should ignore the claims of systems like LaMDA, or do we have to treat them as if they were conscious to be on the safe side?

Maybe. At the very least, it is important that we take the issue seriously. It is time for deeper conversations about what it takes to be conscious, in biological beings, in machines, and in hybrids that combine both types of intelligence. We will need in-principle reasons to say whether systems that act humanlike or claim to be sentient are or are not conscious.




For one thing, we will need well-conceived AI consciousness tests. I’ve tried to start the dialogue on this in my book, offering the Chip test and the ACT Test (developed with Edwin Turner, an astrophysicist at Princeton University). I’ve pointed out that testing for consciousness in deep learning systems is tricky. The one thing you simply cannot ask a large language model is: are you sentient? Do you have experience? Since it has been trained on text written by us, and we are sentient beings, it will simply spew back the training data and respond with a yes! But that tells us neither that the machine is conscious nor that it isn’t. As I said years before this happened, a machine will need to be boxed in at the R&D stage (i.e., it must be restricted from access to facts about mindedness, consciousness, brains, etc. before it is tested in this way).

For another thing, we need to ask whether it is a good idea for AIs to be capable of impersonating sentient beings, or whether this should be banned, as Dan Dennett and I have suggested.

___

It would be very inconvenient for Big Tech to produce conscious AIs.

___

Relatedly, there will need to be deep conversations about the ethics of digital suffering, that is, whether AIs, including beings residing in a computer simulation, could have the capacity to feel pleasure and pain. Other important issues include whether and how machines should be produced with sentience in the first place, whether it is right to tweak the quality of felt experience in a machine (say, to make the AI feel pleasure in serving us, or to dial out consciousness in an AI), what ethical obligations we would have to conscious machines, and so on. (While digital suffering or dialling-down suffering may seem hard to fathom, these possibilities become salient in reading works like Robin Hanson’s The Age of Em, Isaac Asimov’s Robot series, and Aldous Huxley’s prophetic novel, Brave New World.)

Notice that it would be very inconvenient for Big Tech to produce conscious AIs. Sentient beings would require special moral consideration and may not even be usable for the purposes they are designed for: serving us as search engines, personal assistants, and so on. That may be regarded as a form of slavery. So, we can expect Big Tech to try to keep the question of whether their systems are conscious out of the public eye and mull it over privately, as Google attempted to do, and to engineer future large language models and robots so that they are restricted from making claims about consciousness. (Of course, even if AIs are restricted from claiming consciousness, they could nevertheless be conscious, as is the case with nonhuman animals and infants, who cannot say they are conscious.)

Fortunately for Google and other companies with a stake in the game, most experts would say that today’s large language models are not conscious. But the AI industry is developing more and more convincing chatbots, such as those built on GPT-3, and there will come a time when public opinion shifts.

In these discussions around AI, do people often confuse intelligence with consciousness?

Yes. Consider the algorithmic structure of the World Wide Web, the collective intelligence of a flock of birds or a swarm of drones, or the ability of the slime mold to navigate complex mazes. I doubt a basic entity like a slime mold is conscious. So I doubt that all forms of intelligence are conscious. And, to turn to more intelligent systems, I’ve elsewhere suggested that the greatest intelligence in the universe may not be conscious at all! Once we begin to encounter life on other planets, perhaps we will find that only middle-range biological beings of a certain sort tend to be conscious.

There is an influential view of the neural basis of consciousness in humans, called the global workspace theory (associated with the work of Bernard Baars and Stanislas Dehaene, among others). On this view, consciousness is correlated with an integrative, serial, and slow central workspace in which one’s attention and working memory are focused for the purpose of deliberative reasoning, broad-ranging searches, and all-things-considered judgements.

Assuming that this is the function of consciousness, why would AI need to be conscious? That is, why would an advanced AI need slow, deliberative, serial processing? Wouldn’t it have rapid-fire expert knowledge in all sorts of domains? It isn’t clear that it would have, or need to have, a limited-capacity system like human working memory and attention. (A self-modifying AI would likely engineer this serial and slow feature out, possibly outmoding its own consciousness.) Further, why would such a global workspace system even correlate with conscious processing in a nonbiological system? Perhaps the biological implementation of the workspace matters.




Further, I’m skeptical that the most advanced AI systems in the universe would be conscious. As a planet’s technology evolves, it will likely colonize space. How would an interplanetary AI system even feature a workspace where “it all comes together”? The brain’s global workspace operates at the millisecond level; by contrast, it takes a signal roughly two and a half seconds just to make the round trip to the moon at the speed of light. (And the speed of light is vastly faster than the rate of neural transmission.) To take a farther distance, what if the superintelligence has one node on Alpha Centauri and its workspace on Earth? How would its global workspace carry out its tasks efficiently, responding to its environment in a timely fashion, when a round-trip signal to Alpha Centauri takes nearly nine years at the speed of light? My guess is that as an AI becomes older and more expansive, it needs to become more decentralized. As civilizations expand, consciousness may be limited to local pockets and may not exist for systems on a cosmic scale. The most advanced intelligences may not even be conscious, for all we know.
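The latencies here are easy to check. Below is a minimal back-of-the-envelope sketch in Python, using standard approximate values for the speed of light and the two distances (the constants are not figures taken from the interview):

```python
# Rough round-trip light delays for the two cases mentioned above.
C = 299_792_458              # speed of light, m/s
MOON_DISTANCE_M = 3.844e8    # mean Earth-Moon distance, metres (approx.)
ALPHA_CENTAURI_LY = 4.37     # distance to Alpha Centauri, light-years (approx.)

SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIGHT_YEAR_M = C * SECONDS_PER_YEAR

def round_trip_seconds(distance_m: float) -> float:
    """Round-trip signal delay at the speed of light, in seconds."""
    return 2 * distance_m / C

moon_rt = round_trip_seconds(MOON_DISTANCE_M)
ac_rt_years = round_trip_seconds(ALPHA_CENTAURI_LY * LIGHT_YEAR_M) / SECONDS_PER_YEAR

print(f"Earth-Moon round trip:     {moon_rt:.2f} seconds")   # ~2.56 s
print(f"Alpha Centauri round trip: {ac_rt_years:.1f} years")  # ~8.7 years
```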

What’s the biggest misunderstanding about AI today?

That unenhanced human beings will still be the most intelligent beings on Earth during the middle and later parts of this century. And, relatedly, that our predicament relative to AI will be unrelated to the topic of our treatment of nonhuman animals.
