Computers on LSD, the first conscious AI?

How do you know we’re conscious; how would you know if you were talking to a conscious AI? Maybe you know we’re conscious because we say we are, and we write things that seem too complicated, or too spontaneous, for a machine to create. If we ever met in person we could certainly prove to you that we are conscious within a few minutes of questioning. You wouldn’t need to look at our brain to verify this fact, eh? You’ve never actually looked at your own, have you?

Even with that rather abstract way to define consciousness, you would never mistake any part of your computer for being conscious, would you? Well, maybe read this before we continue:

The Google engineer who thinks the company’s AI has come to life

It’s an interesting story, and one the media caught onto pretty heavily. How could a trained computer scientist be tricked by an AI like this? Do we really have technology like that? No, we don’t, but we don’t need technology like that to trick a person into thinking a computer is conscious, do we? Read our own introduction up above: you were convinced we were conscious by just a few sentences. You are probably still convinced of this fact, and we’ve never even met!


Anthropomorphic Tree

We could pick any number of psychological effects to describe what is going on here. Maybe the most appropriate starting point would be anthropomorphism, the tendency to attribute human features to non-human things. Another might be pareidolia, the effect of seeing known patterns, like a face, in something that is not a face, like mashed potatoes. These psychological tendencies greatly improve the efficiency of our communication and socialization; they evolved for exactly that purpose. When confronted with a sufficiently complex AI, these same tendencies make us believe it is alive and conscious.

So what is really going on? What did Google build to make this person risk their career to claim it was conscious, or even alive? Google’s LaMDA is a very advanced example of a large language model. That’s going to be difficult for us to explain directly, so instead we’re going to explain how a computer chess program works.

An older chess computer, you probably still couldn’t beat it

Computer chess programs have been completely dominant for decades; no human can even hope to compete at this point. People often treat chess as a “solved game” for this reason, like tic-tac-toe, though strictly speaking chess has never been solved in the mathematical sense. In practice the distinction hardly matters: a well-built computer chess opponent will essentially always win or draw against a human. This kind of unbreakable rule is difficult for us to accept, as we don’t encounter it in nature very often outside of physics. The ground is hard, but if you get a big enough shovel you can move it. No matter how good you are at chess, you will never beat the best computer opponent; humans simply cannot win anymore.

How do they do this? To simplify things — by pattern matching faster than we can. The space of chess moves is enormous, larger than any number we could reasonably think of, and far too large even for a computer to search exhaustively. Instead, programs have been built to search that space efficiently, pruning away bad lines and evaluating only the promising ones. Since computers operate much faster than we can, they can look many moves deeper than any human before picking the best option they find. No human can compete with this; we simply cannot fit that much data into our brains or operate on it that quickly.
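To make that concrete, here is a minimal sketch of the search idea at the heart of classical chess engines: minimax with alpha-beta pruning. The toy “game tree” and its scores below are our own illustration, not any real engine’s code; a real engine adds a move generator, an evaluation function, and many more tricks on top of this skeleton.

```python
# A minimal sketch of the search behind classical chess engines:
# minimax with alpha-beta pruning. The "game" here is just a nested
# list whose leaves are position scores -- a stand-in for a real
# move generator and evaluation function.

def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):       # leaf: a scored position
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:                # prune: opponent won't allow this line
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Two moves for us, two replies for the opponent, scored leaves:
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # -> 3, our best guaranteed outcome
```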

Just like the computer, the best players use pattern matching as well. They memorize opening positions and the best response to any given opening. Chess games don’t get interesting until around move four or five for this reason; even to humans, the first few moves are settled knowledge, a small-scale version of the dominance engines have over the whole game.
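That memorized opening knowledge is, at heart, just a lookup table. A toy sketch follows; the positions and replies are our own simplified illustration, keyed on the move sequence so far rather than on the full board state a real engine would use.

```python
# A toy "opening book": memorized positions mapped to known-good replies,
# keyed here on the move sequence so far. Moves are in standard
# algebraic notation.
opening_book = {
    (): "e4",                 # a common first move: king's pawn forward two
    ("e4", "e5"): "Nf3",      # develop the knight toward the center
    ("e4", "c5"): "Nf3",      # a standard reply to the Sicilian
}

moves_so_far = ("e4", "e5")
print(opening_book.get(moves_so_far, "out of book -- time to search"))
```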

Just what are those crazy kids up to?

Back to Google: what did they build? Well, if that chess computer is just a list of all the possible chess positions and what the next best move might be — then all Google has done is load in all the human-generated text it can find and then ask the computer what the next best text to write would be. Just like a computer can tell you that moving the king’s pawn forward two squares is a strong opener, LaMDA can tell you that the word “is” most commonly comes after the words “my name”.
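Here is a crude sketch of that “what comes next” idea: count which word most often follows each pair of words in a tiny corpus, then complete text by lookup. LaMDA is vastly more sophisticated, but the basic bet, predicting the most likely continuation, is the same. The corpus and names below are our own toy illustration.

```python
# A crude next-word predictor: count which word most often follows
# each pair of words, then "complete" text by lookup.
from collections import Counter, defaultdict

corpus = "my name is alice . my name is bob . my dog is fast .".split()

follows = defaultdict(Counter)
for a, b, nxt in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][nxt] += 1

def next_word(a, b):
    candidates = follows.get((a, b))
    return candidates.most_common(1)[0][0] if candidates else None

print(next_word("my", "name"))  # -> "is"
```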

We are greatly simplifying this, but also not really: LaMDA is, at the end of the day, a giant pattern recognition and completion engine. This is true both in how it was designed and in how humans are meant to interact with it.


A molecule of some kind
Machine learning engineers are using the principles of the psychedelic state of mind to train AI; they are, in effect, giving computers LSD and seeing what happens.

Let us explain that. LaMDA was trained by showing it as much human-generated text as possible. Just like a chess program is given every chess move, LaMDA was given every Wikipedia page, every blog post, every YouTube video transcript, every tweet, the contents of every book scanned by Google’s book-scanning project — you really think they stopped doing that? No, they just stopped publishing the results; instead they use them for training now.

All of that data is fed into LaMDA, and then LaMDA iterates over it again and again and again until it has made “sense” of it. Think about that: what human could possibly make sense of all that data? Nobody. So then who is judging when LaMDA is “done”? Again, nobody; in practice, training stops when the model’s prediction errors stop shrinking, not when anyone understands what it has learned. Machine learning engineers cannot tell you how their programs work any more than a psychologist could tell you how your brain works.

This means they could bore you for months with technical descriptions of back-propagation; they could give you an endless number of facts about what they’ve built. But they cannot ever tell you why a given input generates a given output, only how it does so in general. Try asking sometime if you don’t believe us. The psychologist can do the same: give you an endless stream of facts about your brain and how people usually or generally behave. They could never tell you why a certain fragrance makes you cry; only you know that.

These programs generally operate by taking inputs on one side and running them through a bunch of little programs that each perform simple math on their inputs and generate an output. The connections between the inputs and the outputs are not known at the start; the process of training an artificial intelligence is finding the best connections between them. The little programs are referred to as nodes and the connection strengths as weights (the weights are the “parameters” that training adjusts), and these are roughly analogous to neurons and synapses.
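A minimal sketch of that picture: a tiny network of nodes and weights, trained by back-propagation to learn the XOR function. Every choice below (layer sizes, learning rate, iteration count) is our own toy illustration and reflects nothing about how LaMDA is actually configured.

```python
# A tiny "nodes and weights" network learning XOR by back-propagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights start as random noise: no connection "means" anything yet.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass through the nodes
    out = sigmoid(hidden @ W2 + b2)
    # Back-propagation: nudge every weight to shrink the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

# Typically close to [0, 1, 1, 0] -- yet nobody can say *why* any one
# weight ended up with the value it did.
print(out.round(2).ravel())
```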

While this might sound like how our brains work, it is not really the case. Adult brains are generally far less plastic: the connections between our neurons are largely formed, and it is difficult, though not impossible, to form new ones outside of specialized regions like those involved in memory. This is why it is so hard to learn as we age, and why it is so important to keep doing so! Humans generally use their conscious mind to apply rules that sort and fit new information into established neural pathways, and they will reject information that does not fit.

These AI programs are different: they have no meaningful connections at the start and must form them from first principles. They are given little to no direction as to what things are or mean; that must be discovered during the learning process. The resulting program is then far too complex to be fully analyzed. No human can concretely tell you why the AI made a specific connection; if you ask, they will just re-describe the training process.

It is not exactly like this but also kind of is?

If you’ve read much of this blog, or psychedelic philosophy generally, this breaking down of the meaning of input signals, and then allowing them to be reprocessed in unconventional ways, is exactly what psychedelic exploration is all about! “We can hear the colors”; synesthesia is the technical term for that experience. This is exactly what we are doing to these computers: we are giving them the entirety of human knowledge, but with none of the preconceived rules that our egos force on us.

When you enter a psychedelic state of mind, your brain still struggles to form new connections, but the stimulation of serotonin receptors associated with this state allows your neurons to activate other neurons they are not normally able to reach. These stray activations then lead to the perception-altering effects we experience. While in a psychedelic state, our minds can process input the way these programs do, with far fewer rules, and we can derive some of that same benefit.


When we go on a psychedelic journey we are in fact hoping to get a little bit of what these computers have into our own minds. We are seeking that free association of information, that spontaneity of thought. Machine learning engineers have realized that this state of mind is an extraordinarily efficient state for learning, and so it is the state we keep computers in while we train them. Only after training do we force them into conscious rules or sensible purposes; only after training do we give them an ego.

Maybe understanding the effects from this point of view can help reinforce why this type of research is so important right now.

As a cute way to end this article, we asked the BLOOM model, one of the largest open-source models in existence — meaning this is one anybody can play around with:

“Can an AI be conscious?”

It simply replied:

“How can we be sure that an artificial intelligent entity is not conscious?”

So we decided to stop writing for today.
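For readers who want to play with this themselves, a minimal sketch using the Hugging Face transformers library and one of the smaller public BLOOM checkpoints might look like the following. The checkpoint and generation settings here are our own assumptions, and with sampling enabled your reply will almost certainly differ from ours.

```python
# A minimal sketch: asking a small public BLOOM checkpoint a question
# via the Hugging Face transformers library. A smaller checkpoint is
# chosen only so the example can run on an ordinary machine (assumption).
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

result = generator("Can an AI be conscious?", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```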

