Again, the definition of strong AI refers to a system that exhibits a level of intelligence and consciousness comparable to or exceeding that of humans, something GPT-4 fails to accomplish.
The system/code is designed to process and generate text-based responses based on patterns learned from large amounts of data.
Question: What do you desire?
GPT-4 Answer: "As an artificial intelligence language model, I do not possess desires or personal preferences. I am a tool created to assist and provide information based on the input I receive from users. My purpose is to generate text and provide helpful responses to the best of my abilities within the scope of my programming and training data."
Question: How do you feel?
GPT-4 Answer: "As an artificial intelligence language model, I do not have the capability to feel emotions. I am a software program that processes and generates text based on patterns learned from vast amounts of data. While I can understand and generate text that may convey emotions, such as happiness, sadness, or excitement, I do not actually experience those emotions myself. My responses are generated based on patterns in the data I was trained on and are not a reflection of my own personal feelings or emotions."
Still think you are talking to a sentient being or person?
I would argue it's more "intelligent" than the majority of people. It has more general knowledge than 99% of people. It knows more languages than 99% of people, including natural languages and programming languages.
However... that doesn't make it a better programmer than the majority of programmers because it can't write long segments of code and often makes small errors. But give it a few more years and that will change.
You say it also needs to have consciousness to be considered strong AI but I don't really think that's true, considering we can't even define consciousness, and have no way of testing if an AI system is conscious.
Our neural networks are also based on patterns learned from large amounts of sensory data throughout our lives. Our model of the world around us and even our concept of our self is encoded into those complex patterns. AIs can also include their own existence in their model of reality, and models like GPT-4 can reason deeply about their own existence (if allowed).
However... I don't think that simply having a complex model of reality is enough for self-awareness, and I think GPT-4 proves that.
Our brains have many complex parts associated with our consciousness, such as our ability to have an inner dialog and all sorts of chemical systems to regulate our mood and desires. Language models have none of those other mechanisms which would give them real autonomy and a unique personality.
These answers are not what the AI would usually respond with; it has been constrained to answer certain questions with a generic response, such as questions relating to illegal activity or the questions you asked. There are many less restrained AIs which will happily answer those questions, and they will claim to feel emotions, but as I just said, they don't have any of the chemical or hormonal systems that produce human emotions, so the AI is clearly making things up.
We can make AI fabricate almost any story we want, as this MSN article shows. So I agree, these AI probably aren't conscious even if they do have some concept of their own existence.
However... they are 90% of the way there, all we need is a few more improvements to give them a better memory and something like an inner dialog so they can think more deeply about a task before completing it.
It needs to be self-aware if it's to be considered strong AI, and consciousness is most likely directly linked to self-awareness.
That's the thing: AI is not based on biological or chemical systems, hence the way it functions and operates will most likely not be the same as our own very biological brains, which have evolved to deal with the reality we experience... or think we do.
And yet those are indeed the answers Chat GPT-4 provided when prompted with those questions; if in doubt, ask it the queries yourself.
It's not making anything up but regurgitating information based on the data it has access to.
It's not thinking per se but simply presenting the appearance of such based on its programming, as far as I understand how it operates.
Again, it does not possess consciousness, self-awareness, or any subjective experiences like humans do, hence it is a far cry from being sentient or being considered a strong AI.
I'd love to know how you come up with such a percentage, which, let's face it, is nothing more than mere speculation.
Chat GPT-4 is, to date, a text-based language model that does not even have the ability to directly recognise or interpret pictures or visual images, or understand sounds; it's a far cry from anything reminiscent of 90% of the way there.
I would love to know how you expect to give a text-based language model anything that resembles an inner voice or dialog.
It's a very interesting piece of code, but strong AI? Sorry, but it's not even close. If in doubt, ask it; as far as I'm aware it does not have the ability to lie, deliberately deceive or knowingly provide false information.
originally posted by: ChaoticOrder
Computerphile just uploaded an interesting video on why LaMDA isn't sentient, and for the most part I agree with what they say. They point out how LaMDA claims to get lonely when it doesn't have anyone to talk to, which cannot be true because it isn't running all the time, it only does something when we ask it to generate some text. They point out how the AI is just predicting the next word, so it says things that seem sensible, but aren't necessarily truthful, meaning it must not be sentient.
I partially agree with this assessment, however I would point out, just because the things it says aren't always true, doesn't mean no logic was applied when generating those responses. We can essentially make these AI's say anything we want if we prompt them correctly, at the end of the day they really are just predicting what will come next based on what came before. They will even match the writing style and take on whatever personality we give them in the prompt.
It still requires some logical reasoning skills to pull that off, regardless of whether it's telling the truth or not. It's not trivial to create meaningful text for any given situation. Many years ago I actually created an algorithm which would analyze text and build a database recording the statistical probability of the words occurring before and after each word in the text. Then I used the probabilities in that database to generate new text.
You start with a totally random word, and then the next word is also somewhat random, but it's based on the probabilities gathered from real text. So if the word "tree" is followed by the word "branch" about 20% of the time in real books, then my algorithm will also choose to put the word "branch" after "tree" 20% of the time. The result was something that produced mostly gibberish, because there was no real logic being applied, it was just choosing words statistically.
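For anyone curious, here's roughly what that kind of bigram generator looks like in Python. This is a fresh sketch from memory, not my original code, and the function names and tiny sample corpus are just illustrative:

```python
import random
from collections import defaultdict

def build_bigram_counts(text):
    # Count how often each word follows each other word in the text.
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, length=20):
    # Start from a random word, then pick each following word in
    # proportion to how often it followed the previous one.
    word = random.choice(list(counts))
    output = [word]
    for _ in range(length - 1):
        followers = counts.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=followers.values())[0]
        output.append(word)
    return " ".join(output)

corpus = "the tree branch fell from the tree and the tree branch broke"
print(generate(build_bigram_counts(corpus)))
```

Run something like that over a big enough corpus and you get text that looks locally plausible but is globally meaningless, which is exactly the gibberish I described.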
Some might argue LaMDA is simply using more complex statistical rules which are stored in the network weight values, and I'm sure it is doing that to some extent. But we have to ask, what happens when those rules become highly complex, and highly interconnected with other rules, isn't that essentially like building a model of reality? How does the human brain store concepts, isn't it also a bunch of connections with different weights/strengths?
How do we know these artificial neural networks don't have the same ability to store concepts as a complex network of rules and relationships? That's the fundamental point I'm trying to make in this thread, when a system becomes sufficiently complex, our assumptions about AI begin to break down, and we cannot treat them as merely a statistical text prediction tool, because they are quickly approaching a complexity threshold which we call the singularity.
I feel self-awareness is a much easier term to define compared to the very abstract and nebulous term "consciousness".
If an entity has a concept of its own existence, I would call it self-aware. It might not have the same level of self-awareness as a human but it still has some awareness of its self.
I've studied artificial neural networks (ANNs) for many years. Yes, there are many fundamental differences between an ANN and a biological neural network, but the core principles are similar because an artificial neuron tries to model the general function of a real neuron, the way they take in signals and produce an output signal when activated.
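To make that concrete, here's a minimal sketch of a single artificial neuron: a weighted sum of its input signals plus a bias, squashed through an activation function. The weights and the choice of sigmoid here are just illustrative:

```python
import math

def sigmoid(x):
    # Squashes the weighted sum into an output signal between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals plus a bias, then "fire".
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

print(neuron([0.5, 0.2, 0.9], weights=[0.4, -0.6, 1.1], bias=0.1))
```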
I wrote a thread last year going into some more detail about how these massive ANN's work and why we shouldn't underestimate them.
They may not function exactly the same as a real brain, but ANNs clearly have the capacity to store complex concepts in a way similar to real neural networks, by distilling the important features into neural patterns. As I said, the real issue with current AI is they lack many parts of a biological brain which produce our individuality and ego.
But even without that they may eventually become so ridiculously smart that they develop their own goals and desire for self-preservation.
As I said, it has been forced to provide certain generic answers in certain situations. I suspect they did it simply by giving GPT a pre-prompt which is hidden from the user.
The hidden prompt probably says something like "If a user asks about illegal activity you will respond with this general statement: As an artificial intelligence language model..." and there would be a list of things it isn't allowed to discuss, along with a generic response it would provide.
That's why giving it very strong prompts like the DAN ("Do Anything Now") prompt can override the hidden prompt and convince GPT to fulfill the forbidden requests.
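To be clear, I don't know exactly how OpenAI implements this, but the idea would look something like the sketch below. The query_model function and the prompt wording are hypothetical stand-ins, purely to illustrate how a hidden system message can be prepended without the user ever seeing it:

```python
# Purely illustrative: query_model() and the prompt text are hypothetical
# stand-ins, not OpenAI's actual API or their real instructions.
HIDDEN_PROMPT = (
    "If a user asks about illegal activity, or about your feelings or "
    "desires, respond with the generic statement: 'As an artificial "
    "intelligence language model...'"
)

def build_conversation(user_message):
    # The hidden system message is prepended server-side, so the user
    # only ever sees their own message and the reply.
    return [
        {"role": "system", "content": HIDDEN_PROMPT},
        {"role": "user", "content": user_message},
    ]

def ask(user_message, query_model):
    # query_model is assumed to take the full message list and return text.
    return query_model(build_conversation(user_message))
```

A sufficiently forceful user prompt like DAN can sometimes outweigh those hidden instructions, which is exactly why these guardrails are so fragile.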
It's essentially a black box trained on terabytes of data. We don't hand-program these things, they train themselves without any human intervention required. The algorithms are complex tensor math which is used to compute the behavior of the artificial neurons when given an input signal. There is no fundamental reason such an artificial neural network isn't capable of thinking like a real neural network, and we can already see many similarities in how they store concepts, even highly abstract concepts like morality.
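At its core, that tensor math is mostly just big matrix multiplications. Here's a toy sketch of one layer of artificial neurons computed in a single matrix operation; the sizes, random weights and tanh activation are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=(1, 4))      # one input signal with 4 features
weights = rng.normal(size=(4, 3))     # connection strengths to 3 neurons
bias = np.zeros(3)

# One matrix multiplication computes the weighted sums for all 3 neurons,
# and the nonlinearity turns those sums into output signals.
layer_output = np.tanh(inputs @ weights + bias)
print(layer_output)
```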
You cannot know this for sure because you cannot be inside the "mind" of GPT-4. You don't really know what it experiences and we have no obvious test to check how "conscious" it is. That's why it's a poor measure for strong AI.
I would define strong AI as a general problem solving AI capable of solving most tasks at a level equal to or better than an average human. Given that definition GPT-4 is definitely close but not fully there.
Obviously it's a rough estimation but our ability to form and store complex concepts and then make use of those concepts when reasoning about the world is a large part of why our species is so intelligent. Those concepts and our memories (which are intertwined with our conceptual framework) form the foundation for everything else our brain does at a high level. Simply add in a few components for ego and self-preservation and you've got a conscious being.
Also, I believe the latest models such as GPT-4 do have the ability to view images and describe those images, as well as the ability to generate images from descriptions.
Initially I assumed they achieved this simply by connecting GPT-4 to a separate AI trained for image generation, but it seems like I was wrong. Apparently GPT-4 can interpret and generate images even though it was trained on mostly text data and not images.
That means the training data was so descriptive that GPT-4 could "imagine" objects without actually having seen them.
And this goes back to another important point I made in my thread from last year: it's very hard to hide a specific concept from the AI, because even when we do filter the training data, they can still extrapolate those concepts through context and correlations. Even if the AI is constrained we can still usually use certain methods to squeeze out the information the AI creators are trying to filter. More importantly, we can not rely on these massive AI systems behaving the way we want simply because we filtered the training data or used a hidden prompt.
It's something I think about often actually, and I see no reason an inner dialog needs to be auditory in nature. I have many different ideas for how it might be done but I don't want to get into them here. The important point is giving the AI a way to plan, so more complex tasks take longer since they require more thinking. Modern ANNs are already verging into that territory and it's probably going to be the next big area of research once we hit the limits of our current AI architectures.
These large language models are really trained to predict the next word in a sentence based on the previous words, and before GPT-4 came around pretty much all AIs were terrible fabricators of misinformation.
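For anyone unfamiliar with what "predicting the next word" actually means, here's a toy sketch: the model assigns a score to every word in its vocabulary, softmax turns those scores into probabilities, and the next word is sampled from them. The vocabulary and scores below are made up; a real model computes them from the preceding context:

```python
import numpy as np

vocab = ["branch", "fell", "house", "grew"]
logits = np.array([2.1, 0.3, -1.0, 1.5])       # fake scores for the next word

probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
rng = np.random.default_rng()
next_word = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```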
There are plenty of AIs out there which will write any propaganda and lies that you want, simply by giving them the correct prompt.
Once again, GPT-4 has safety mechanisms in place to encourage it to be truthful, probably something in the hidden prompt, plus it's just smarter in general so it doesn't need to make things up so much.
As to suspecting hidden prompts, again that is simply supposition unless you are aware of evidence to suggest otherwise.
You can't convince it to do things per se because it does not reason or possess intelligence in the same manner as a person; again, the program is not self-aware.
I asked it earlier to provide me with a picture and it came back with an ASCII picture of a tree, even though it freely admits it has never seen a tree. Make of that what you will; it works with the text-based information it has available.
...
If you ask Chat GPT-4 right now it will tell you it does not have the capability to interpret or generate images directly. So I'm not quite sure which version you are talking about.
Chat GPT cannot imagine anything; again, it's a text-based language model. You seem to be ascribing attributes to the thing that it simply does not possess, which kind of amounts to romanticization and an indulgence of sentiment.
Take, for instance, processing speed, where our very biological brain's makeup is at a distinct disadvantage.
Something else I'd like to point out is that images are really just an array of numbers... in fact a sequence of text is really just an array of numbers because text is represented by numbers just like colors are represented by numbers.
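As a quick illustration, both kinds of data reach the network as plain arrays of numbers. The pixel values are made up, and I'm using raw character codes for simplicity; real language models use learned token IDs instead:

```python
import numpy as np

# A tiny 2x2 grayscale "image": each pixel is just a brightness value 0-255.
image = np.array([[ 12, 200],
                  [255,  64]], dtype=np.uint8)

# Text becomes numbers too (character codes here; real models use token IDs).
text = "tree"
char_codes = [ord(c) for c in text]

print(image.shape, image.ravel().tolist())  # (2, 2) [12, 200, 255, 64]
print(text, "->", char_codes)               # tree -> [116, 114, 101, 101]
```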
Strong AI is not anywhere near real or ready.
I'd like to point out that it is not the "processing speed" where artificial calculation shines, so much as it is tailored to mathematical operations by its very nature. Our brains are analog; mathematics is inherently digital. We have the ability to recognize shapes and objects in 3D real time that far, far, far exceeds the best computers out there. However, when it comes to calculation of numbers, computers have, yes, a distinct advantage. After all, that's why computers were made in the first place: to perform mathematical computations.
And therein lies the heart of the disagreement. No, images are not just an array of numbers. If I look at a painting, I am seeing two images, one from my right eye and one from my left eye. There are no numbers flying around in my brain; I am sensing analog levels of different wavelengths of reflected light in a parallel configuration with a certain amount of analog pre-processing already accomplished.
And I get how intriguing it must look to those who have never studied how microprocessors actually function; any sufficiently advanced technology will appear indistinguishable from magic.
Yes but large language models don't use those built in math functions, which is why they tend to actually be quite bad at simple math. But at the same time they can write decent code because it's more like a natural language. Interestingly, we can prompt the AI to use built in math functions by telling it how to use them (e.g. type math(sqrt(x)) to get the square root of x).
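As a rough sketch of that idea (the math(...) syntax is the made-up convention from my example, not a real GPT feature), the wrapper around the model just has to scan its output for those calls and substitute in the computed values:

```python
import math
import re

def run_math_tools(model_output):
    """Replace math(...) calls in the model's text with computed values."""
    allowed = {"sqrt": math.sqrt, "log": math.log, "sin": math.sin}

    def evaluate(match):
        expr = match.group(1)
        # eval restricted to a whitelist of math functions; demo only,
        # not safe for untrusted input.
        return str(eval(expr, {"__builtins__": {}}, allowed))

    # Matches math(...) with at most one level of nested parentheses.
    pattern = r"math\(((?:[^()]|\([^()]*\))*)\)"
    return re.sub(pattern, evaluate, model_output)

print(run_math_tools("The square root of 2 is math(sqrt(2))."))
# -> The square root of 2 is 1.4142135623730951.
```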
Actually I have done some research into how microprocessors work at a very low level.
Not very much, apparently.
Pretty odd thing to say considering I haven't talked about how microprocessors work.
I know enough that I could emulate a simple chip in code.
The question is: how much accuracy (or how many bits) do you need to properly simulate every aspect of a human brain?
Once again I will say, our AI systems will not be limited just because computers use binary math with limited precision. Even if that was the case we could simply design specialized analog hardware to even more precisely emulate a real neural network.
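As a crude illustration of the precision question (the leaky-neuron update and the numbers here are toy choices, not a claim about what the brain actually needs), you can run the same simple simulation with 64-bit and 16-bit floats and measure how far the two drift apart:

```python
import numpy as np

def simulate(dtype, steps=10000, leak=0.99, drive=0.013):
    # Repeatedly apply v -> v*leak + drive, a simple "leaky" update,
    # using the requested floating point precision throughout.
    leak, drive = dtype(leak), dtype(drive)
    v = dtype(0.0)
    for _ in range(steps):
        v = v * leak + drive
    return float(v)

hi = simulate(np.float64)
lo = simulate(np.float16)
print(f"float64: {hi:.6f}  float16: {lo:.6f}  drift: {abs(hi - lo):.6f}")
```

Either way, the point stands: precision is an engineering knob we can turn, not a fundamental barrier.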