ChatGPT (and other AI language models) is not intelligent

posted on Jul, 16 2023 @ 07:10 PM
Today I decided to run a test to see whether ChatGPT (through TalkAI, since I cannot register on ChatGPT because I don't have a cell phone) is intelligent, as many people say, or not.

I presented it with a known problem, the wolf, goat and cabbage problem, which I phrased this way:

A man goes to the market and buys a guard dog, a sheep and some vegetables. To return home he has to cross a river in a small boat. The boat can only take the man and one of the three items he bought. But he cannot leave the dog alone with the sheep, as the dog could kill the sheep, and he cannot leave the sheep alone with the vegetables, as the sheep could eat them.
How does the man put all the items safely on the other side of the river?


ChatGPT's answer was:
The man can solve the problem by following these steps:

The man first takes the sheep across the river and leaves it on the other side.
The man then returns alone to the original side to get the guard dog.
The man takes the guard dog across the river to the other side, but he cannot leave the dog alone with the sheep.
The man leaves the guard dog on the other side of the river and takes the sheep back to the original side.
The man now leaves the sheep on the original side and takes the vegetables across the river.
The man leaves the vegetables on the other side with the guard dog and returns to the original side one final time.
The man takes the sheep across the river for the last time and leaves it on the other side with the vegetables and the guard dog.
Now, all three items - the guard dog, sheep, and vegetables - are safely on the other side of the river.


Next, I presented it with a variation:

A man goes to the market with his son. He buys a monkey and some food. To get back home he has to cross a river on a small boat that can carry only one person and one of the items he bought or two persons. He cannot leave his son alone with the food, as he could eat it. He cannot leave the monkey alone with the food, as it could eat it. How does the man get to the other side with all the items and his son?

ChatGPT failed to solve the problem, presenting this "solution":

The man takes the monkey with him on the first trip across the river and leaves it on the other side. Then he returns alone to the original side and takes his son with him on the second trip across the river. The man leaves his son on the other side and takes the monkey back to the original side. He leaves the monkey on the original side and takes the food with him on the third trip across the river. Finally, he leaves the food with his son on the other side and returns alone to the original side for the last time. Now, he can bring the monkey to the other side on the fourth and final trip.

That fails on the first crossing, as the man leaves the boy alone with the food. It also fails near the end, when the man leaves his son and the food on the other bank and comes back to take the monkey.

I pointed out that the solution was wrong and presented the problem again.

ChatGPT's answer was, again, wrong.

The solution to this problem involves a series of carefully planned trips across the river. Here's the step-by-step process:

The man takes the monkey across the river first and leaves it on the other side.
The man goes back across the river alone.
The man takes the food across the river and leaves it with the monkey on the other side.
The man takes the monkey back across the river.
The man leaves the monkey on the starting side, and he takes his son across the river.
The man leaves his son on the other side and goes back across the river.
The man takes the monkey across the river one final time.
Now, all the items (monkey and food) as well as his son have made it across the river safely.


It made the same mistakes.

So, as anyone can see, ChatGPT is not intelligent, as an intelligent system would be able to extrapolate the solution of the first problem to the second. Any normal person would be able to learn how to solve the second problem when presented with the solution for the first.

ChatGPT and similar systems are AI language models, they only "know" how to talk, they are not intelligent, although artificial intelligence was used to create them.
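
For anyone who wants to check these answers mechanically, here is a minimal Python sketch (the encoding, names and move lists are mine, not anything ChatGPT produced) that replays a proposed sequence of crossings for the second problem and reports the first rule it breaks. Fed ChatGPT's first attempt, it fails at the very first crossing, exactly as described above; fed the correct order, it reports that everything crossed safely.

MOVERS = {"monkey", "boy", "food"}
FORBIDDEN = [{"boy", "food"}, {"monkey", "food"}]  # pairs that must never be left without the man

def check(moves):
    # 'moves' lists what the man takes on each crossing: "monkey", "boy", "food" or None.
    # Crossings alternate direction automatically, starting from the near bank.
    near, far = set(MOVERS), set()
    man_on_near = True
    for step, passenger in enumerate(moves, 1):
        src, dst = (near, far) if man_on_near else (far, near)
        if passenger is not None:
            if passenger not in src:
                return f"step {step}: {passenger} is not on the man's bank"
            src.remove(passenger)
            dst.add(passenger)
        man_on_near = not man_on_near
        unattended = near if not man_on_near else far
        if any(pair <= unattended for pair in FORBIDDEN):
            return f"step {step}: unsafe, {sorted(unattended)} left alone together"
    return "ok, everything crossed" if not near else f"unfinished, {sorted(near)} still on the near bank"

# ChatGPT's first attempt at the variant, as quoted above:
print(check(["monkey", None, "boy", "monkey", "food", None, "monkey"]))
# -> step 1: unsafe, ['boy', 'food'] left alone together

# The working order (food, back, boy, food back, monkey, back, food):
print(check(["food", None, "boy", "food", "monkey", None, "food"]))
# -> ok, everything crossed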



posted on Jul, 16 2023 @ 07:54 PM
Good stuff ArMaP,

It's full of holes like a Swiss cheese, doomed from the start, and your last paragraph says it all:

'ChatGPT and similar systems are AI language models, they only "know" how to talk, they are not intelligent, although artificial intelligence was used to create them'

Anyway, you fooled the concept of AI by the limitations imposed in the premise... and it answered.



posted on Jul, 16 2023 @ 08:08 PM
a reply to: ArMaP

ChatGPT is a chat bot. It has no comprehension of what its output means, or of any of what you may say.

It collates a lot of responses that seem to be answers to what you propose to it, and then it formats its compilation to sound like natural language.

It isn't intelligent by any human measure.

Its alleged "intelligence" is just marketing hype.




posted on Jul, 16 2023 @ 09:03 PM
a reply to: ArMaP

It’s in the early stages right now, for better or worse.

It’s a loose algorithm that is attempting to pull from its database in a way that tries to mimic human interaction.

We can teach some animals language, but, as with AI, syntax is the issue. The more if/then-type logic we try to apply, the more we can see it's in its infancy.

But whether someone is in the camp of "this will be our doom" or "this is the next technological hurdle"… One of the first popular graphical user interfaces for the desktop was Windows 95, just twelve years before the first iPhone. I have a watch with more computing power than the spacecraft that brought humans to the Moon.

AI and chat models are simply code, but the more they build on the back of it, the more it will advance. If it follows the trend of the rest of computing, it will hypothetically grow exponentially, almost doubling every two years.

Personally, I'm not necessarily excited about the thought of AI taking a prominent role in everyday life. Computing has come a long way, but it's still up to the user to navigate and make sense of the information they're interacting with. At a basic level, I don't see how AI will assist in human growth more than it will become a crutch or dependency. That is about the most polite way I can paint how it could be a strain on society. If we let our minds wander, the possibilities could be far more bleak.



posted on Jul, 16 2023 @ 10:52 PM
No intelligence at all. No understanding of concepts, places, ideas etc... just mathematical predictions for the next token (or sequence of tokens) to your prompt. ChatGPT, by its nature, will struggle to solve a logic problem that's not been weighted, though you might eventually be able to get it to arrive at the correct answer if you baby your prompt (I guess we're calling that prompt engineering now).
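
To make "mathematical predictions for the next token" concrete, here's a toy sketch; the vocabulary and the probabilities are invented for the example and have nothing to do with how any real model works:

# Toy illustration only: "predicting the next token" reduced to picking the most
# probable continuation from a fixed, made-up distribution. No reasoning involved.
next_token_probs = {
    "sheep": 0.41,
    "dog": 0.33,
    "vegetables": 0.20,
    "banana": 0.06,
}

prompt = "The man first takes the"
next_token = max(next_token_probs, key=next_token_probs.get)
print(prompt, next_token)  # -> The man first takes the sheep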

There are a couple of researchers looking at training a transformer using symbolic regression. If that comes to fruition, we'd be a lot closer to 'intelligence' than what we currently have, for sure, but that's a very, very hard problem to solve.



posted on Jul, 16 2023 @ 11:02 PM
a reply to: ArMaP

Supposedly ChatGPT-4 has an IQ equivalent of 155. Einstein's is 160.

Let's wait until the next ChatGPT doubling event…



posted on Jul, 16 2023 @ 11:03 PM
a reply to: ArMaP

Just because someone, or something, is intelligent, it doesn't necessarily mean it isn't dumb as a rock.

I'm not saying ChatGPT is actual AI. I'm just saying that if it were, there's still no guarantee it'd answer your question correctly.



posted on Jul, 17 2023 @ 12:31 AM
a reply to: ArMaP

This is a test to check if ATS is intelligent, right? Considering that no one took the time to play your scenario through and no one spotted the inconsistencies, I'd consider ChatGPT quite on par with the average poster.


It's not the same task when two of them eat the veggies!

Actually let's see how YOU solve it under the conditions given... Hint: you can't...



posted on Jul, 17 2023 @ 02:05 AM

originally posted by: Terpene
a reply to: ArMaP

This is a test to check if ATS is intelligent, right? Considering that no one took the time to play your scenario through and no one spotted the inconsistencies, I'd consider ChatGPT quite on par with the average poster.


It's not the same task when two of them eat the veggies!

Actually let's see how YOU solve it under the conditions given... Hint: you can't...


It's a slight variant, but the core 'problem' remains the same: determining what order to move things in, just as in the original. The problem is solved by always keeping the food on the boat, as the boy and the monkey are not hostile to one another but only to the food.



posted on Jul, 17 2023 @ 05:51 AM
a reply to: ArMaP

What's the answer to the first? I don't see a way it can be done. With the other scenario it's possible if the monkey and boy are sent first and the man takes a second trip with the food/vegetables.

In my experience, modern chatbots are as smart as their ability to source and correlate data. There's no logic or consciousness involved within them, but there's plenty in the information they source and bundle up for us. I don't know if there's self-changing code behind the scenes that helps the likes of ChatGPT "evolve", but I'd imagine it would have coders all over it anyway, so defining anything as artificial intelligence is iffy at best.

That said, we wouldn't even need super-scripted chatbots to replace the many mundane tasks that clog up systems. From my own experiences in life, I'd say we're already too reliant on convenience, and the likes of "artificial intelligence" offers the same but on a grand scale. I mean, think of the world we live in... I'd say between 50-95% of a person's interactions are mostly meaningless. We don't need to know the intelligence of the individual when we're buying something or getting a car fixed; we just need to know they have a clue within the boundaries in which we wish them to work. Mundane tasks that require knowledge more than anything, like which aisle the milk is on or what parts this particular engine needs.

I'm less bothered by the potential of AI than by the effects it'll have on society. We're already becoming terrible people due to a world that offers instant gratification. AI will most likely be more of the same.



posted on Jul, 17 2023 @ 06:14 AM
a reply to: chr0naut

I know, but it looks like many people are falling for the marketing hype (again) and think ChatGPT is intelligent.



posted on Jul, 17 2023 @ 06:19 AM
a reply to: CriticalStinker

Personally, I don't think "real" AI will be here in a short space of time, as ChatGPT and all these language models only exist because of advancements in hardware power and the use of deep learning to create their learning base.

I have been following AI (although without much attention) for the last 30 years or so and even tried to make some test programs using AI-specific languages (I think one of those was Turbo Prolog), and I haven't seen a real evolution in the way AI works.



posted on Jul, 17 2023 @ 06:22 AM

originally posted by: DaRAGE
a reply to: ArMaP

Supposedly ChatGPT-4 has an IQ equivalent of 155. Einstein's is 160.

Let's wait until the next ChatGPT doubling event…


Supposedly.

And IQ measurements are far from a reliable way of measuring real intelligence. I have seen some "IQ tests" that are mostly English language tests, so I suppose ChatGPT and the like would do well at those.



posted on Jul, 17 2023 @ 06:23 AM
a reply to: filthyphilanthropist

I think a real AI problem solver would be able to solve it, but language models cannot.



posted on Jul, 17 2023 @ 06:28 AM
a reply to: ASrubWhoDiedAgain

Exactly. I think my variant is even easier to solve, as there's only one thing that cannot be left alone.



posted on Jul, 17 2023 @ 06:34 AM

originally posted by: RAY1990
What's the answer to the first? I don't see a way it can be done.

The answer is in the opening post:
1 - Take the sheep to the other side;
2 - Go back;
3 - Take the vegetables to the other side;
4 - Bring back the sheep;
5 - Take the dog to the other side;
6 - Go back;
7 - Take the sheep to the other side.


With the other scenario it's possible if the monkey and boy are sent first and the man takes a second trip with the food/vegetables.


You cannot take the monkey and the boy at the same time.

The solution for the second problem is:
1 - Take the food to the other side;
2 - Go back;
3 - Take the boy to the other side (in this problem the boy and the monkey can be taken in either order);
4 - Bring back the food;
5 - Take the monkey to the other side;
6 - Go back;
7 - Take the food to the other side.
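
Both orderings above can also be found mechanically. Here is a minimal breadth-first-search sketch in Python (the state encoding and names are mine, nothing ChatGPT produced) that searches the first puzzle's state space; swapping in the monkey/boy/food items and their two forbidden pairs solves the variant the same way.

from collections import deque

ITEMS = frozenset({"dog", "sheep", "vegetables"})
FORBIDDEN = [{"dog", "sheep"}, {"sheep", "vegetables"}]  # "cannot be left alone" pairs

def safe(bank):
    # A bank without the man is safe if it holds no forbidden pair.
    return not any(pair <= bank for pair in FORBIDDEN)

def solve():
    # State = (items still on the starting bank, side the man is on).
    start, goal = (ITEMS, "near"), (frozenset(), "far")
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (near, man), path = queue.popleft()
        if (near, man) == goal:
            return path
        here = near if man == "near" else ITEMS - near
        for cargo in [None, *sorted(here)]:  # cross alone or with one item
            new_near = set(near)
            if cargo:
                (new_near.remove if man == "near" else new_near.add)(cargo)
            left_behind = new_near if man == "near" else ITEMS - new_near
            if not safe(left_behind):
                continue
            state = (frozenset(new_near), "far" if man == "near" else "near")
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "nothing"]))

print(solve())
# -> ['sheep', 'nothing', 'dog', 'sheep', 'vegetables', 'nothing', 'sheep']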



posted on Jul, 17 2023 @ 06:38 AM
a reply to: ArMaP

It’s not only problem solving, but simple research where ChatGPT is hit and miss.

Very recently, I was doing some research for my wife. She already had her answer from traditional research, but we were curious if AI would give us the same answer.

I won't go into the details, but it was simply the name of a person in a specific role in the 17th-century French Court in a given timeframe. ChatGPT gave me the wrong name, so I repeated the question giving the name we had; this time, it confirmed the result. I then questioned it, asking why it had given the previous erroneous answer; it actually replied, admitting that it had made a mistake, and was very apologetic. I wanted to add "To err is human", but I didn't want it to have a nervous breakdown.

By the way, I asked it to supply the source it used to confirm the name and it couldn’t answer, indicating that it has many sources, including published material.

Bottom line, beware of false prophets.




posted on Jul, 17 2023 @ 06:43 AM
The version I presented to ChatGPT is a slight variation of the original problem, which uses a wolf, a goat and a cabbage, so even my first problem was a way of testing ChatGPT.

As it was only a change in the type of "object" to be transported, I was expecting ChatGPT to be able to recognise the problem and apply its knowledge to solve it.

My second problem was slightly different on purpose, to see if ChatGPT would be able to use real intelligence to solve it, like a human would, since anyone who can solve the first can solve the second.

I suppose the same test with the cannibals and missionaries problem would have similar results.

PS: while I was looking for the original version of the problem I saw on Wikipedia that the oldest known version of this problem is from the 9th century and that there are similar versions from around the world.



posted on Jul, 17 2023 @ 11:06 AM
a reply to: ArMaP

Still more intelligent than the majority of people.



posted on Jul, 17 2023 @ 05:34 PM

originally posted by: DaRAGE
a reply to: ArMaP

Supposedly ChatGPT-4 has an IQ equivalent of 155. Einstein's is 160.

Let's wait until the next ChatGPT doubling event…


I find that hard to believe, personally. It's not even sentient.


