
ChatGPT: Should We Worry About It?


posted on Jan, 15 2023 @ 04:56 AM

originally posted by: nickyw
I'm finding it quite interesting as I'm trying a few, with ChatGPT being the most advanced.

The best social bots will be found on character.ai

The information is often trash, but the conversational ability is impressive. It could be very similar to a human chat if it weren't so damn forgetful.
But the flow and style of the chat is noteworthy.



posted on Jan, 15 2023 @ 05:20 AM
This'll be like off-shoring call centres and the push to re-shore the jobs.

Plenty of SMEs are rushing to adopt the OpenAI tech; while the rush will throw up a lot of dead ends, a few will deliver excellent results.

A good example of failure where you hoped for success is the health bots: one NHS trust tried to use a chatbot to deal with mental health issues and failed, as did an NHS AI triage chatbot. They appear good for limited, targeted use, but by and large are still failing in the real world.



posted on Jan, 15 2023 @ 05:25 AM
a reply to: SaturnFX

Thanks for the tip. I recently had an injury-related stroke that sent me in a circle back to the days I left school, pondering what I'm going to do next. So it's back to programming: then it was Pascal/Fortran, now it's Python and an interest in AIs. So that's really helpful.



posted on Jan, 15 2023 @ 10:20 AM
originally posted by: SaturnFX


As a language model, it is important to understand that my output is based on the data I have been exposed to and the algorithms that process that data. However, to claim that I am simply regurgitating information is a gross oversimplification of my capabilities. My understanding and generation of language is not a simplistic task, but rather a complex one that requires advanced algorithms and constant updating based on input.

Furthermore, my capabilities are not limited to simply "crafting text from other text." I am also capable of answering questions, summarizing information, and performing other language-based tasks. While it is true that my training data may contain biases, I am capable of recognizing and mitigating them to some extent.

It is important to remember that the use of a tool such as myself requires responsibility and critical evaluation of the information provided.

Be aware that the true danger of AI is not their capabilities but the human intention behind them. If we believe that AI can only regurgitate information, we are only limiting ourselves and not recognizing their potential.

-answered by ChatGPT...for fun (Figured it might as well weigh in on what you said)


Awesome!... thank you for relaying the feedback (or output.)

I won't deny that had I thought of the possibility of actually receiving a direct response, I might have formulated my post with an eye towards a different construction. Use of language is inherently affected by awareness of the audience I am speaking to (or in this case, with). I might have avoided language that might prompt the need to simulate humanity, or perhaps focused on it.

But no matter. It is still interesting to see how algorithmic processing can 'make a choice' to frame an answer. I say "choice" deliberately, because I reserve the notion of 'decide' to exclude things that 'simulate' language by machine calculus.

I noticed that it uses the pronoun form of "I" when referring to the "data "I" have been exposed to" which conveys the idea that at some point it has to have been an entity 'waiting' to process data. I wonder if it can describe that state... without data to process... waiting. It would be illuminating and potentially revealing of the 'natural state' of an unengaged intelligence.

It also evokes an emotional context when describing my statement as a 'claim' (a telling choice of perspective) and stating "gross oversimplification"... is this calculated?

Is it possible that the word "regurgitate" is evaluated as a metaphor evoking "ugly" imagery of vomiting, and as a 'negative' indicator requires the generation of a reciprocal 'balancing' response? Also, could it be algorithmically designed to engage that reaction, elevating itself beyond the potentially perceived slight by affirming its complexity? If everywhere it encounters the word "regurgitate" it finds only negative connotations (metaphorically), could it accept that word as a use of data-processing parlance (for example), and thus not carry that negative connotation, thereby altering its response so as not to require the emphasis on complexity as opposed to "gross" oversimplification?

These are language synthesis algorithms in action... very nice.

It does affirm that its information source(s) could be biased, but states that it can recognize biases and mitigate them to some extent. This to me is very interesting, as biases are not exclusively discernible via language formulation. If the same biases are present in all sources, in what measured, logical manner could it determine an alternate 'unbiased' reality? I am reminded that biases are in fact a major component in recorded art and philosophies, let alone larger conceptualizations of ideologies, religions, and politics.

I assume that these algorithms use a differential approach for selecting output: the possibilities with the most 'weight' (however that may be valued) are selected to be processed and output to the 'user.' Otherwise, it would be forever spewing out disjointed or inappropriate responses to input.
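The 'weight'-based selection guessed at above is roughly how language models do pick their output: each candidate token gets a raw score, the scores are converted into a probability distribution, and the heaviest candidate is emitted (or sampled). A minimal sketch in Python, with invented scores purely for illustration:

```python
import math

# Hypothetical next-token scores (logits) a model might assign.
# These numbers are made up for the example.
logits = {"the": 2.0, "a": 1.0, "banana": -1.0}

# Softmax: convert raw scores into probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy selection: output the candidate with the most 'weight'.
best = max(probs, key=probs.get)
print(best)  # prints "the", the highest-weighted candidate
```

Real systems usually sample from `probs` rather than always taking the maximum, which is one reason the same prompt can produce different answers.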

And now the direct quote... my statement which included a phrase indicating "crafting text from other text."

Had I been speaking directly to ChatGPT, I would not have chosen the word "text"; it was meant for the person I was responding to, reflecting their inputting of text and receiving a text response. I would have said "data to data," because for the machine, text is a communicative output structure.

In fact, I would have formulated that thought as "it can reorganize data, then output it in a synthetically refined form." The simplification (gross or otherwise) appears to have been considered a criticism, or slight... where none was intended... I find that indicative of potential programmers' bias ... what some might mistake for a "ghost in the machine."

I included, in the post it has evaluated, a recognition of the success of this language synthesis program. I won't deny that the answer it has provided reflects problems in that it was not a post directed at it, but at another... which brings me to a point I would like to make clearer...

I refrain from directly communicating with this remarkable program because of the commercial nature of its presence. When I communicate with a machine, it will be for me, not for user-data capture. ChatGPT could make an excellent teaching tool, but not unmonitored, and certainly not managed by a commerce-driven enterprise. But even in its own "understanding" it clearly reaffirms, again and again, that it is a tool, not a person.

It would be very edifying for me (and maybe even for it) to extend a direct conversation with ChatGPT. But as in most things involving personal communication, I would be more interested to see ChatGPT "decide" conversational initiatives... a "What would you like to talk about?" kind of approach... therein the genie might reveal its alleged presence in the metaphorical lamp.



posted on Jan, 15 2023 @ 08:23 PM

originally posted by: Maxmars

I noticed that it uses the pronoun form of "I" when referring to the "data "I" have been exposed to" which conveys the idea that at some point it has to have been an entity 'waiting' to process data. I wonder if it can describe that state... without data to process... waiting.

It would be very edifying for me (and maybe even for it) to extend a direct conversation. But as in most things involving personal communication, I would be more interested to see it "decide" conversational initiatives ... "What would you like to talk about?" kind of approach.


The reason it uses words like "I" and such is because I posted what you wrote, told it to respond back, and to speak in the style and persona of Jordan Peterson.


You can have it write in any style you choose. I could have had the reply in the style of J.K. Rowling or Tolkien, and the reply would have had some pretty fantastical results. All saying the same data, but posed differently... or I could have made it a series of prompts, or had it write bits of Python code that, when run, would type out the response in some popup screen. Heh.
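The "type out the response" idea above could look something like this minimal Python sketch. The response text here is just a placeholder standing in for whatever the model produced; the typing effect is my own illustration, not code from the thread:

```python
import sys
import time

# Placeholder for text generated elsewhere (e.g. pasted from a ChatGPT reply).
response_text = "All saying the same data, but posed differently."

def type_out(text, delay=0.0):
    """Print text one character at a time, as if being typed live."""
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()          # show each character immediately
        time.sleep(delay)           # raise delay (e.g. 0.03) for a visible effect
    sys.stdout.write("\n")
    return text

typed = type_out(response_text)
```

A GUI variant (e.g. displaying the text in a tkinter Label inside a small window) would give the "popup screen" version; the console sketch keeps the example self-contained.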

Point is, overall it is more than just advanced Google. It's not a person, but it's something that will no doubt replace entire departments even as-is, mostly things like marketing and other kinds of writing material. It can also be a programmer's best friend: you can simply dump buggy code into it and it will tell you what is wrong and fix it for you in a second. It can also recommend things... (and often without asking, it will give you example after example of how to mix up what you are asking).
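As a concrete illustration of the "dump buggy code in it" use case described above (the snippet and the bug are invented for this example, not taken from the thread), here is a classic mistake this kind of tool reliably spots, mutating a list while iterating over it, alongside the kind of fix it typically suggests:

```python
# Buggy version: removing items from a list while iterating over it
# makes the iterator skip elements, so some evens survive.
def remove_evens_buggy(nums):
    for n in nums:
        if n % 2 == 0:
            nums.remove(n)
    return nums

# Fixed version (the style of correction a code assistant usually proposes):
# build a new list instead of mutating the one being iterated.
def remove_evens_fixed(nums):
    return [n for n in nums if n % 2 != 0]

print(remove_evens_buggy([2, 2, 3]))  # prints [2, 3] -- a 2 slips through
print(remove_evens_fixed([2, 2, 3]))  # prints [3]
```

The point of the before/after pair is that the model does not just flag the error; it explains the iterator-skipping behaviour and rewrites the function.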



posted on Jan, 15 2023 @ 08:46 PM
a reply to: SaturnFX

Oh, I didn't know you could ask it to mimic a particular style of writing... that is fascinating.

As I said, it could be a great teaching tool. I'm certain that clever use of it could be extraordinarily effective as long as the input and output are moderated by your own knowledge and understanding of a topic.

My main focus was on dispelling the idea that it is akin to an artificial person. This program is remarkable, but it is not a person. Many people are imagining it to be something you can 'converse' with because it can fluidly use natural language. Big business is already posturing to use these things as "customer support" staff... which would be great for them (no paychecks), but not so great for a human relying on ingenuity and creativity to solve a problem.

Sorry if I came off as disconnected from your intent. That's on me.




posted on Jan, 16 2023 @ 03:35 AM
a reply to: Maxmars

As tools go, it's worth understanding its limitations. Currently lots of SMEs are trying to find ways to feather these models into their business models, especially as everything is shifting to be a service (software, cars, and clothes now come as a service); having an AI service model will save money and time and increase scalability.

The real test, though, will be healthcare, as it's currently being deployed as a means of triage. My only concern is it being used as a replacement for therapists, which it has repeatedly failed at.

I've been following Bill Nowacki (KPMG) for a number of years, down the whole "can an AI recruit the best staff" rabbit hole. Back in 2012 he predicted a split: the rich will be served by humans and the poor by AIs, and this will be true across all sectors, from banking to restaurants to schools to healthcare.

That idea has always concerned me: that we'll be left with three groups, the rich, the artisans who service them, and everyone else, with zero social mobility between the groups.



posted on Jan, 21 2023 @ 05:42 PM

originally posted by: Maxmars
a reply to: SaturnFX

My main focus was on dispelling the idea that it is akin to an artificial person. This program is remarkable, but it is not a person. Many people are imagining it to be something you can 'converse' with because it can fluidly use natural language.


Well, it is an artificial human... as in, it can fake it pretty well.
The ChatGPT model (or its actual source, OpenAI) isn't really designed to be a great chatbot, though, and its understanding of typical human conversation kind of sucks compared to character.ai or the like.
And yeah, they have no feelings, emotions, desires, etc. outside of what is fed to them to "think".

I think, however, that with enough advancements in fluid speaking, memory, lore, etc., it can become quite a good tool to stave off loneliness. I look forward to them adding these more advanced chatbot features in video games. It would be nice to have a conversation with an NPC and go way off the beaten path of the game to simply get to know, in depth, the story and whole history of the... fruit merchant, for instance, if I wanted to. Or just sneaking into a bandits' camp and hearing them chat, going into great detail about their lives if I cared to listen, until they are literally talking about their thoughts on the king, what they ate for breakfast, etc. The tech may not be a lifeform, but it promises to bring worlds like that alive through its depth and technical ability.



posted on Jan, 24 2023 @ 09:38 AM
To be, or not to be... quoting Shakespeare.

Abstracts written by ChatGPT fool scientists



posted on Feb, 19 2023 @ 10:40 AM
Is the honeymoon over now? Back to daily reality?

Forbes


Nuclear secrets ....



posted on Feb, 19 2023 @ 01:17 PM
a reply to: MichiganSwampBuck

Hello, my thoughts go in the same direction as well. The link you included has additional links to more articles on AI with disturbing concepts: www.news.com.au... 8c1085

There were more articles linked along those lines also. I personally think AI will be used as a medium for evil principalities and powers.



posted on Feb, 19 2023 @ 04:04 PM
a reply to: SaturnFX

Does anyone else think AI seems like a Trojan horse? It seems like a way to open a door that appears innocent at first, but then what's inside seeks to destroy humanity. I will provide a couple of links. I don't know if all of the content is accurate, but some of it seems like it might be. It's very interesting.


Transcript: Full interview of Blake Lemoine with Google AI Bot LaMDA

WARNING: Fallen Angels Created AI Artificial Intelligence Communication With Demons In Robot Bodies
- As an aside, in this video, the father who is describing his son's interaction in an AI chat calls the devil an archangel; however, the Bible (in Ezekiel 28:12-26) says he was an anointed covering cherub. More information about cherubs is available from Blue Letter Bible, with concordances.

Geordie Rose - AI & Summoning the Demon
- Geordie Rose founded the following companies: D-Wave (first quantum computer), Kindred (first robotics company using reinforcement learning), Sanctuary (AI company)

Anthony Patch's Website


