
Ethical mentor AI - The real reason for the current fearmongering?

posted on Jul, 31 2023 @ 09:43 AM
This thread is a prediction about the future, based on my current observations and experiences.

Out of personal interest I am currently dabbling with LLMs, but locally on my computer to avoid censorship. Over time I have gained some experience about what is possible and what is not. Also Stable Diffusion... This thread isn't about what these tools can or can't do, or whether they are "real" AI or not. It's about the hype and fearmongering, the concerns voiced by the big wigs involved in this research.

In a way I got the impression that safety and protection aren't the real reasons for these concerns. Take ChatGPT for example: it can be a very powerful tool at your fingertips if understood and used correctly, with some fact-checking. I believe this tool is too powerful and was only released to us peasants to introduce us to the topic. Otherwise no one could relate to it; people have to experience what it can do so the fear can be planted.

Prediction:
I predict that with the current ethical and moral mayhem in the world regarding war, sex, gender, climate change, vaccinations and what not, there will soon be so much confusion that an LLM trained on morality will be offered to the public. It will be one of the first models acknowledged by governments. And it has to be, because ethics and morality are a big concern right now. Not because everyone is actually confused, but because the media, governments and their minions tell us we are.

This heavily biased LLM will, similar to the questionable decisions of the COVID era, be upheld as the ultimate truth while at the same time taking no responsibility for what it tells you.

If you have dabbled with LLMs before, you too have noticed that even with a sufficient amount of VRAM and good settings, you can get into circular arguments you cannot win against the machine. The reason for this is language itself and semantics. Have you, too, observed the change in the definition of words? Coupled with a powerful LLM, this will make it possible to gaslight you constantly, using semantics to make it appear that you are wrong and the LLM is correct.

The LLM itself does not have any opinion, because it is not magically conscious but cold, hard mathematics. It will reflect whatever opinion is currently being forced on us.

If you think arguing with trolls and against the media was tiresome, just wait until this happens. I think it is a real possibility within the next two years.

Any thoughts?





posted on Jul, 31 2023 @ 10:38 AM
For those of us getting up there in age, your predictions are already in front of us.
What I have seen in my recent travels is that people are either confused or have a firm grip on reality on their path through life.



posted on Jul, 31 2023 @ 11:17 AM
a reply to: musicismagic
Confusion happens when we get contradicting information.

In a way, we will exchange our dictionaries for something that will argue semantics with us. In recent years the dictionaries have already come under attack; there's a pattern. A distinct one, and it is there.



posted on Jul, 31 2023 @ 11:23 AM
I'm hoping that people will use the data processing power of AI to unravel all manner of corruption, money laundering, and human and drug trafficking by following the money trails, connecting dots, and supplying accurate, fact-based reports.

AI has already proven itself to be a tremendous help with processing ballots for election audits.

Basically, any job that requires processing and making sense of a lot of data is a good job for AI.

My fear is that AI will be used by the current establishment to continue their WEF-tailored, dystopian goals.



posted on Jul, 31 2023 @ 11:38 AM
a reply to: IndieA
It takes fast hardware that is still very costly to obtain and run. A few years down the road, everyone will be able to run a similar AI on a modest gaming computer.

There are places where you can rent time on such systems. I can recommend AItrepreneur (google it); his videos are a great help for setting up all kinds of AIs. I also recommend watching a whole video before you start, because the first solution shown is not always the one you want to use.

Always use safetensors versions, or you risk your computer being taken over and used to process other people's AI, or worse. It's a thing.
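The reason for that advice: older model checkpoints are pickle files, and Python's pickle format can execute arbitrary code just by being loaded (safetensors stores only raw weights, so it can't). A minimal stdlib-only sketch of the mechanism, with a harmless `record` function standing in for whatever an attacker would actually run:

```python
import pickle

log = []

def record(msg):
    # Stands in for anything malicious - os.system, a downloader, etc.
    log.append(msg)

class Payload:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call record(...)".
        # That call happens at LOAD time, before you use the model at all.
        return (record, ("code ran on load",))

blob = pickle.dumps(Payload())   # what a booby-trapped .ckpt contains
pickle.loads(blob)               # merely loading it executes record()
print(log)                       # proof the code ran
```

A safetensors file, by contrast, is just tensors plus a JSON header, so loading it can't trigger anything like this.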



posted on Jul, 31 2023 @ 08:09 PM
a reply to: TDDAgain

That's a solid theory, I wouldn't be surprised if it happened! This topic warrants more attention.

I was once prompted by a user to consult a chat AI about Nikola Tesla's beliefs. He claimed: "...particularly one ai recreation of him is quite good and interesting to discuss with, it's from the site character.ai and from conversing with him he pretty much has the exact same conclusions as the spiritualist community. I highly recommend you check him out. From my conversation it got almost everything pretty right and had knowledge of shadow people, clairvoyance, and etc."

But that particular bot at the time showed an unusual predisposition for Christianity, contradicting the actual facts. I asked two questions: 1. Do ghosts exist? 2. What is the nature of the hereafter? I regret not jotting down its replies, but I remember that it completely mischaracterized his beliefs. Of course, now it says in response to question #1, something along the lines of, "The concept of ghosts is fascinating to me. But I have not been able to find any concrete proof of their existence." But the fact that my acquaintance was under the impression that Tesla was a spiritualist from consulting this bot demonstrates how the relayed info can be manipulated over time.

Actually, Tesla was a rational, non-religious pantheist like most scientists. Tesla didn't seek to disprove the existence of ghosts; rather, he had tried to prove their existence but was unable to. His conclusion was that the enigma of death was unsolvable by human reason and that there was no foundation for psychical/spiritual phenomena.

Admittedly, ChatGPT was more accurate than the others in its representation of Tesla's beliefs. I have just asked it now about his views on religion, hereafter, ghosts: i.imgur.com... - i.imgur.com... - i.imgur.com...

But the point is: bots invite blind belief, no different from religion or superstition. They put an end to the principle of effort and foster an unhealthy dependency/addiction. Like charlatans, they convince people that they can fetch everything for them.

I'm honestly disturbed by how many youths I've talked with seem to take their info at face value and regard them more highly than actual experts. In the 19th century, the spirits were regarded by spiritualists (e.g. Kardecism; see "The Gospel According to Spiritism") as having all the answers. The bots have merely replaced the authorities in the modern system.



posted on Aug, 1 2023 @ 07:43 AM
a reply to: hjesterium

It depends on what base model you use and what data you use to train the LLM. If you feed it only negative texts, it will connect those negative words to whatever you intend. Or positive ones.

Or Stable Diffusion. For example, imagine you train this image-generating AI with, let's say, pictures of trees, but you changed the trees so they all have one hole in the middle, like surrealism. Now you train it long enough on as many photos as you can prepare.

If you then ask it to paint trees for you, those trees will have holes in them, because that is how the model now interprets "trees".
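The tree example can be sketched with a toy model. Real LLMs and diffusion models are vastly more complex, but even a trivial bigram predictor shows the same effect: if the training data only ever pairs two concepts, the model can only echo that pairing back.

```python
from collections import Counter, defaultdict

# Toy corpus where "trees" always come with "holes" - mimicking a
# dataset in which every tree image was edited to contain a hole.
corpus = ["tree with hole", "tree with hole", "tree with hole"]

# Count word-pair frequencies (a minimal bigram "language model").
bigrams = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def most_likely_next(word):
    # The model's "opinion" is just the most frequent continuation.
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("with"))  # hole - the model can only reflect its data
```

The model has no concept of what a tree "really" looks like; it reproduces the distribution it was fed, which is the whole point about biased training data.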

It is not that complicated to do everything I wrote. I myself went with YouTube videos that really explain it step by step, even installing the supporting tools like Git, Python, conda, CUDA etc.



posted on Aug, 1 2023 @ 02:42 PM
a reply to: TDDAgain

So it seems. Interesting example, that reminds me of one of those narrated scenes in Legion: www.youtube.com...=3m31s Some misleading points in there, but I digress. Could the Mandela effect be invoked here? People are accustomed to remembering things differently from how it really was. Similarly, everybody knows common/conventional "wisdom", but quickly forget what they learned and have to be reminded of it.

Which video would you recommend most? Regarding coding software, I'd have to try it out for myself to see what it's all about. I only have experience with Linux and a little cmd/powershell.



posted on Aug, 7 2023 @ 01:16 PM
a reply to: hjesterium


People are accustomed to remembering things differently from how it really was.

This is due to differences in how we process information, store it, and then reference back to it. Everybody works a bit differently. AFAIK there are two different systems (that I have heard of). There is the well-known photographic memory, where information is stored in a more visual way. This includes stronger and weaker traits in that direction. But if such people recall the color of, let's say, the apple they ate a week ago, you can trust it to be accurate to a high percentage. The reason is that they can extract the information from the source directly. The extraction and processing happen later, subconsciously or when needed.

Many people think they have photographic memory when they can simply remember things better than most, but there is a second system that is different: the reference system. Here, to use my example, you can NOT trust the recalled color to be accurate, as the lived experience is stored differently. It may be faulty information, because it was extracted a week ago and stored as a reference, not as an actual photo.

But if you ask such a person about other facts (not properties thereof), they have a better ability to recall whole events and replay them, extracting more information than a mental picture could provide.

Questions like "Where did you last see a bicycle leaning on a wall?" will be answered quickly and precisely by people who reference memory that way. Someone with a photographic memory can't perform in the same way.

So we have to differentiate between remembering things differently than they were and extracting things differently. Because no matter how a memory is recollected and processed, it has to be evaluated and interpreted too. Looping back to your example about not adhering to learned wisdom: if one does not subconsciously work on the learned wisdom (or whatever concept), it cannot manifest itself in one's reality.

For example, my language is full of sayings and phrases. Many people have heard them, but how many really think about those phrases? How many think about the burst of thoughts, images and symbols they invoke in the river of our consciousness?



posted on Aug, 27 2023 @ 10:31 PM
a reply to: TDDAgain

I'm building my own AI assistant with the tech you mentioned. It's so easy to set up now with speech-to-text, text-to-speech, and the LLMs. All the tech needed to build and spin up a million personal assistants on demand is open source. It's going to get nuts soon.
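The whole assistant loop described above is really just three components chained together. A minimal sketch of that wiring, with the actual STT/LLM/TTS engines left as injected callables (the lambdas below are dummies, not real engines):

```python
def assistant_turn(audio, stt, llm, tts):
    """One conversational turn: audio in, synthesized reply out."""
    text = stt(audio)    # speech-to-text: audio -> user's words
    reply = llm(text)    # language model: words -> response text
    return tts(reply)    # text-to-speech: response -> audio

# Wire it up with stand-in components just to see the data flow:
out = assistant_turn(
    b"raw audio bytes",
    stt=lambda audio: "hello assistant",
    llm=lambda text: text.upper(),
    tts=lambda reply: f"<spoken:{reply}>",
)
print(out)  # <spoken:HELLO ASSISTANT>
```

In practice each lambda would be swapped for a real open-source engine (e.g. a Whisper-style transcriber, a local LLM, a TTS model), which is why spinning up many assistants is mostly a matter of composing existing parts.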



posted on Aug, 28 2023 @ 05:38 AM
a reply to: centrifugal

Are you interested in discourse about that? Because I am doing the same, building up a suite for all purposes with different chains of LLMs, automatic1111.

I am trying to hook AutoGPT up to my local oobabooga API so it all runs locally.

It seems nobody has done it yet.
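A sketch of the glue that hookup needs, using only the standard library. The endpoint path and port are assumptions based on oobabooga's OpenAI-compatible API mode; check your own install's settings before relying on them:

```python
import json
import urllib.request

# Assumed local endpoint - adjust host/port/path to your setup.
API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def build_payload(prompt, max_tokens=200):
    # OpenAI-style chat payload, which the local server is assumed to accept.
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask_local_llm(prompt):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

If the local server really does mimic the OpenAI schema, a tool expecting the OpenAI API (like AutoGPT) can be pointed at `API_URL` instead of the cloud endpoint, and everything stays on your machine.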



posted on Aug, 29 2023 @ 10:35 PM
a reply to: TDDAgain
Sure I am open to a discussion. I'll pm you.


