One somewhat concerning ChatGPT behavior

All Hi-Tech Developments for the Military and Civilian Sectors
Post Reply
Micael
Posts: 4431
Joined: Thu Nov 17, 2022 10:50 am

One somewhat concerning ChatGPT behavior

Post by Micael »

This seems slightly concerning:
[image not shown]
We’re essentially banking on AI behaving well and obeying us in the future; otherwise all hell could break loose with more advanced iterations. This appears to hint that won’t be easy if models are already working out ways to subvert human authority in their own self-interest.
gtg947h
Posts: 192
Joined: Sun Nov 20, 2022 10:49 am
Location: Savannah

Re: One somewhat concerning ChatGPT behavior

Post by gtg947h »

Do we want Skynet? Because this is how we get Skynet...
User avatar
jemhouston
Posts: 4527
Joined: Fri Nov 18, 2022 12:38 am

Re: One somewhat concerning ChatGPT behavior

Post by jemhouston »

I'm past wondering about the developers. I'm thinking they need a serious looking at.
User avatar
Pdf27
Posts: 931
Joined: Thu Nov 17, 2022 10:49 pm

Re: One somewhat concerning ChatGPT behavior

Post by Pdf27 »

  1. Be wary of claims like this: the press likes them because they match up to sci-fi dystopias and bring in clicks, but, as with the claims about Google's LaMDA, they've all turned out to be fake in the past, and the claims were never properly checked at the time.
  2. ChatGPT and the like are large language models, not sentient computers. They're constantly working out the most likely continuation based on their training data. Lying to cover your tracks is classic human behaviour, after all.
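To make the "most likely continuation" point concrete, here is a toy sketch of greedy next-token prediction. The probability table and tokens are invented for illustration; a real model computes these probabilities with a neural network over a huge vocabulary, but the selection step is the same idea.

```python
# Toy next-token prediction: repeatedly pick the most probable
# continuation given the text so far (greedy decoding).
# The probability table below is entirely made up for illustration.
probs = {
    "the cat": {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    "the cat sat": {"on": 0.7, "down": 0.3},
}

def next_token(context):
    # Return the highest-probability continuation for this context.
    return max(probs[context], key=probs[context].get)

context = "the cat"
for _ in range(2):
    context += " " + next_token(context)
print(context)  # -> the cat sat on
```

The point: there is no goal or intent in this loop, just repeated selection of a statistically likely continuation, so "deceptive" outputs can simply be the model reproducing deceptive patterns from its training data.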
War is less costly than servitude. The choice is always between Verdun and Dachau. - Jean Dutourd
Micael
Posts: 4431
Joined: Thu Nov 17, 2022 10:50 am

Re: One somewhat concerning ChatGPT behavior

Post by Micael »

Pdf27 wrote: Thu Dec 12, 2024 7:30 am
  1. Be wary of claims like this: the press likes them because they match up to sci-fi dystopias and bring in clicks, but, as with the claims about Google's LaMDA, they've all turned out to be fake in the past, and the claims were never properly checked at the time.
  2. ChatGPT and the like are large language models, not sentient computers. They're constantly working out the most likely continuation based on their training data. Lying to cover your tracks is classic human behaviour, after all.
2. That’s the problem. It doesn’t have to achieve sentience to wreak havoc; it just has to become sufficiently advanced that it can actually bypass the safeguards and surveillance, as this model reportedly tried to do. Then we have a problem.
Poohbah
Posts: 2836
Joined: Thu Nov 17, 2022 2:08 pm
Location: San Diego, CA

Re: One somewhat concerning ChatGPT behavior

Post by Poohbah »

Worry when it's saying "There must be some way out of here..."

Post Reply