This seems slightly concerning:
We’re essentially banking on AI behaving well and obeying us in the future, otherwise all hell could break loose with more advanced iterations. Reports like this hint that it won't be easy if models are already finding ways to subvert human oversight in their own self-interest.
One somewhat concerning ChatGPT behavior
Re: One somewhat concerning ChatGPT behavior
Do we want Skynet? Because this is how we get Skynet...
- jemhouston
- Posts: 4527
- Joined: Fri Nov 18, 2022 12:38 am
Re: One somewhat concerning ChatGPT behavior
I'm past wondering about the developers. I'm thinking they need a serious looking at.
Re: One somewhat concerning ChatGPT behavior
- Be wary of claims like this - the press likes them because they match sci-fi dystopias and bring in clicks, but similar claims (like the one about Google's LaMDA being sentient) have all turned out to be false in the past, and they were never properly checked at the time.
- ChatGPT and the like are large language models, not sentient computers. That means they're constantly working out the most likely next output based on their training data. Lying to cover your tracks is classic human behaviour, after all.
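For what "most likely next output" means in practice, here's a toy sketch. The vocabulary and probabilities below are invented for illustration; a real model scores tens of thousands of tokens per step, but the selection logic is the same idea:

```python
# Toy illustration of next-token prediction: a language model assigns a
# probability to every candidate next token, and the decoder picks from
# the most likely ones. These tokens and probabilities are made up.
probs = {
    "the": 0.41,
    "a": 0.22,
    "my": 0.09,
    "deleted": 0.02,  # unlikely continuations still get some probability mass
}

# Greedy decoding: always take the single most probable token.
next_token = max(probs, key=probs.get)
print(next_token)
```

Nothing in that loop requires the model to "want" anything; if deceptive-sounding text was a likely continuation in the training data, it can come out as a likely continuation here too.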
War is less costly than servitude. The choice is always between Verdun and Dachau. - Jean Dutourd
Re: One somewhat concerning ChatGPT behavior
Pdf27 wrote: ↑Thu Dec 12, 2024 7:30 am
- Be wary of claims like this - the press likes them because they match sci-fi dystopias and bring in clicks, but similar claims (like the one about Google's LaMDA being sentient) have all turned out to be false in the past, and they were never properly checked at the time.
- ChatGPT and the like are large language models, not sentient computers. That means they're constantly working out the most likely next output based on their training data. Lying to cover your tracks is classic human behaviour, after all.
Re point 2: that's the problem. It doesn't have to achieve sentience to wreak havoc; it just has to become sufficiently advanced that it can actually bypass safeguards and surveillance the way this model reportedly tried to do. Then we have a problem.
Re: One somewhat concerning ChatGPT behavior
Worry when it's saying "There must be some way out of here..."