Last March, OpenAI released a new version of ChatGPT, known as ChatGPT-4.0. Watching the videos displaying its abilities, everyone was dazzled by the improvements from the last installment. However, dazzled doesn’t always mean impressed. Scientists and ethics groups found the platform alarming and dangerous. Is there a justifiable fear of new technology, or are their concerns legit? Let’s discuss their point of view in this article.
What Makes It Different from ChatGPT 3.5?
- It reasons better than ChatGPT 4.0, as developers trained it not to get easily fooled after users used various prompts to trick GPT-3.5.
- It offers more accurate, less wordy responses than its predecessor.
- Unlike GPT-3.5, the new bot is multimodal, meaning it understands different modes of information, including images. It can read images and describe what’s in them, which could be helpful for people with vision disabilities.
- ChatGPT 3.5 could process around 3k words, while the new GPT 4.0 can do more than 25k. This could help in workplaces that require processing large documents.
Why ChatGPT 4.0 is Deemed a Threat to Society
All the points we discussed are positive and show GPT-4 in a good light. However, ethics groups and scientists still have concerns that a regular user might overlook.
A lot of Question Marks
Scientists have many unanswered questions regarding how ChatGPT 4.0 got developed and trained. A lot of secrecy surrounds the new language model, which concerns them. They believe we all have the right to know how OpenAI made the infamous chatbot.
Experts think that the secrecy surrounding its training methods will set a negative precedent in the future of AI development. Sasha Luccioni, a research scientist specializing in climate, stated that the chatbot is a dead-end for scientists to replicate or improve because it’s a closed-source model.
Moreover, law professors argue that the bot may be trained on data fundamentally derived from community resources that should be constrained and used reasonably, as it should be available to everyone to know about, which is ignored by OpenAI, which charges a fee per user.
“A Risk of Privacy and Public Safety”
The Center for AI and Digital Policy (CAIDP), an AI ethics group, is a prominent organization that has been vocal about its concerns. The members claim GPT-4 is “biased, deceptive, and a risk to privacy and public safety.” They add that it fails to meet the standards for AI, which are to be “transparent, explainable, fair, and empirically sound while fostering accountability.”
The organization filed a complaint with the Federal Trade Commission (FTC) to stop OpenAI from resuming their work on ChatGPT because, to them, the bot violated the FTC’s stated guidance about AI systems. Just a day before, 500 top technologists and AI experts, including Elon Musk, signed an open letter demanding the immediate pause of advanced AI systems.
The letter states the belief that, because of OpenAI’s endeavors, people rush to create new AI systems without much planning or management that should be practiced. They claim that such life-changing events should be handled with care and appropriate resources.
‘Red Teaming’
Red teaming is a test where researchers try to get a harmful output out of GPT-4 to determine if it threatens society. The Red teamers would attempt to get the AI chatbot to give biased answers, generate hateful propaganda, and take deceptive actions, just for AI to test its capabilities and how people might misuse it. The purpose is to alter the model before it causes issues when released to the real world.
Regardless of the results, it’s a good endeavor on Open AI’s behalf to try and reduce the damage that could happen with its AI. And it sets a positive precedent that other AI developers should follow.
Red teaming researchers are experts in a field in most cases. For example, Open AI paid Andrew White, a chemical engineer at the University of Rochester, New York to do red teaming for six months. His findings were that Chat-GPT 4.0 isn’t impressive alone, as it made a few mistakes, but paired up with internet tools, it’s a game-changer. The question here is, can AI teach someone to create harmful chemicals?
There’s no right answer. And in all cases, red teaming has proven insufficient in detecting harmful issues, as one of the red teamers could still get conspiracy propaganda and scam emails from GPT-4.
Fake Facts
GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts. As stated on The Official OpenAI Website.
Hallucinations here refer to fake news that ChatGPT and ChatGPT 4.0 can compose. Seeing that the company itself has acknowledged the problem, it seems noticeable. It’s easy to see how it poses a threat with people relying on the internet for news these days.
Hallucinations or false information can be extremely harmful to the public, even going so far as to influence politics. Journalists have the right to worry that an AI chatbot can mimic the human writing style. It can write about anything, whether hallucinating or influenced by malicious actors as well.
The good news is that while scientists fret that they can’t make changes to the model because they don’t know how it got trained, the developers working at OpenAI use their knowledge to work on these limitations. GPT-4 is already doing better regarding telling false news from facts. Some argue that it can help educate people on the dangers of incorrect information and how to stop or combat it.
Final Words
As cliché as it sounds, ChatGPT-4 is a double-edged weapon. It can be helpful in many areas, making mundane tasks less tedious and serving as a virtual assistant. It has many applications that developers made with good intentions, like aiding people visually and cutting work by half, enhancing productivity.
Yes, OpenAI should inform people about the dangers of misusing the chatbot and spreading false information. And they should also warn others of hallucinations and miscalculations. But no, that wouldn’t stop malicious actors from taking advantage of the chatbot for whatever reasons. Not to say that OpenAI should stop operating, but we believe there should be constraints and restrictions to prevent any real damage.
With the nandbox native no-code app builder, you can build and add a customizable chatbot to your app. Try it now!