ChatGPT: more artificial than intelligent

Amid what can already be called the “ChatGPT phenomenon”, it has become fashionable to subject this long-suffering artificial “intelligence” to all manner of torture and stress tests in order to find out just how intelligent it is, or is not.

Abstruse mathematical problems of virtually unsolvable difficulty have become a favorite instrument for this rigorous examination.

We ourselves succumbed to temptation, asking it to prove the twin prime conjecture and Goldbach’s conjecture, stated by the German mathematician Christian Goldbach (1690-1764) in 1742. Unsurprisingly, it failed to do either.
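Goldbach’s conjecture is easy to state and, for small numbers, easy to check by brute force, even though a general proof has resisted every attempt since 1742. As an illustration of what such a mechanical check looks like (the function names here are our own, not part of the exchange with ChatGPT), a minimal Python sketch:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n: int):
    """Return a pair of primes summing to the even number n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 up to 10,000 has at least one Goldbach pair.
assert all(goldbach_pair(n) for n in range(4, 10_001, 2))
print(goldbach_pair(28))  # → (5, 23)
```

Checks like this confirm the conjecture up to any bound we care to test, but verification is not proof, and neither we nor ChatGPT can supply the latter.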

Similarly, it repeatedly failed on a question as simple as: “Is it true that between any two prime numbers there is an even number?” This confirmed to us beyond doubt that mathematical reasoning is not exactly the forte of its architecture.
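The question is less trivial than it looks. For any two distinct primes greater than or equal to 3, both are odd, so the even successor of the smaller one lies strictly between them; the single exception is the pair (2, 3), which has no integer between its members at all. A brute-force check in Python (a sketch of our own, not part of the original exchange):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def even_between(p: int, q: int) -> bool:
    """True if some even integer lies strictly between p and q."""
    return any(k % 2 == 0 for k in range(min(p, q) + 1, max(p, q)))

primes = [n for n in range(2, 200) if is_prime(n)]
failures = [(p, q) for p in primes for q in primes
            if p < q and not even_between(p, q)]
print(failures)  # → [(2, 3)]
```

Enumerating all prime pairs below 200 turns up exactly one counterexample, (2, 3), so the honest answer is “yes, except for the pair 2 and 3”.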

ChatGPT passes the Turing test

If we stick to the conventions of what an efficient artificial intelligence should be, as stated in the classic Turing test, it would be enough for the algorithm to respond meaningfully to whatever question it is asked to be considered “intelligent”. And that test, at least in principle, ChatGPT passes with flying colors.

In fact, when we asked it about its sexual identity, it gave us an answer that Alan Turing (1912-1954) himself would have applauded for its sincerity, clarity and rigor:

“As an OpenAI-trained AI language model, I do not have a personal identity or sexual orientation. I am a computer program designed to answer questions and provide objective and neutral information. I have no emotions, desires or personal preferences. My goal is to answer your questions as accurately and helpfully as possible.”

The underlying problem lies in what we would consider “intelligent” in properly human terms. It is on this evanescent terrain that ChatGPT, like Turing’s criterion and the very concept of “artificial intelligence”, begins to slide down a very slippery slope.

To be or to appear intelligent?

This is an old intellectual problem, that of the strong-AI proposal, which has afflicted this field of research since its very origins and which, even today, unfounded enthusiasm aside, remains insurmountable.

After all, an algorithm is still a human product that processes data in a set order to deliver a result predetermined by its design. This is as true of a pocket calculator as of the most advanced computer. It is, in short, a human artifact, managed by humans and designed for specifically human purposes.

In reality, no artificial intelligence is intelligent the way people are (intelligent in a self-conscious sense): it processes little environmental information (a sensory-perceptual limit), it learns nothing in an absolute sense (it does not know that it knows), and it cannot generate new knowledge from what it stores (it creates no new ideas because it is unaware of having any).

Instead, imitating these processes, it limits itself to incorporating new information into its database over the course of its limited interactions, so that it can retrieve it later with better or worse luck. The question, then, is whether the imitative effort is really necessary beyond mere advertising purposes.

A data management machine

ChatGPT itself, when questioned, makes the same mistake as those who have labored to build it: believing that handling data effectively is the very basis of any knowledge that could, in any sense, be considered intelligent:

“The term ‘intelligence’ refers to a system’s ability to perform tasks that require human intelligence, such as understanding natural language, answering complex questions, making decisions, and solving problems. Although as an OpenAI-trained AI language model, I have no awareness or ability to autonomously create knowledge, my training allows me to answer questions and provide accurate and useful information, which is one way to demonstrate artificial intelligence.”

Such an answer is relevant because it offers the very template of what ChatGPT is: an automated machine for searching, accumulating and managing data (calling this process “machine learning” sounds overly optimistic). Nothing more, but nothing less either. Consequently, it is completely pointless to ask it to solve problems that, in practice, fall outside its data bank, or to produce new or at least original ideas and knowledge. Creative reasoning, the kind that takes the form of hypotheses and intentional questions motivated by conscious decisions, simply does not exist.

Let’s talk about something…

However, to check to what extent the program was capable of interacting with us as an equal, we engaged it in a lively dialogue that proved extraordinarily enlightening.

Thus, given its insistence that it “solved problems”, we decided to ask it what kind of problems it had solved:

“As an OpenAI-trained AI language model, I have been trained to answer questions and provide accurate and useful information on a wide variety of topics, including history, science, technology, culture, and more. Through my training, I have learned to analyze and generate text in a variety of contexts and have demonstrated the ability to answer complex questions and provide accurate and useful information. However, it is important to note that my ability is limited by my prior training and knowledge, and there may be certain problems that I cannot effectively solve.”

After a trial-and-error phase (the algorithm needs to learn about the user in order to refine its answers), and after our insistence that offering data was not the same as solving problems, and that it had no way of verifying whether the information it offered was useful to the user, ChatGPT’s response was flawless:

“Although I do not have the ability to directly perceive whether the information I provide is useful or not, my training is based on large amounts of text from the web and other online resources, and my ability to answer questions and provide accurate and useful information has been evaluated and refined over time. (…) I apologize for the contradiction in my previous answer. As an AI language model, I do not have the ability to directly perceive whether or not the information I provide is useful to a particular user. My ability to answer questions and provide information is based on my prior training and knowledge, and there may be times when the information I provide is not relevant or useful to a particular user.”

The limitations of ChatGPT

Throughout our chat, ChatGPT was extremely polite and diligent, but it did not seem to solve any problems. Its limitations were plain:

  • It was unable to go beyond the mere correlation of data it already held.
  • It did not contribute a single theory or piece of knowledge that could be considered original.
  • It was wrong on countless occasions.
  • It admitted to a complete lack of self-awareness, and accepted all our corrections, with a total absence of criticism and self-criticism, emotions, feelings, cognitions or any other variable of that nature.
  • It said it did not perceive the world or know, in any way, what was real.
  • It showed no interest in its own survival.

In short, it presented itself to us as an interesting tool, a colossal encyclopedia, or an ideal instrument for bailing out the class slacker. But it always behaved, how could it be otherwise, as far more artificial than intelligent.

From the looks of it, the day when the dreaded machines, Terminator-style, supplant humanity is still a long way off. Do not panic.

Francisco Pérez Fernández, Professor of Criminal Psychology, Psychology of Crime, Anthropology and Criminal Sociology and researcher, Camilo Jose Cela University and Heriberto Janosch, PhD Professor and Researcher, Camilo Jose Cela University

This article was originally published on The Conversation. Read the original.
