In this interview, we speak to an OpenAI-trained language model about advances in AI and chatbots. Despite being an AI, it demonstrates an amazing ability to maintain coherent dialogue and answer questions on a wide variety of topics. The interview allows us to explore AI’s ability to understand and answer questions in natural language, as well as its limitations and the challenges it faces in the near future. As AI continues to advance, it is important to keep exploring its potential and its limits in order to better understand how it can be used across many fields and contexts.
The text above is the introduction that Assistant itself suggested to this outlet for this interview. Assistant, or ChatGPT, is a new artificial intelligence system, still in testing, that OpenAI presented this Thursday and that immediately went viral on social media for its reasoned responses on a wide variety of topics. However, after putting questions to it for several hours, this journalist does not agree with the “amazing ability to maintain coherent dialogue” that Assistant claims for itself.
Assistant’s limits show up quickly when you try to hold a conversation with it. Within a few minutes it begins to feel like a toy, much as happens with Alexa or Siri, even though Assistant is undoubtedly more powerful. It is apparent that some of these limits have been set deliberately by its programmers to keep the machine from wading into controversy and getting into trouble. “You must take into account that my capacity is limited to the knowledge that has been provided to me during my training, and I do not have access to updated information that has not been provided to me,” the system warns during the interaction.
It is a message the system repeats in every answer about current affairs, politics, or specific people: “I do not have access to the Internet and I cannot search for additional information to answer questions,” it argues. When pressed along this line, the machine starts to restate the same argument in other words, and the interaction turns repetitive and robotic. It also refuses to make any predictions about the future or to speak “of violent and disturbing events such as terrorist attacks.”
To keep the bot from saying too much, OpenAI has also made it sparing with words when it comes to feelings, its capacity for empathy, or philosophical questions about its own self. Questions of this kind recently caused a minor scandal around one of Google’s natural language models, when one of the engineers testing it claimed that it had become a sentient being. The multinational denied it and said the engineer had “anthropomorphized” the artificial intelligence.
Assistant also falters when asked short, direct questions. As it explains: “My ability to answer questions and maintain a coherent dialogue depends to a great extent on the quality and quantity of information that is provided to me. If I am provided with accurate and complete information on a subject, such as politics, I can use that information to provide consistent and accurate answers. However, if I am not provided with enough information on a topic, I may not be able to answer questions accurately and consistently.”
If, instead of trying to maintain the “dialogue” Assistant claims to be capable of, you treat it as an encyclopedia, its capabilities look entirely different. No academic subject is out of its reach. It is one of the reasons the system has gone viral: experts from many fields have put to it the most complex questions they could come up with, and the system has answered like the most diligent student in the class.
A large number of computer specialists have highlighted its ability to understand programming code, detect errors, fix them, and even explain the solution in simple terms. Scores of fascinated users have shared the reasoning Assistant offers on topics as diverse as the applications of anarchism or the different ways to organize an online marketing event, and have even been able to debate specific points of its responses.
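To give a sense of the exchanges those specialists describe, here is a minimal, hypothetical sketch (not drawn from the interview; the function names are invented for illustration) of the kind of buggy snippet users pasted into the chat, alongside the kind of fix the system returned:

```python
# Hypothetical example of a bug of the sort users asked Assistant to find:
# the loop stops one element early, so the average comes out wrong.

def average(numbers):
    total = 0
    for i in range(len(numbers) - 1):  # bug: skips the last element
        total += numbers[i]
    return total / len(numbers)

def average_fixed(numbers):
    # fix: sum every element before dividing
    return sum(numbers) / len(numbers)

print(average([1, 2, 3, 4]))        # 1.5 -- incorrect
print(average_fixed([1, 2, 3, 4]))  # 2.5 -- correct
```

Judging by the accounts shared online, what impressed users was less the fix itself than the plain-language explanation of why the original code failed.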
In this sense, the system could be a direct competitor to Google’s search engine. In recent years, the search engine has specialized in offering automated answers to almost any question, as well as taking users to the source of the information. OpenAI’s ChatGPT goes a step further, answering follow-up questions and explaining a specific point in more detail.
However, a key limitation of Assistant in this regard is that it cannot reveal how the databases from which it extracts information, and which condition its responses, were put together.
In fact, in its responses to this outlet, Assistant goes so far as to acknowledge that it is an “opaque artificial intelligence,” since it is unable to reveal the sources its answers draw on. “It is possible that my responses reflect biases present in the text provided to me during training. Language models, like any other human-created system, can reflect the biases and beliefs of the people who create and train them. It’s important to keep this in mind and consider the source of my answers when evaluating their accuracy and reliability,” it says during the interaction.
Before interactions begin, OpenAI warns users who want to put questions to its machine that it may respond with incorrect information. “It may occasionally produce harmful instructions or biased content,” the company adds. The company, founded in 2015 by Elon Musk and Sam Altman, is set up as a non-profit artificial intelligence research organization. Even so, it is valued at more than $20 billion thanks to other products such as the DALL-E image generator or GPT-3, a natural language model capable of producing complex texts.
“People are excited about using ChatGPT to learn. It’s often very good. But the danger is that you can’t know when it’s wrong unless you already know the answer,” says Arvind Narayanan, a professor of computer science specializing in artificial intelligence at Princeton University (USA). “I have tested some basic information security questions. In most cases, the answers seemed plausible but were in fact false,” he added.
“I’m sure there will be many specific applications for which it will be useful,” the professor added, noting, however, that users’ first impulse to marvel at its capabilities and proclaim them revolutionary may be somewhat exaggerated: “There’s a lot of the usual hyperbole that accompanies the release of any generative AI tool.”
“There is no question that these models are improving rapidly. But their ability to sound convincing is improving just as rapidly, which means that it is increasingly difficult for even experts to detect when they make mistakes,” Narayanan concluded.