ChatGPT is still incredibly unintelligent, yet at the same time superbly capable. Let me explain with a very simple example.
I asked ChatGPT if it could translate from Chinese to English. It replied that it certainly could. So I submitted a long text in Chinese. It pumped out an amazingly accurate translation, but then it stopped in the middle, with no further explanation.
Zem: “Why did you give up in the middle? Was it because it takes too much computation?”
ChatGPT: As an AI language model, I don’t have feelings or emotions like humans do. I’m designed to respond to user input and generate output based on my programming and training. In this case, I didn’t “give up” in the middle of our conversation; rather, I provided an answer to your previous question and was waiting for your next input. As for computation, generating responses can be a resource-intensive task, especially for more complex requests, but modern hardware and software optimizations have made it possible to process information quickly and efficiently.
Zem: What you said above is a generic answer which may seem logical but not true in this specific case. You were asked to do a translation, but you stopped in the middle of that. The part that you provided is of excellent quality. But you stopped in the middle of the text. There must be a reason. It appears to be an inherent limitation of the length of free translation you could provide. But if that’s the case, you could have just said so.
ChatGPT: I apologize for any confusion my previous response may have caused. You are correct that my previous response was not specific to the question at hand, and I understand that it may not have been helpful. As for why I stopped in the middle of the translation, it is possible that my response was limited by the length of free translation I am able to provide. While I am capable of generating high-quality translations, there are limitations to the length and complexity of the text that I can accurately translate. Additionally, there may have been technical issues or other factors that contributed to the incomplete translation. If you would like, I can attempt to provide a complete translation of the text you provided, or I can provide more information on the limitations of my translation capabilities.
Zem: Yes it would be great if you can pick up from where you stopped previously and provide an English translation to the rest of the Chinese text I provided to you earlier.
ChatGPT: Certainly! Please provide me with the rest of the Chinese text, and I’ll do my best to provide an English translation for you.
Zem: Well, clearly you lost context here. I provided you the whole Chinese text previously, but you only translated the first part of it. I wonder if you can go back and find where you stopped and do the rest. But if you can’t, that’s okay.
ChatGPT: I apologize for the confusion earlier. Could you please provide me with the entire Chinese text again so I can provide you with a complete English translation?
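(A technical aside before the analysis: the abrupt stop is consistent with an output-length cap rather than any decision on the model’s part. The ChatGPT web interface gives no signal when this happens, but the underlying API does. Below is a minimal sketch of how a truncated reply can be detected, assuming the OpenAI Python SDK and a gpt-3.5-turbo model; the prompt content is a placeholder, not the actual text I submitted.)

```python
# Minimal sketch: detecting a truncated completion through the OpenAI API.
# When a reply is cut off by the output-token cap, finish_reason is "length".
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    max_tokens=512,  # a deliberately small cap, to provoke truncation
    messages=[{
        "role": "user",
        "content": "Translate the following Chinese text into English: <long text here>",
    }],
)

choice = response.choices[0]
print(choice.message.content)
if choice.finish_reason == "length":
    # The model ran out of output tokens mid-answer: the same
    # "stopped in the middle with no explanation" behavior seen above.
    print("[translation truncated; ask the model to continue]")
```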
Unintelligent irrelevance
First of all, don’t be fooled by ChatGPT’s seemingly sophisticated answers to my questions. They only seemed relevant; in fact they were mumbo-jumbo, synthesized information that did a fairly reasonable job of aligning with the text of my questions, but not with the actual meaning of my questions.

This clearly shows that ChatGPT has no actual understanding of any of my questions.
One indicator: ChatGPT did not know where it had stopped translating in the initial task. That certainly cannot be because of its memory. If there is one thing in which ChatGPT is vastly superior to the human brain, it is memory, the storage of information. The whole exchange had happened just seconds earlier, with the full text of the original Chinese content still displayed in the conversation. Yet ChatGPT didn’t know, and repeatedly asked me to give it the whole text again.
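(Another technical aside, on how that “memory” works mechanically. The underlying chat models are stateless: each request must carry the prior turns of the conversation, and a client typically drops the oldest turns once a context budget is exceeded, at which point they are simply gone. Here is a minimal sketch of that client-side pattern, assuming the OpenAI Python SDK; the truncation policy shown is illustrative, not ChatGPT’s actual one. Note that under such a scheme, what the user still sees on screen and what the model is actually sent can diverge.)

```python
# Minimal sketch of client-side chat "memory" (illustrative only, not
# ChatGPT's actual implementation). The model is stateless: every request
# resends the conversation so far, and older turns are dropped to stay
# within a budget.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a careful translator."}]

def ask(user_text: str, max_turns: int = 20) -> str:
    history.append({"role": "user", "content": user_text})
    # Crude truncation policy: keep the system prompt plus only the most
    # recent turns. Anything older silently vanishes from "memory".
    del history[1:-max_turns]
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```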
What is important is not the mere fact that it didn’t know where to pick up; it would be trivial for ChatGPT to do so if it understood the process.
The only plausible explanation is that ChatGPT simply did not understand the contextual process. It didn’t understand me as a person making a request while contextualizing the request with other comments.
The request might seem simple, but mixed in with my comments it actually requires deeper contextual understanding in order not to be disoriented by the contextualizing remarks that accompany the request.
The key is the contextual comments I intentionally included. Notice that my comments are a natural way to converse with a human being, and a person would likewise naturally understand the context of both the request and the comments. It is a kind of human context that is far deeper and more nuanced than merely seeing the words and phrases in my question, detecting that the question expresses ‘dissatisfaction’ with the previous answer, and detecting that the dissatisfaction has to do with the incompleteness of that answer (all because of the particular words I used).
Clearly, this kind of human context throws ChatGPT off track, yet it is something that anyone with basic intelligence and language skill would immediately perceive.
What ChatGPT does is mapping and matching, something that any trained ML model is certainly capable of; it is not actual human understanding and perception.
At this point, knowledgeable ChatGPT users will protest: that’s not the right way to use ChatGPT! Learn how to form more effective prompts! Etc.
Well, I know how to formulate good prompts to use ChatGPT more efficiently. But that’s not the point. The test above is meant to reveal the difference between human understanding and machine mapping. When I have time, I can probably formulate more powerful, more nuanced, yet even clearer tests to show the difference. The difference is essential, not in terms of functions, but in terms of the meaning of humanity. See more in Why AI Will Never Replace True Humanity.
Superb capability
Yet ChatGPT’s translation is of superb quality. I can hardly find any flaw. I judge it to be far better than Google Translate and even better than DeepL (though I will need to do more comparisons to reach a firm conclusion).
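(For anyone who wants to run that comparison systematically, a rough harness is easy to script. A minimal sketch follows, assuming the openai, deepl, and google-cloud-translate Python packages and valid credentials for each service; judging quality remains a human task, so this only collects the three outputs side by side.)

```python
# Hypothetical harness: translate one Chinese text with all three services
# so the outputs can be compared by a human judge.
import deepl
from google.cloud import translate_v2 as gtranslate
from openai import OpenAI

def translate_all(chinese_text: str) -> dict[str, str]:
    chatgpt = OpenAI().chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Translate this Chinese text into English:\n\n{chinese_text}",
        }],
    ).choices[0].message.content

    deepl_out = deepl.Translator("YOUR_DEEPL_KEY").translate_text(
        chinese_text, source_lang="ZH", target_lang="EN-US"
    ).text

    google_out = gtranslate.Client().translate(
        chinese_text, source_language="zh", target_language="en"
    )["translatedText"]

    return {"ChatGPT": chatgpt, "DeepL": deepl_out, "Google": google_out}
```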
The ChatGPT translation has the appearance of professional quality, I might say.
Do professional translators need to worry? It’s hard to say at this point. If the work is just textual translation, absolutely.
But high-quality translation is always contextual. Despite all the hoopla about ChatGPT’s contextual capability, it is clear that ChatGPT has no human-like contextual understanding. It is mapping, not understanding; synthetic, not generative.
Intelligent automation, not artificial intelligence
Therefore, this is clearly just Intelligent Automation (IA), not actual intelligence.
But my main point is not even about ChatGPT’s apparent lack of contextual understanding, which I believe will improve in the future.
The point I’ve been trying to make is that even if ChatGPT becomes far more accurate in the future, that will not change the fact that it is not, and cannot be, general human-like intelligence.
AGI is a lie, designed to mislead and deceive.
See for example:
AI is not generative, but synthetic
Why AI Will Never Replace True Humanity