With the success and growth of tools like ChatGPT, some speculate that current AI could lead us to a point where AI is as smart as us, if not smarter. Sounds ominous.
When considering such ominous thoughts, it’s important to step back and remember that Large Language Models (LLMs) are tools based, in whole or in part, on machine learning technology. Despite their sophistication, they still suffer from the same limitations as other machine learning technologies, namely:
- bias
- explainability
- overfitting
- learning the wrong lessons
- brittleness
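Overfitting in particular is easy to demonstrate. Here is a minimal sketch using only NumPy: a high-degree polynomial fit to a handful of noisy points matches the training data almost perfectly but generalizes worse than a simple line. The data, degrees, and function names are hypothetical, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a noisy linear trend, with only 8 training points.
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(0, 0.1, size=8)
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test + rng.normal(0, 0.1, size=50)

def fit_and_score(degree):
    # Fit a polynomial of the given degree to the training data,
    # then return (train MSE, test MSE).
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

for degree in (1, 7):
    train_err, test_err = fit_and_score(degree)
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```

The degree-7 polynomial passes through all 8 training points (train error near zero) yet does worse on the held-out points: it has memorized the noise rather than learned the trend. LLMs operate at a vastly larger scale, but the same failure mode applies.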
Specific tools like ChatGPT have further problems beyond those, as Gary Marcus outlines here:
- the need for retraining to get up to date
- lack of truthfulness
- lack of reliability
- possible decline over time due to data contamination (garbage in, garbage out)
It’s hard to know if current AI technology will overcome these limitations. It’s especially hard to predict when organizations like OpenAI will do so.
My belief is that these tools will soon hit a peak and then level off or start to decline. They won’t get as smart as us, or smarter. Not in their current form. But that belief is based on the general experience I’ve acquired over many years in IT; I can’t say for certain.
Remain calm. That’s the best advice I have so far. Don’t let the chattering class make you fearful. In the meantime, check out the links provided here. Education is the antidote to fear.