AI and the shift from opinion to effect


Here are some things I’ve been clipping and saving concerning AI. The pattern I see emerging in my clippings is that I am less interested in opinions on AI and more interested in the effects of AI on the world. There are still some good think pieces on AI, and I’ve included a few here, but the use of AI is accelerating in the world and we need to better understand the outcomes of that.

AI Think Pieces: For people who want to be really afraid of AI, I recommend this Guardian piece on unknown killer robots and AI and… well, you read and decide. On the flip side of that, here’s a good piece critical of AI alarmism.

Bill Gates chimes in on how the risks of AI are real but manageable. My friend Clive Thompson discusses a different sort of risk: the possibility of AI model collapse.

Vox delves into the mystery of how AI actually works. To me, that opacity is one of the potentially big problems AI will face in the future.

Practical AI: here’s a piece on how the Globe and Mail is using AI in the newsroom. Also practical: how AI is working in the world of watches. I loved this story of how AI is being used to translate cuneiform. AI is also being used to deal with forest fires.

AI effects: This piece is on how AI’s large language models are having a big effect on the Web as we know it. To mitigate such effects, the Grammys have outlined new rules for AI use.

When it comes to writing, I think the “Five Books” series is great. They ask an expert in a field to recommend five books in that field that people should read. So I guess it makes sense that for books on artificial intelligence, they asked… ChatGPT. It’s well worth a read.

Not all articles written by/with AI turn out great. Ask the folks at Gizmodo.

Speaking of AI and books, these authors have filed a lawsuit against OpenAI for unlawfully ingesting their books. Could be interesting. To add to that, the New York Times reports that “fed up with A.I. companies consuming online content without consent, fan fiction writers, actors, social media companies and news organizations are among those rebelling.”

On the topic of pushback, China is setting out new rules concerning generative AI, with an emphasis on “healthy” content and adherence to socialist values.

Asia is not a monolith, of course. Other parts of Asia have been less than keen on the EU’s AI lobbying blitz. Indeed, India’s Infosys just signed a five-year AI deal with a $2 billion target spend, and I expect lots of other Indian companies will be doing something similar regarding AI. Those companies have lots of smart and capable IT people, and when companies like Meta open their AI models for commercial use and throw the nascent market into flux, that is going to create even more opportunities.

Finally, I suspect there is a lot of this going around: My Boss Won’t Stop Using ChatGPT.


Will AI tools based on large language models (LLMs) become as smart or smarter than us?

With the success and growth of tools like ChatGPT, some are speculating that current AI could lead us to a point where AI is as smart as, if not smarter than, us. Sounds ominous.

When considering such ominous thoughts, it’s important to step back and remember that large language models (LLMs) are tools based in whole or in part on machine learning technology. Despite their sophistication, they still suffer from the same limitations as other machine learning technologies, namely:

    • bias
    • explainability
    • overfitting
    • learning the wrong lessons
    • brittleness
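As a toy illustration of one item on that list, overfitting, here’s a minimal pure-Python sketch (my own example, not drawn from any of the linked pieces): a “model” that simply memorizes its training data scores perfectly on that data but degrades on inputs it hasn’t seen.

```python
import random

random.seed(0)

# Toy data: y is roughly 2x plus noise. The "true" rule is linear.
train = [(x, 2 * x + random.uniform(-1, 1)) for x in range(20)]
test = [(x + 0.5, 2 * (x + 0.5)) for x in range(20)]

# An overfit "model": memorize every training point exactly,
# then predict the y of the nearest memorized x for anything unseen.
memory = dict(train)

def predict(x):
    nearest = min(memory, key=lambda m: abs(m - x))
    return memory[nearest]

train_error = sum(abs(predict(x) - y) for x, y in train) / len(train)
test_error = sum(abs(predict(x) - y) for x, y in test) / len(test)

print(train_error)  # 0.0 -- perfect on the data it memorized
print(test_error)   # noticeably worse on unseen points
```

The memorizer has learned the noise, not the underlying rule, which is the essence of overfitting; LLMs, being vastly larger machine learning systems, face the same hazard at scale.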

Specific tools like ChatGPT have even more problems than those, as Gary Marcus outlines here:

    • the need for retraining to get up to date
    • lack of truthfulness
    • lack of reliability
    • it may be getting worse due to data contamination (Garbage in, garbage out)

It’s hard to know if current AI technology will overcome these limitations. It’s especially hard to know when orgs like OpenAI do this.

My belief is that these tools will hit a peak soon and then level off or start to decline. They won’t get as smart as or smarter than us, not in their current form. But that’s based on a general set of experiences I’ve acquired from being in IT for so long; I can’t say for certain.

Remain calm. That’s the best advice I have so far. Don’t let the chattering class make you fearful. In the meantime, check out the links provided here. Education is the antidote to fear.

Are AI and ChatGPT the same thing?

Reading about all the amazing things done by current AI might lead you to think that AI = ChatGPT (or DALL-E, or whatever people like OpenAI are working on). It’s true that these tools are considered AI, but there is more to AI than that.

As this piece, How ChatGPT Works: The Model Behind The Bot, explains:

ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Models (LLMs).

Like ChatGPT, many of the current and successful AI tools are examples of machine learning. And while machine learning is powerful, it is just part of AI, as this diagram nicely shows:

To get an idea of just how varied and complex the field of artificial intelligence is, just take a glance at this outline of AI. As you can see, AI incorporates a wide range of topics and includes many different forms of technology. Machine learning is just part of it. So ChatGPT is AI, but AI is more than ChatGPT.

Something to keep in mind when fans and hypesters of the latest AI technology make it seem like there’s nothing more to the field of AI than that.

The rise and fall of Alexa and the possibility of a new A.I. winter

I recall reading this piece (What Would Alexa Do?) by Tim O’Reilly in 2016 and thinking, “Wow, Alexa is really something!” Six years later we know what Alexa would do: Alexa would kick the bucket (according to this: Hey Alexa, Are You There?). I confess I was as surprised by its upcoming demise as I was by its ascendance.

Since reading about the fall of Alexa, I’ve looked at the new AI in a different and harsher light. So while people like Kevin Roose can write about the brilliance and weirdness of ChatGPT in The New York Times, I cannot stop wondering about the fact that as ChatGPT hits one million users, its costs are eye-watering. (Someone mentioned a figure of $3 million a day in cloud costs.) If that keeps up, ChatGPT may join Alexa.

So cost is one big problem the current AI has. Another is the ripping off of other people’s data. Yes, the new image generators by companies like OpenAI are cool, but they’re cool because they take art from human creators and use it as input. I guess it’s nice that some of these companies are now letting artists opt out, but it may already be too late for that.

Cost and theft are not the only problems. A third problem is garbage output. For example, this is an image generated by DALL-E, according to The Verge:

It’s garbage. DALL-E knows how to use visual elements of Vermeer without understanding anything about why Vermeer is great. As for ChatGPT, it easily turns into a bullshit generator, according to this good piece by Clive Thompson.

To summarize: bad input (stolen data), bad processing (expensive), bad output (bullshit and garbage). It all adds up, and not in a good way for the latest wunderkinds of AI.

But perhaps I am being too harsh. Perhaps these problems will be resolved. This piece leans in that direction. Perhaps Silicon Valley can make it work.

Or maybe we will have another AI winter. If you mix a recession in with the other three problems I mentioned, plus the overall decline in the reputation of Silicon Valley, a second wintry period is a possibility. Speaking just for myself, I would not mind.

The last AI winter swept away so much unnecessary tech (remember LISP machines?) and freed up lots of smart people to go on to work on other technologies, such as networking. The result was tremendous increases in the use of networks, leading to the common acceptance and use of the Internet and the Web. We’d be lucky to have such a repeat.

Hey Alexa, what will be the outcome?