Tag Archives: ChatGPT

A plethora of good links on AI

There’s still an overwhelming amount of material being written on AI. Here are a few lists of the pieces I found most interesting:

ChatGPT: ChatGPT (GPT-3 and GPT-4) still dominates much of the discussion I see around AI. For instance:

Using AI: people are trying to use AI for practical purposes, as those last few links showed. Here are some more examples:

AI and imagery: not all AI is about text. There’s quite a lot going on in the visual space too. Here’s a taste:

AI and the problems it causes: there are lots of risks with any new technology, and AI is no exception. Cases in point:

Last but not least: 


The profiles (beat-sweeteners?) of Sam Altman

Oddly (not oddly at all?), both the New York Times and the Wall Street Journal had profiles of Sam Altman at the end of March:

  1. Sam Altman, the ChatGPT King, Is Pretty Sure It’s All Going to Be OK – The New York Times
  2. The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT – WSJ

Given the contentious nature of AI and ChatGPT, you might think that those pieces would have asked tough questions of Altman concerning AI, especially since Lesley Stahl did something similar with Microsoft execs a few weeks earlier. Perhaps Stahl’s work is why Microsoft / OpenAI wanted Altman to get out there with his story. If that was the intent, it seemed to work: nothing too tough in either of these profiles.

Then again, perhaps they were written as beat-sweeteners. After all, access matters as much to tech journalists as it does to political journalists. If you want to write more about AI in the future, being able to ring up Altman and his gang and get a comment seems like something you’d want for your job. No doubt profiles like these help with that access.

For more on the topic of beat-sweeteners, I give you this: Slate’s Beat-Sweetener Reader in Columbia Journalism Review.

More reasons why ChatGPT is not going to replace coders

I have been arguing recently about the limits of the current AI and why it is not going to take over the job of coding yet. I am not alone in this regard. Clive Thompson, who knows a lot about the topic, recently wrote this: Why ChatGPT Won’t Replace Coders Just Yet. Among other reasons, the “bullshit” problem turns up in code, too. I recommend you read Clive’s take on the subject. And after you read that, check out his book, “Coders”. You can order it here, from his publisher. I think it’s a classic and one of the best things written on software.

Will AI tools based on large language models (LLMs) become as smart or smarter than us?

With the success and growth of tools like ChatGPT, some are speculating that the current AI could lead us to a point where AI is as smart as, if not smarter than, us. Sounds ominous.

When considering such ominous thoughts, it’s important to step back and remember that large language models (LLMs) are tools based in whole or in part on machine learning technology. Despite their sophistication, they still suffer from the same limitations that other machine learning technologies do, namely (one of these, overfitting, is sketched in code after the list):

    • bias
    • explainability
    • overfitting
    • learning the wrong lessons
    • brittleness
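
To make one of those limitations concrete, here is a minimal sketch of overfitting in Python. It is a toy of my own devising (synthetic data, arbitrary parameters), not taken from any of the tools or pieces above: a model flexible enough to memorize its ten training points scores nearly perfectly on them, then poorly on fresh points drawn from the same curve.

    # Toy overfitting demo: a degree-9 polynomial memorizes 10 noisy
    # training points, then fails on new points from the same curve.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X_train = np.linspace(0, 1, 10).reshape(-1, 1)
    y_train = np.sin(2 * np.pi * X_train).ravel() + rng.normal(0, 0.2, 10)
    X_test = rng.uniform(0, 1, 50).reshape(-1, 1)
    y_test = np.sin(2 * np.pi * X_test).ravel() + rng.normal(0, 0.2, 50)

    model = make_pipeline(PolynomialFeatures(degree=9), LinearRegression())
    model.fit(X_train, y_train)

    # Near-perfect on the memorized data, much worse on unseen data.
    print("train R^2:", model.score(X_train, y_train))  # close to 1.0
    print("test  R^2:", model.score(X_test, y_test))    # usually far lower

An LLM is a vastly bigger model, but the same failure mode, fitting the training data rather than the underlying reality, is still in play.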

Specific tools like ChatGPT have even more problems than those, as Gary Marcus outlines here:

  • the need for retraining to get up to date
  • lack of truthfulness
  • lack of reliability
  • it may be getting worse due to data contamination (garbage in, garbage out)

It’s hard to know if current AI technology will overcome these limitations. It’s especially hard to know when orgs like OpenAI do this.

My belief is that these tools will hit a peak soon, then level off or start to decline. They won’t get as smart as us, let alone smarter. Not in their current form. But that’s based on a general set of experiences I’ve acquired from being in IT for so long. I can’t say for certain.

Remain calm. That’s the best advice I have so far. Don’t let the chattering class make you fearful. In the meanwhile, check out the links provided here. Education is the antidote to fear.

Are AI and ChatGPT the same thing?

Reading about all the amazing things done by the current AI might lead you to think that AI = ChatGPT (or DALL-E, or whatever people like OpenAI are working on). It’s true that those tools are currently considered AI, but there is more to AI than that.

As this piece, How ChatGPT Works: The Model Behind The Bot, explains:

ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Models (LLMs).
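
You can see what that means with a small, hands-on sketch. The example below uses the Hugging Face transformers library and GPT-2, a small, freely downloadable ancestor of the models behind ChatGPT; the prompt and settings are illustrative choices of mine:

    # A minimal look at what an LLM does: predict likely next tokens.
    # Requires: pip install transformers torch
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        "Artificial intelligence is",  # any illustrative prompt will do
        max_new_tokens=20,
    )
    print(result[0]["generated_text"])

ChatGPT adds enormous scale and extra training steps on top, but next-token prediction of this sort is the machine learning core.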

Like ChatGPT, many of the current and successful AI tools are examples of machine learning. And while machine learning is powerful, it is just part of AI, as this diagram nicely shows:

[Diagram: machine learning shown as one subfield within the broader field of AI]

To get an idea of just how varied and complex the field of artificial intelligence is, take a glance at this outline of AI. As you can see, AI incorporates a wide range of topics and includes many different forms of technology. Machine learning is just part of it. So ChatGPT is AI, but AI is more than ChatGPT.

Something to keep in mind when fans and hypesters of the latest AI technology make it seem like there’s nothing more to the field of AI than that.

No, prompt engineering is not going to become a hot job. Let a former knowledge engineer explain

With the rise of AI, LLMs, ChatGPT and more, a new skill has arisen: knowing how to construct prompts for the AI software in such a way that you get an optimal result. This has led a number of people to start saying things like: prompt engineer is the next big job. I am here to say this is wrong. Let me explain.
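
First, for concreteness, here is roughly what “constructing a prompt” looks like in practice. This is a minimal sketch using the OpenAI Python library as it existed in early 2023; the prompt wording and parameters are illustrative choices of mine, not a proven recipe:

    # Sketch of "prompt engineering", circa early 2023 (openai-python < 1.0).
    # The prompt text is an invented example, not a recommended technique.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # The "engineering": framing a role and constraining the format.
            {"role": "system", "content": "You are a terse technical editor."},
            {"role": "user", "content": "List three risks of large language "
                                        "models as short bullet points."},
        ],
        temperature=0,  # lower temperature makes output more repeatable
    )
    print(response["choices"][0]["message"]["content"])

Useful to know, certainly. But notice it is careful instruction-writing, not engineering. Now for some history.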

I was heavily into AI in the late 20th century, just before the last AI winter. One of the hot jobs at that time was going to be knowledge engineer (KE). A big part of AI then was the development of expert systems, and the job of the KE was to take the expertise of someone and translate it into rules that the expert system could use to make decisions. Among other things, part of my role was to be a KE.

So what happened? Well, first off, AI winter happened. People stopped developing expert systems and went and took on other roles. Ironically, rules engines (essentially expert systems) did come back, but all the hype surrounding them was gone, and the role of KE was gone too. It wasn’t needed. A business analyst can just as easily determine what the rules are and then have a technical specialist store them in the rules engine. A sketch of that division of labour follows.
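
The rules and names below are invented for illustration; the point is that the “engineering” amounts to conditions and outcomes an analyst can state in plain language and a developer can encode:

    # Invented example: expert-system-style rules for triaging a claim.
    # The analyst supplies the conditions and outcomes; the "engine"
    # just fires the first rule that matches.

    def over_limit(claim):
        return claim["amount"] > 10_000

    def is_preferred(claim):
        return claim["customer_tier"] == "preferred"

    RULES = [
        (over_limit, "refer to adjuster"),
        (is_preferred, "auto-approve"),
    ]

    def decide(claim, default="manual review"):
        for condition, outcome in RULES:
            if condition(claim):
                return outcome
        return default

    print(decide({"amount": 25_000, "customer_tier": "preferred"}))  # refer to adjuster
    print(decide({"amount": 500, "customer_tier": "preferred"}))     # auto-approve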

Assuming tools like ChatGPT were to last, I would expect the creation of prompts for them to be taken on by business analysts and technical specialists in the same way. Business as usual, in other words. No need for a “prompt engineer”.

Also, you should not assume things like ChatGPT will last. How these tools work is highly volatile; they are not well-structured things like programming languages or SQL queries. The prompts that worked last week may produce nothing useful a week later. Furthermore, there are so many problems with the new AI that I could easily see the field falling into a new AI winter in the next few years.

So, no, I don’t think Prompt Engineering is a thing that will last. If you want to update your resume to say Prompt Engineer after you’ve hacked around with one of the current AI tools out there, knock yourself out. Just don’t get too far ahead of yourself and think there is going to be a career path there.

The rise and fall of Alexa and the possibility of a new AI winter

I recall reading this piece (What Would Alexa Do?) by Tim O’Reilly in 2016 and thinking, “wow, Alexa is really something!” Six years later we know what Alexa would do: Alexa would kick the bucket (according to this: Hey Alexa Are You There?). I confess I was as surprised by its upcoming demise as I was by its ascendance.

Since reading about the fall of Alexa, I’ve looked at the new AI in a different and harsher light. So while people like Kevin Roose can write about the brilliance and weirdness of ChatGPT in The New York Times, I cannot stop wondering about the fact that as ChatGPT hits one million users, its costs are eye-watering. (Someone mentioned a figure of $3M in cloud costs per day; at that rate the bill would top a billion dollars a year.) If that keeps up, ChatGPT may join Alexa.

So cost is one big problem the current AI has. Another is the ripping off of other people’s data. Yes, the new image generators by companies like OpenAI are cool, but they’re cool because they take art from human creators and use it as input. I guess it’s nice that some of these companies are now letting artists opt out, but it may already be too late for that.

Cost and theft are not the only problems. A third problem is garbage output. For example, this is an image generated by DALL-E, according to The Verge:

[Image: a DALL-E generation in the style of Vermeer, via The Verge]

It’s garbage. DALL-E knows how to use visual elements of Vermeer without understanding anything about why Vermeer is great. As for ChatGPT, it easily turns into a bullshit generator, according to this good piece by Clive Thompson.

To summarize: bad input (stolen data), bad processing (expensive), bad output (bullshit and garbage). It all adds up, and not in a good way for the latest wunderkinds of AI.

But perhaps I am being too harsh. Perhaps these problems will be resolved. This piece leans in that direction. Perhaps Silicon Valley can make it work.

Or maybe we will have another AI winter. If you mix a recession in with the other three problems I mentioned, plus the overall decline in the reputation of Silicon Valley, a second wintry period is a possibility. Speaking just for myself, I would not mind.

The last AI winter swept away so much unnecessary tech (remember LISP machines?) and freed up lots of smart people to go work on other technologies, such as networking. The result was a tremendous increase in the use of networks, leading to the common acceptance and use of the Internet and the Web. We’d be lucky to have such a repeat.

Hey Alexa, what will be the outcome?