With the success and growth of tools like ChatGPT, some are speculating that current AI could lead us to a point where AI is as smart as, if not smarter than, us. Sounds ominous.
When considering such ominous thoughts, it’s important to step back and remember that Large Language Models (LLMs) are tools based in whole or in part on machine learning technology. Despite their sophistication, they still suffer from the same limitations as other machine learning technologies, namely:
- learning the wrong lessons
There are more problems than those for specific tools like ChatGPT, as Gary Marcus outlines here:
- the need for retraining to get up to date
- lack of truthfulness
- lack of reliability
- it may be getting worse due to data contamination (Garbage in, garbage out)
It’s hard to know if current AI technology will overcome these limitations. It’s especially hard to know when organizations like OpenAI reveal so little about how their tools work.
My belief is these tools will hit a peak soon and then level off or start to decline. They won’t get as smart as us, let alone smarter. Not in their current form. But that’s based on a general set of experiences I’ve acquired from being in IT for so long. I can’t say for certain.
Remain calm. That’s the best advice I have so far. Don’t let the chattering class make you fearful. In the meanwhile, check out the links provided here. Education is the antidote to fear.
Reading about all the amazing things done by the current AI might lead you to think that: AI = ChatGPT (or DALL-E, or whatever people like OpenAI are working on). It’s true, these tools are currently considered AI, but there is more to AI than that.
As this piece explains, How ChatGPT Works: The Model Behind The Bot:
ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Models (LLMs).
Like ChatGPT, many of the current and successful AI tools are examples of machine learning. And while machine learning is powerful, it is just part of AI, as this diagram nicely shows:
To get an idea of just how varied and complex the field of artificial intelligence is, just take a glance at this outline of AI. As you can see, AI incorporates a wide range of topics and includes many different forms of technology. Machine learning is just part of it. So ChatGPT is AI, but AI is more than ChatGPT.
Something to keep in mind when fans and hypesters of the latest AI technology make it seem like there’s nothing more to the field of AI than that.
It might surprise people, but work in AI has been going on for some time. In fact it started as early as the mid-1950s. From the 1950s until the 1970s, “computers were solving algebra word problems, proving theorems in geometry and learning to speak English”. They were nothing like OpenAI’s ChatGPT, but they were impressive in their own way. Just like now, people were thinking the sky’s the limit.
Then three things happened: the first AI winter from 1974 until 1980, the boom years from 1980-1987, and then the next AI winter from 1987-1993. I was swept up in the second AI winter, and like the first one, it came from a combination of the technology hitting a wall followed by a drying up of funding.
During the boom times it seemed like there would be no stopping AI and it would eventually be able to do everything humans can do and more. It feels that way now with the current AI boom. People like OpenAI and others are saying the sky’s the limit and nothing is impossible. But just like in the previous boom eras, I think the current AI boom will hit a wall with the technology (we are seeing some of it already). At that point we may see a reduction in funding from companies like Microsoft and Google (just like the pullback we are seeing from them on voice assistant technology like Alexa and Siri).
So yes, the current AI technology is exciting. And yes, it seems like there is no end to what it can do. But I think we will get another AI winter sooner rather than later, and during this time work will continue in the AI space but you’ll no longer be reading news about it daily. The AI effect will also occur, and the work being done by people like OpenAI will just get incorporated into the everyday tools we use, just like autocorrect and image recognition are now just things we take for granted.
P.S. If you are interested in the history of the second AI winter, this piece is good.
Posted in AI
Tagged AI, IT, software
Since there is so much talk about AI now, I think it is good for people to be familiar with some key ideas concerning AI. One of these is the AI effect. The cool AI you are using now, be it ChatGPT or DALL-E or something else, will eventually get incorporated into some commonplace piece of IT and you won’t even think much of it. You certainly won’t be reading about it everywhere. If anything you and I will complain about it, much like we complain about autocorrect.
So what is the AI Effect? As Wikipedia explains:
“The AI effect” is that line of thinking, the tendency to redefine AI to mean: “AI is anything that has not been done yet.” This is the common public misperception, that as soon as AI successfully solves a problem, that solution method is no longer within the domain of AI. Geist credits John McCarthy with giving this phenomenon its name, the “AI effect”.
McCorduck calls it an “odd paradox” that “practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the ‘failures’, the tough nuts that couldn’t yet be cracked.”
It’s true. Many things over the years that were once thought of as AI are now considered simply software or hardware, if we even think of them at all. Whether it is winning at chess, recognizing your voice, or recognizing text in an image, these things are commonplace now, but were lofty goals for AI researchers once.
The AI effect is a key idea to keep in mind when people are hyping any new AI as the thing that will change everything. If the new AI becomes useful, we will likely stop thinking it is AI.
For more on the topic, see: AI effect – Wikipedia
Posted in AI
Tagged IT, software, AI
With the rise of AI, LLMs, ChatGPT and more, a new skill has arisen. The skill involves knowing how to construct prompts for the AI software in such a way that you get an optimal result. This has led a number of people to start saying things like this: prompt engineer is the next big job. I am here to say this is wrong. Let me explain.
I was heavily into AI in the late 20th century, just before the last AI winter. One of the hot jobs at that time was going to be knowledge engineer (KE). A big part of AI then was the development of expert systems, and the job of the KE was to take the expertise of someone and translate it into rules that the expert system could use to make decisions. Among other things, part of my role was to be a KE.
So what happened? Well, first off, AI winter happened. People stopped developing expert systems and went and took on other roles. Ironically, rules engines (essentially expert systems) did come back, but all the hype surrounding them was gone, and the role of KE was gone too. It wasn’t needed. A business analyst can just as easily determine what the rules are and then have a technical specialist store them in the rules engine.
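To make the KE idea concrete, here’s a minimal sketch of the kind of if-then rules a knowledge engineer would capture from an expert and store in a rules engine. The rules and facts here are invented for illustration; real expert systems were far larger, and often written in languages like Prolog or LISP rather than Python:

```python
# A toy forward-chaining rules engine. Each rule is a pair:
# (set of conditions, conclusion). The engine keeps applying rules
# until no new facts can be derived.

def run_rules(facts, rules):
    """Apply rules repeatedly until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules a KE might have elicited from a car mechanic:
rules = [
    ({"engine cranks", "engine won't start"}, "check fuel system"),
    ({"check fuel system", "fuel tank empty"}, "refuel car"),
]

facts = run_rules({"engine cranks", "engine won't start", "fuel tank empty"}, rules)
print("refuel car" in facts)  # True
```

The KE’s job was the hard part: interviewing the expert and getting those condition/conclusion pairs right. The engine itself, as the sketch shows, is comparatively simple.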
Assuming tools like ChatGPT were to last, I would expect the creation of prompts for them to be taken on by business analysts and technical specialists. Business as usual, in other words. No need for a “prompt engineer”.
Also, you should not assume things like ChatGPT will last. How these tools work is highly volatile; they are not well-structured things like programming languages or SQL queries. The prompts that worked on them last week may result in nothing a week later. Furthermore, there are so many problems with the new AI that I could easily see the field falling into a new AI winter in the next few years.
So, no, I don’t think Prompt Engineering is a thing that will last. If you want to update your resume to say Prompt Engineer after you’ve hacked around with one of the current AI tools out there, knock yourself out. Just don’t get too far ahead of yourself and think there is going to be a career path there.
I am a fan of smart speakers, despite the privacy concerns around them. If you are ok with that and you have one or are planning to get one, read these two links to see how you can get more out of them:
- How to control Sonos with Google Assistant
- Alexa Skills That Are Actually Fun and Useful | WIRED
I use Google Assistant on my Sonos and they make a great device even better. And while I do have Google Home devices in other parts of the house, I tend to be around the Sonos most, so having it there to do more than just play music is a nice thing indeed.
Posted in AI, IT
Tagged AI, alexa, google, IT, Sonos
Recently I tried to upgrade my Mac from Catalina to Big Sur. I have done OS upgrades in the past without any problems. I assumed it would be the same with Big Sur. I was wrong.
I am not sure if the problem was with Big Sur or the state of my Mac. I do know my MacBook Air was down to less than 20 GB free. When I tried to install Big Sur, my Mac first started complaining about that. However, after I freed up more space (just above 20 GB), it proceeded with the install.
While it proceeded, it did not complete. No matter what I did, I could not get it to boot all the way up. Recovery mode did not resolve the problem. Internet recovery mode would allow me to install Mac OS Mojave, but not Catalina or Big Sur.
Initially I tried installing Mojave, but after the install was complete, I got a circle with a line through it (not a good sign). I tried resetting the NVRAM/PRAM and that helped me get further, but even after I logged in, I could not get macOS to fully boot up (it just went back to the login screen).
Eventually I did the following:
- Bought a 256 GB flash drive. Mine was from Kingston. I bought a size that matched my drive. I could have gotten away with a smaller one, but I was tired and didn’t want to risk not having enough space to use it as a backup.
- Put the flash drive into the Mac (I had a dongle to connect regular USB to USB-C)
- Booted up the mac by going into Internet recovery mode
- Went into disk utilities and made sure my Macintosh HD, Macintosh HD – Data and KINGSTON drive were mounted. (I used the MOUNT button to mount them if they weren’t mounted).
- Ran FIRST AID on all disks.
- Left Disk Utility. Clicked on Utilities > Terminal
- Copied my most important files from Macintosh HD – DATA to KINGSTON (both of them could be found in the directory /Volumes. For example, /Volumes/KINGSTON.) The files I wanted to backup were in /Volumes/Macintosh*DATA/Users/bernie/Documents (I think).
- Once I copied the files onto the USB Drive — it took hours — I checked to make sure they were there. I then got rid of a lot more files from the Documents area on my hard drive. After careful deleting, I had about 50 GB free. At one point I was talking to AppleCare and the support person said: yeah, you need a lot more than 20 GB of free space. So I made a lot.
- Then I went back into Disk Utility and erased Macintosh HD
- This is important: I DID NOT ERASE Macintosh HD – DATA! Note: before you erase any drive using Disk Utility, pursue other options, like contacting AppleCare. I did not erase Macintosh HD – DATA in order to save time later on recovering files. I was only going to erase it as a very last resort. It turns out I was ok with not erasing it. The problems were all on the Macintosh HD volume, the volume I DID erase.
- Once I did that, I shut down and then came up in Internet Recovery Mode again. THIS TIME, I had the option of installing Big Sur (not Mojave). I installed Big Sur. It created a new userid for me: it didn’t recognize my old one.
- I was able to login this time and get the typical desktop. So that was all good.
- Now here is the interesting part: my computer now had two Macintosh HD – Data drives: an old one and a new one. What I did was shut down, go into Internet Recovery Mode again, and mount both drives. I also mounted the KINGSTON USB drive. Then I moved files from the old Macintosh HD – Data to the new one. (You can use the mv command in Terminal. I did, plus I also used cp -R for recursive copying.)
- My Mac is now recovered. Kinda. I mean, there was all sorts of browser stuff to recover. I had to reinstall all my favorite apps. Etc. But it is a working MacBook.
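For what it’s worth, the backup step above boils down to a few Terminal commands in recovery mode. This is only a rough sketch of what I ran; the volume names and user directory are from my setup and will differ on yours (check with `ls /Volumes`):

```shell
#!/bin/sh
# Sketch of the backup step from the Recovery Mode Terminal.
# Adjust SRC and DST for your own volumes: run "ls /Volumes" to see them.
SRC="/Volumes/Macintosh HD - Data/Users/bernie/Documents"
DST="/Volumes/KINGSTON"

# Quote the paths: macOS volume names contain spaces.
# -R copies recursively; -p preserves permissions and timestamps.
if [ -d "$SRC" ] && [ -d "$DST" ]; then
  cp -Rp "$SRC" "$DST/"
  ls "$DST/Documents"   # verify the copy landed before deleting anything
else
  echo "Adjust SRC/DST for your volumes (see: ls /Volumes)"
fi
```

The verify step matters: only after confirming the files were on the USB drive did I start deleting anything from the internal drive.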
All in all, I learned a ton when it comes to recovering a Mac. If you are reading this because your Mac is in a similar situation, I wish you success.
While I was trying to do the repair, these links were helpful:
(Photo by Charles Deluvio on Unsplash)
Posted in AI, IT
Tagged advice, BigSur, Catalina, diagnostics, IT, Mac, Macbook, MacOS, Mojave, problems, repair
- How to control Sonos with Google Assistant – good if you like / use Google assistant
- Sonos speakers now work with IFTTT so you can automate your music – good if you are a fan of IFTTT, like I am
The Sonos One is a smart little speaker. Using Google Assistant and IFTTT.com make it even smarter.
Chatbots are relatively straightforward to deploy these days. AI providers like IBM and others provide all the technology you need. But do you really need them? And if you already have a bunch of them deployed, are you doing it right? If these questions have you wondering, I recommend you read this: Does Your Company Really Need a Chatbot?
You still may want to proceed with chatbots: they make a lot of business sense for certain types of work. But you will have a better idea when not to use them, too.
Here are some good links I have been collecting over time on IT that are still worth reading. They cover AI, the IOT, containers, and more. Enjoy!
- How to build a supercomputer with Raspberry Pis: Fun!
- 6 things I’ve learned in my first 6 months using serverless: Good stuff for serverless fans
- Building a serverless website in AWS: More good serverless stuff
- The Strange Birth and Long Life of Unix: A really good history of Unix. Well written.
- Spring Boot Memory Performance: If you use springboot, this is worth your while
- The end of windows: Anything that stratechery puts out is good, including this
- Dockerize a Spring Boot application: Speaking of springboot, this is useful
- Building a Deep Neural Network to play FIFA 18: A fascinating example of using AI to play games
- ThinkPad 25th Anniversary Edition : A great commemoration of a fine computer
- GitHub Is Microsoft’s $7.5 Billion Undo Button: A good piece on the story behind this investment by Microsoft
- Circuito.io: Want to build circuits, but don’t know how. This killer site is for you.
- Effie robot claims to do all your ironing: If you like robots and hate ironing, this could be for you.
- How To Install and Use TensorFlow on Ubuntu 16.04: For AI fans
- Set up a firewall on Ubuntu: Another good tutorial from Digital Ocean
- Not even IBM is sure where its quantum computer experiments will lead: For IBM Quantum fans
- In an Era of ‘Smart’ Things, Sometimes Dumb Stuff Is Better: Why analog is sometimes better.
- A simple neural network with Python and Keras: A good way to dabble with NNs
- The Talk: A comic which wonderfully explains quantum computing
- Use case diagrams: For those who like UML
- Eating disorder and social media: Wired has a good piece on how people avoid controls
Posted in AI, IT
Nope. And this piece, Machine Learning Vs. Artificial Intelligence: How Are They Different?, does a nice job of reviewing them at a non-technical level. At the end, you should see the differences.
(The image, via g2crowd.com, also shows this nicely).
Possibly, but as this article argues, there are at least three areas that robots still suck at:
Creative endeavours: These include creative writing, entrepreneurship, and scientific discovery. These can be highly paid and rewarding jobs. There is no better time to be an entrepreneur with an insight than today, because you can use technology to leverage your invention.
Social interactions: Robots do not have the kinds of emotional intelligence that humans have. Motivated people who are sensitive to the needs of others make great managers, leaders, salespeople, negotiators, caretakers, nurses, and teachers. Consider, for example, the idea of a robot giving a half-time pep talk to a high school football team. That would not be inspiring. Recent research makes clear that social skills are increasingly in demand.
Physical dexterity and mobility: If you have ever seen a robot try to pick up a pencil you see how clumsy and slow they are, compared to a human child. Humans have millennia of experience hiking mountains, swimming lakes, and dancing—practice that gives them extraordinary agility and physical dexterity.
Read the entire article; there’s much more in it than that. But if your job has some element of those three qualities, chances are robots won’t be replacing you soon.
This piece: What it’s like to be a modern engraver, the most automated job in the United States — Quartz, reminded me once again that the best use of technology is to augment the people doing the work, and not simply take away the work. Must reading for anyone who believes that the best way to use AI and other advanced tech is to eliminate jobs. My belief is that the best way to use AI and other advanced tech is to make jobs better for the employee, the employer, and the customer. The businesses that will succeed will have that belief as well.
(Image from this piece on how humans and robots can work together.)
If you are looking to build AI tech, or just learn about it, then you will find these interesting:
- Artificial intelligence pioneer says we need to start over – Axios – if Hinton says it, it is worth taking note
- Robots Will Take Fast-Food Jobs, But Not Because of Minimum Wage Hikes | Inverse – true. Economists need to stop making such a strong link here.
- Artificial Intelligence 101: How to Get Started | HackerEarth Blog – a good 101 piece
- Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level – MIT Technology Review – the ability of tech to learn is accelerating.
- Now AI Machines Are Learning to Understand Stories – MIT Technology Review – and not just accelerating, but getting deeper.
- Robots are coming for your job. That might not be bad news – good alternative insight from Laurie Penny.
- Pocket: Physicists Unleash AI to Devise Unthinkable Experiments – not surprisingly, a smart use of AI
- AI’s dueling definitions – O’Reilly Media – this highlights one of the problems with AI, and that is that it’s a suitcase word (or term): people fill it with whatever they want to fill it with
- A Neural Network Playground – a very nice tool to start working with AI
- Foxconn replaces ‘60,000 factory workers with robots’ – BBC News – there is no doubt in places like Foxconn, robots are taking jobs.
- 7 Steps to Mastering Machine Learning With Python – don’t be put off by this site’s design: there is good stuff here
- How Amazon Triggered a Robot Arms Race – Bloomberg – Amazon made a smart move with that acquisition and it is paying off
- When Police Use Robots to Kill People – Bloomberg – this is a real moral quandary and I am certain the police shouldn’t be the only ones deciding on it. See also: A conversation on the ethics of Dallas police’s bomb robot – The Verge
- How to build and run your first deep learning network – O’Reilly Media – more good stuff on ML/DL/AI
- This expert thinks robots aren’t going to destroy many jobs. And that’s a problem. | The new new economy – another alternative take on robots and jobs
- Neural Evolution – Building a natural selection process with AI – more tutorials
- Uber Parking Lot Patrolled By Security Robot | Popular Science – not too long after this, one of these robots drowned in a pool in a mall. Technology: it’s not easy 🙂
- A Robot That Harms: When Machines Make Life Or Death Decisions : All Tech Considered : NPR – this is kinda dumb, but worth a quick read.
- Mathematics of Machine Learning | Mathematics | MIT OpenCourseWare – if you have the math skills, this looks promising
- Small Prolog | Managing organized complexity – I will always remain an AI/Prolog fan, so I am including this link.
- TensorKart: self-driving MarioKart with TensorFlow – a very cool application
- AI Software Learns to Make AI Software – MIT Technology Review – there is less here than it appears, but still worth reviewing
- How to Beat the Robots – The New York Times – meh. I think people need to learn to work with the technology, not try to defeat it. If you disagree, read this.
- People want to know: Why are there no good bots? – bot makers, take note.
- Noahpinion: Robuts takin’ jerbs
- globalinequality: Robotics or fascination with anthropomorphism – everyone is writing about robots and jobs, it seems.
- Valohai – more ML tools
- Seth’s Blog: 23 things artificially intelligent computers can do better/faster/cheaper than you can – like I said, everyone is writing about AI. Even Seth Godin.
- The Six Main Stories, As Identified by a Computer – The Atlantic – again, not a big deal, but interesting.
- A poet does TensorFlow – O’Reilly Media – artists will always experiment with new mediums
- How to train your own Object Detector with TensorFlow’s Object Detector API – more good tooling.
- Rise of the machines – the best – by far! – non-technical piece I have read about AI and robots.
- We Trained A Computer To Search For Hidden Spy Planes. This Is What It Found. – I was super impressed what Buzzfeed did here.
- The Best Machine Learning Resources – Machine Learning for Humans – Medium – tons of good resources here.
Google, Facebook, and Twitter are platforms. So are some retail sites. What does that mean? It means that they provide the means for people to use their technology to create things for themselves. Most of the time, this is a good thing. People can communicate in ways they never could before such platforms. Likewise, people can sell to customers they never could reach before.
Now these platforms are in a bind, as you can see in this piece and in other places: Google, Facebook, and Twitter Sell Hate Speech Targeted Ads. They are in a bind partly due to their own approach, by boasting of their ability to use AI to stop such things. They should have been much more humble. AI as it currently stands will only take you so far. Instead of relying on things like AI, they need to have better governance mechanisms in place. Governance is a cost for organizations, and oftentimes organizations don’t put proper governance in place until flaws like this start to occur.
That said, this particular piece has several weaknesses. First up, this comment: “that the companies are incapable of building their systems to reflect moral values”. It would be remarkable for global companies to build systems to reflect moral values when even within individual nations there are conflicts regarding such values. Likewise the statement: “It seems highly unlikely that these platforms knowingly allow offensive language to slip through the cracks”. Again, define offensive language at a global level. To make it harder still, try doing it with different languages and different cultures. The same thing occurs on retail sites when people put offensive images on T-shirts. For some retail systems, no one from the company that owns the platform takes the time to review every product that comes in.
And that gets to the problem. All these platforms could be mainly content agnostic, the way the telephone system is. However, people are expecting them to insert themselves and not be content agnostic. Once that happens, they are going to be in an exceptional bind. We don’t live in a homogeneous world where everyone shares the same values. Even if they converted to non-profits and spent a lot more revenue on reviewing content, there would still be limits to what they could do.
To make things better, these platforms need to be humble and realistic about what they can do and communicate that consistently and clearly with the people that use these systems. Otherwise, they are going to find that they are going to be governed in ways they are not going to like. Additionally, they need to decide what their own values are and communicate and defend them. They may lose users and customers, but the alternative of trying to be different things in different places will only make their own internal governance impossible.
According to this, chatbots in China have been removed after being critical of the Chinese government. This to me is not unlike what happened to Microsoft’s chatbot that became racist after being fed racist input from users. If you put AI out there and allow any form of input, then the equivalent of vandals can overtake your AI and feed it whatever they choose. I’m not certain if that was the case in China, but I suspect it was.
AI researchers need to expect worst-case uses if they allow their software to do unsupervised learning on the Internet. If they don’t, it’s likely that their projects will be disasters and they will do damage to the AI community in general.
Posted in AI
Tagged AI, chatbots, China
In France, politician Jean-Luc Mélenchon plans to be in seven places at once using something similar to a hologram. According to Le Parisien:
Strictly speaking, these are not holograms. Jean-Luc Mélenchon will be present in seven different places thanks to … an optical illusion discovered for the first time half a century ago by an Italian physicist
Virtual Mélenchon reminds me of the politician Yance in Philip K Dick’s novel, The Penultimate Truth. We may not be far off from getting virtual candidates that look like people but behind the scenes are AI, or some combination of AI and people.
For more on the technology, see the article in Le Parisien. For more on Dick’s novel, see Wikipedia. Read up now: I think we can expect to see more of this technology in use soon.
Posted in AI, ideas, IT, politics
Tagged AI, France, French, IT, philipkdick, politics, sci-fi, sciencefiction, SF
This piece, Most engineers are white — and so are the faces they use to train software – Recode, implies that AI software doesn’t do a good job recognizing non-white faces because most engineers (i.e. software developers) are white. I’d argue that the AI does a poor job because of this: the developers aren’t very good.
Good software developers, in particular the lead developers, take an active role in ensuring they have good test data. The success of their software when it goes live depends on it. Anyone using training data (i.e. test data) in AI projects that does not include a broad set of faces is doing a poor job. Period. Regardless of whether or not they are white.
If the AI is supposed to do something (i.e. recognize all faces) and it does not, then the AI sucks. Don’t blame it on anything but technical abilities.
Because if you don’t have augmented intelligence, and you depend solely on AI-like software, you get problems like this, whereby automated software triggers an event that a trained human might have caught.
AI and ML (machine learning) can be highly probabilistic and are limited to the information they are trained on. Having a human involved makes up for those limits. Likewise, AI can process much more information, and much faster, than a human can, making up for the human’s limits.
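One simple way to keep a human involved is to let the software act on its own only when it is confident, and route everything else to a person. A hypothetical sketch (the predictions, confidence scores, and threshold below are all made up for illustration):

```python
# Human-in-the-loop triage: act automatically on high-confidence
# predictions, and defer low-confidence ones to a human reviewer.

def triage(prediction, confidence, threshold=0.9):
    """Return who should handle this prediction: the system or a human."""
    if confidence >= threshold:
        return ("auto", prediction)       # confident: act on it
    return ("human_review", prediction)   # uncertain: a person decides

print(triage("flag transaction", 0.97))  # ('auto', 'flag transaction')
print(triage("flag transaction", 0.55))  # ('human_review', 'flag transaction')
```

The threshold is a design choice: set it too low and the software triggers events a trained human would have caught; set it too high and you lose the speed advantage of automation.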
See the link to the New York Times story to see what I mean.