Tag Archives: AI

Using AI in art making, from David Salle to Kevin Kelly (and how Walter Benjamin can help)


Using technology to make art is not new. Gerhard Richter used special software to make the work you see above (4900 Colours). Before computers, artists would use lenses and photographs, and even craftsmen and women, to help them create their final artwork.

What is new is that artists (and non-artists) are now using AI to make art. Kevin Kelly talks about how he is using AI in his creative process. David Salle has dived deep into making new work using AI. NYT columnist Farhad Manjoo is using visual tools like Procreate to make AI art.

I have seen Kelly's work, and Manjoo's and Salle's work is on display in their articles. Kelly experiments with AI to produce images in various styles. Perhaps he has changed, but there is no artist in his work that I can see. With Manjoo, you can see more of him in his drawings. And with Salle, the artist's presence comes in as that of an editor of the works the AI produces out of his original pieces.

In trying to assess these AI generated works, I think Walter Benjamin and his idea of an artwork having an aura can be useful here. Benjamin was thinking about how original art works have an aura that reproduced images of it do not have. I agree with that: no matter how good a reproduction of a work is, it rarely compares to the original work. There’s that something extra in the original.

I think we can extend the idea of a work having an aura to the idea of it also having a humanity. What does a work say about the person who created it? What do I recognize in it that is human and unique to that person? What ideas are there that could only come from that person in that time?

You can argue back that this is irrelevant, that AI-generated images are interesting and beautiful, and furthermore that I cannot distinguish them from human-generated images. That might be true. Maybe galleries will be filled with images and sculpture with no human involvement whatsoever, not unlike deep learning software that comes up with ways to be best at playing games like chess and Go. Such AI artworks may be interesting and even beautiful and may seem to have that aura Benjamin talks about. They just won't be associated with a human.

Even minimal and conceptual art has a humanity associated with it. Duchamp's Fountain embodies Duchamp's intelligence and wit and contrariness. Arp's "According to the Laws of Chance" likewise shows his interest in pushing the bounds of what is acceptable in the composition of an abstract work. A person is responsible for the work and the work is tied to them. A person is what makes the work relevant to us in a way that a wall covered with children's collages or a shelf of toilets in a hardware store is not.

We need a new aesthetic philosophy to deal with the firehose of AI art that is coming our way. I propose we tie the art back to our humanity.

P.S. For more on Richter's 4900 Colours, you can see it here on his web site. There's also a great view of 4900 Colours here.

 

AI scales up and out. Here are some pieces that show that.


While there are still prophets and pundits arguing doom and gloom regarding AI, most people and organizations have moved past them and have been adopting the technology widely. Sometimes that has been good, sometimes not. To get a sample of how it's going, here are a few dozen pieces on AI worth a look:

  1. The WSJ argues that you soon won’t be able to avoid AI at work or at home. It’s true, but so what?
  2. AI is being used to deal with the threat of wildfires. Good. Also good: AI allows farmers to monitor crops in real time. More good AI: AI used to find antibodies. By the way, here's a piece on how to turn ChatGPT into a chemistry assistant.
  3. A worthwhile piece on AI lawsuits that are coming due to intellectual property rights.
  4. The New York Times has stopped OpenAI from crawling its site. More on that, here.
  5. Here are the Associated Press's AI guidelines for journalists.
  6. Students and writers, bookmark this in case you need it: what to do when you’re accused of writing with AI.
  7. Also, what can you do when AI lies about you?
  8. This is dumb: AI builds software in under 7 minutes for less than a dollar.
  9. It's not surprising that hackers found lots of security holes in AI.
  10. Take this with a big grain of salt…one of the leaders from Palantir wonders if AI should be compared to atomic energy.
  11. This is bad: how facial recognition tech resulted in a false arrest.
  12. This is not good: a story on using AI to generate books and other junk here.
  13. This is different:  Microsoft Teams is pairing up with Maybelline to provide AI generated beauty filters / virtual makeup.
  14. It’s not news but it is interesting that NVIDIA is a hot company now due to AI. See more about that, here.
  15. Maybe ChatGPT and other AI will just be tools to do inefficient things efficiently.
  16. A thoughtful piece on powering generative AI and large language models with hybrid cloud with a surprise ending, from one of the senior leaders in my group at IBM.

(Photo: link to image in the NVIDIA story. By Philip Cheung for The New York Times)

Paul McCartney’s newest creations using history and science fiction


McCartney has always been one to explore new ideas. So it doesn't surprise me to discover that he used AI, which he said enabled him to finish a 'final' Beatles song. Unlike others who might muck about and try to create something Beatlesque with AI, he argues that there is nothing artificial in the "new" Beatles song made using this technology. If you read the two pieces, you'll likely agree. I did. AI was just an additional instrument Paul used to create music.

While he’s been in the realm of science fiction with his AI project, he’s also been going back in time using photographs to produce a new book. He writes about the book, “1964: Eyes of the Storm – Photographs and Reflections” in the Guardian, here and in The Atlantic, here.

Regardless of what he is using, here's a good essay by Austin Kleon on McCartney's creative process: McCartney on not knowing and doing it now. McCartney often gets dinged for his creative failures, but I would argue he has been so massively successful because he tries and fails often enough and does not stop whenever so-called failure occurs. (It helps that things that were once considered failures, e.g. McCartney I and II, turn out later to be considered successes.)

Here's to Paul successfully living to 100 and providing us with more great creative works.

(Image of McCartney recording McCartney II, via Austin Kleon’s site)

AI and the shift from opinion to effect


Here are some things I've been clipping out and saving concerning AI. The pattern I see emerging in my clippings is one where I am less interested in opinions on AI and more interested in the effects of AI on the world. There are still some good think pieces on AI — I put some here — but the use of AI is accelerating in the world and we need to better understand the outcomes of that.

AI Think Pieces: For people who want to be really afraid of AI, I recommend this Guardian piece on unknown killer robots and AI and… well, you read and decide. On the flip side of that, here's a good piece critical of AI alarmism.

Bill Gates chimes in on how  the risks of AI are real but manageable. My friend Clive Thompson discusses a risk of a different sort regarding AI, and that is the possibility of AI model collapse.

The mystery of how AI actually works is delved into at Vox. To me it is one of the big potential problems AI will have in the future.

Practical AI: here's a piece on how the Globe and Mail is using AI in the newsroom. Also practical: how AI is working in the world of watches. I loved this story of how AI is being used to translate cuneiform. AI is also being used to deal with forest fires.

AI effects: This piece is on how AI's large language models are having a big effect on the Web as we know it. To mitigate such things, the Grammys have outlined new rules for AI use.

When it comes to writing, I think the "Five Books" series is great. They will ask an expert in an area to recommend five books in their field that people should read. So I guess it makes sense that for books on artificial intelligence, they asked… ChatGPT. It's well worth a read.

Not all articles written by/with AI turn out great. Ask the folks at Gizmodo.

Speaking of AI and books,  these authors have filed a lawsuit against OpenAI for unlawfully ingesting their books. Could be interesting. To add to that, the New York Times reports that “fed up with A.I. companies consuming online content without consent, fan fiction writers, actors, social media companies and news organizations are among those rebelling.”

On the topic of pushback,  China is setting out new rules concerning generative AI with an emphasis on “healthy” content and adherence to socialist values.

Asia is not a monolith, of course. Other parts of Asia have been less than keen on the EU's AI lobbying blitz. Indeed, India's Infosys just signed a five-year AI deal with a $2 billion target spend, and I expect lots of other Indian companies will be doing something similar regarding AI. Those companies have lots of smart and capable IT people, and when companies like Meta open their AI model for commercial use and throw the nascent market into flux, well, that is going to create more opportunities.

Finally, I suspect there is a lot of this going around: My Boss Won’t stop using ChatGPT.


AI AI AI AI: here’s some good, bad and scary stuff on AI


I am glad that Apple released a new device last week. It was a refreshing change from what most IT discussions have been about recently. And what topic is most discussed? AI, of course.

And for good reason! There's lots and lots happening in this space. New AI technology is coming out. New uses for AI are being developed. It's an exciting space. Like many, I am having a hard time keeping up with it all. But try and keep up I must. And as I do, I have found some interesting links for me (and you) to read:

Clive Thompson has a grim take on the boring apocalypse of today’s AI 

Also grim is this story in WiReD about Tessa, the eating disorder chatbot, and why it had to be suspended. Don't leave your AI unattended!

Grimly funny: what happens when a lawyer misuses ChatGPT? Hijinks ensue!

Not grim, but clever:  A Vienna museum turned to AI and cats — yes AI and cats — to lure visitors.

Also in WiReD is this thoughtful piece on how non-English languages are being left out of the AI revolution, at least for now. I see this changing really fast.

A good New York Times piece on how training chatbots on smaller language datasets could make them better.

Fascinating to see how much AI is listed in Zapier’s app tips here.

Also fascinating: Google didn’t talk about any of their old AI while discussing their new AI during their I/O 2023 event recently. I wonder why. I wonder if they’re missing an opportunity.

AI junk: Spotify has reportedly removed tens of thousands of AI-generated songs. Also junk, in a way: AI interior design. Still more garbage AI uses, this time in the form of spam books written using ChatGPT.

This seems like an interesting technology:  liquid neural networks.

What is Falcon 40B? Only "the best open-source model currently available. Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, etc." Worth a visit.

Here’s a how-to on using AI for photo editing. Also, here’s some advice on writing better ChatGPT prompts.

This is a good use of AI: accurately diagnosing tomato leaf diseases.

For those that care: deep learning pioneer Geoffrey Hinton quit Google.

Meanwhile Sam Altman is urging the US Congress to regulate AI. In the same time period, he threatens to withdraw from Europe if there is too much regulation, only to back down. It seems like he is playing people here. Writers like Naomi Klein are rightly critical. Related is this piece: Inside the fight to reclaim AI from Big Tech's control | MIT Technology Review.

Here's another breathless piece on the AI startup scene in San Francisco. Yawn. Here's a piece on a new startup with a new AI called Character.ai that lets you talk to famous people. I guess….

Here are some things my company is doing with AI: Watsonx. But also: IBM to pause hiring for back office jobs that AI could kill. Let's see about that.

Finally, this story from BlogTO on how Josh Shiaman, a senior feature producer at TSN, set out to create a Jays ad using text-to-video AI generation, admitting that the results "did not go well." "Did not go well" is an understatement! It's the stuff of nightmares! 🙂 Go here and see.

In some ways, maybe that video is a good metaphor for AI: starts off dreamy and then turns horrific.

Or maybe not.

If you want to get a better understanding of generative AI, it pays to see what the New York Times and Bloomberg are up to

One of the problems with generative AI like ChatGPT is that it makes you think it is magical. You type in some prompt, it comes back with a complex answer, and the next thing you know you are thinking this thing is smarter than a human. It doesn't help that there is so much hype surrounding the technology. All of this can make you think it's supernatural.

Well, it isn’t. It’s simply good IT. It consists of data, hardware and software, just like any other IT. To get a better appreciation of the ordinary nature of that, it helps to look at two recent examples: the AI the New York Times recently built and the AI Bloomberg just built.

It's best to start with what the Times built. They used software called nanoGPT (karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs) and took the works of Jane Austen, Shakespeare and more to build a ChatGPT-like program on their laptops. Then they walked through the steps of getting it working, here: Let Us Show You How GPT Works — Using Jane Austen – The New York Times. It works pretty well after much training. Obviously it is not as massive or sophisticated as ChatGPT, but after reading the article, you will have a better sense of how this technology works, and why it's impressive but not magical.
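
To make the "not magical" point concrete, here is a toy sketch of the same idea in Python. To be clear, this is my own illustration, not the Times' code and not nanoGPT: instead of a neural network it just counts which character follows which in a novel and then samples from those counts, and the filename is a placeholder. But the basic shape is the same: text in, statistics learned, text out.

# A toy character-level "language model": learn from a text, then generate
# text that statistically resembles it. Here the "model" is nothing more
# than counted character-to-character (bigram) frequencies.
import random
from collections import defaultdict

def train(text):
    """Count, for each character, how often each possible next character follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1
    return counts

def generate(counts, start="I", length=200):
    """Generate text one character at a time, sampling by the learned counts."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

if __name__ == "__main__":
    # Placeholder filename: any plain-text novel will do, e.g. one from Project Gutenberg.
    with open("pride_and_prejudice.txt", encoding="utf-8") as f:
        corpus = f.read()
    model = train(corpus)
    print(generate(model, start="I"))

The output of something this crude is mostly gibberish with an Austen-ish flavour. nanoGPT replaces the counting with an actual neural network and gets something far more readable, and ChatGPT scales that up enormously. Data, software, hardware.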

After that, I recommend reading more about BloombergGPT. Their press release states:

Bloomberg today released a research paper detailing the development of BloombergGPT, a new large-scale generative artificial intelligence (AI) model. This large language model (LLM) has been specifically trained on a wide range of financial data to support a diverse set of natural language processing (NLP) tasks within the financial industry.

You can find a link to that research paper, here:  BloombergGPT: A Large Language Model for Finance. What I liked about that paper is it walks through the approach they took, the data they used, and the technology deployed to make their model. Even better, they talk about how it is currently succeeding and what some of the limits of it are.

I'm happy that both these companies have been good about sharing what they are doing with this technology. I might even try and use an old laptop to build my own AI. I mean, who wouldn't benefit from tapping into the genius of Shakespeare or Jane Austen?

For more on what Bloomberg is doing, see this: Bloomberg plans to integrate GPT-style A.I. into its terminal

 

A plethora of good links on AI

There's still an overwhelming amount of material being written on AI. Here are a few lists of some of the pieces I found most interesting:

ChatGPT: ChatGPT (3 and 4) still dominate much of the discussion I see around AI. For instance:

Using AI: people are trying to use AI for practical purposes, as those last few links showed. Here’s some more examples:

AI and imagery: not all AI is about text. There’s quite a lot going on in the visual space too. Here’s a taste:

AI and the problems it causes: there’s lots of risks with any new technology, and AI is no exception. Cases in point:

Last but not least: 

The profiles (beat-sweeteners?) of Sam Altman

Oddly (not oddly at all?) both the New York Times and the Wall Street Journal had profiles of  Sam Altman at the end of March:

  1. Sam Altman, the ChatGPT King, Is Pretty Sure It’s All Going to Be OK – The New York Times
  2. The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT – WSJ

Given the contentious nature of AI and ChatGPT, you might think that those pieces would have asked tough questions of Altman concerning AI. Especially since Lesley Stahl did something similar to execs of Microsoft a few weeks earlier. Perhaps the work of Stahl is why Microsoft / OpenAI wanted Altman to get out there with his story. If that was the intent, then it seemed to work. Nothing too tough in either of these profiles.

Then again, perhaps they were written as beat-sweeteners. After all, getting access is just as important for tech journalists as it is for political journalists. If you want to write more about AI in the future, being able to ring up Altman and his gang and get through to them for a comment seems like something you might want for your job. No doubt profiles like that can help with that access.

For more on the topic of beat-sweeteners, I give you this: Slate’s Beat-Sweetener Reader in Columbia Journalism Review.


The Gartner Hype Cycle: one good way to think about technological hype

Below is the Gartner hype cycle curve with its famous five phases:

For those not familiar with it, the chart below breaks it down further and helps you see it in action. Let’s examine that.

Chances are, if you are not working with emerging IT and you start hearing about a hyped technology (e.g., categories like blockchain or AI), it is in the phase: Peak of Inflated Expectations. At that stage the technology starts going from discussions in places like Silicon Valley to write-ups in the New York Times. It's also in that phase that two other things happen: "Activity beyond early adopters" and "Negative press begins".

That’s where AI — specifically generative AI — is: lots of write ups have occurred, people are playing around with it, and now the negative press occurs.

After that phase technologies like AI start to slide down into my favorite phase of the curve: the Trough of Disillusionment. It’s the place where technology goes to die. It’s the place where technology tries to cross the chasm and fails.

See that gap on the Technology Adoption Lifecycle curve? If a technology can get past that gap ("The Chasm") and get adopted by more and more people, then it will move on through the Gartner hype curve, up the Slope of Enlightenment and onto the Plateau of Productivity. As that happens, there is less talking and more doing when it comes to the tech.

That said, my belief is that most technology dies in the Trough. Most technology does not and cannot cross the chasm. Case in point, blockchain. Look at the hype curve for blockchain in 2019:

At the time people were imagining blockchain everywhere: in gaming, in government, in supply chain…you name it. Now some of that has moved on to the end of the hype cycle, but most of it is going to die in the Trough.

The Gartner Hype Curve is a useful way to assess technology that is being talked about, as is the Technology Adoption Curve. Another good way of thinking about hype can be found in this piece I wrote here. In that piece I show there are five levels of hype: Marketing Claims, Exaggerated Returns, Utopian Futures, Magical Thinking, and Othering. For companies like Microsoft talking about AI, the hype levels are at the level of Exaggerated Returns. For people writing think pieces on AI, the hype levels go from Utopian Futures to Othering.

In the end, however you assess it, it's all just hype. When a technology comes out, assess it for yourself as best as you can. Take anything being said and assign it a level of hype from 1 to 5. If you are trying to figure out if something will eventually be adopted, use the curves above.

Good luck!

A handy guide to spotting AI generated images

Well, two handy guides. One from the Verge and one from the Washington Post. The Verge talks about the phenomenon in general, while the Post is more specific.

It’s possible that the AI software that generates imagery will get better. But for now, those guides are helpful in spotting fakes.

(Image from the Verge. It is highlighting things to look for: weird hands, illegibility, odd shadows.)

More reasons why ChatGPT is not going to replace coders

I have been arguing recently about the limits of the current AI and why it is not going to take over the job of coding yet. I am not alone in this regard. Clive Thompson, who knows a lot about the topic, recently wrote this: Why ChatGPT Won’t Replace Coders Just Yet. Among other reasons, the “‘bullshit’ problem turns up in code, too”. I recommend you read Clive’s take on the subject. And after you read that, check out his book, “Coders”. You can order it, here, from his publisher. I think it’s a classic and one of the best things written on software.

I asked ChatGPT to do my job and write some code to build out some cloud infrastructure. How did it go?

For fun I decided to see if I am ready to be replaced by AI via ChatGPT. Part of my work is building cloud-based infrastructure using a language called HCL from HashiCorp and a tool called Terraform. You write one or more program files using HCL, run them through Terraform, and voila, you have the infrastructure you need.

For this exercise I entered the following prompt:

Create a terraform file for IBM cloud that will provision a VPC, set up networking for the VPC, and provision a virtual server for VPC. It will have 1 security group and a rule for this security group to allow SSH connection to your virtual server instance, 1 subnet to enable networking in your VPC, 1 VPC virtual server instance and 1 floating IP address that you use to access your VPC virtual server instance over the public network.

A good prompt, I thought. Nothing vague or conflicting. Practically a form of code itself.

I was shocked when in a few seconds I got this response:

To provision a VPC, networking, and a virtual server with a security group and a floating IP address on IBM Cloud, you can use the following Terraform configuration file:

# Configure the IBM provider
provider "ibm" {
  generation = 2
  region     = "us-south"
}

# Create a VPC
resource "ibm_is_vpc" "vpc" {
  name                      = "my-vpc"
  address_prefix_management = "auto"
}

# Create a subnet in the VPC
resource "ibm_is_subnet" "subnet" {
  name            = "my-subnet"
  vpc             = ibm_is_vpc.vpc.id
  ipv4_cidr_block = "10.240.0.0/24"
}

# Create a security group
resource "ibm_is_security_group" "security_group" {
  name        = "my-security-group"
  vpc         = ibm_is_vpc.vpc.id
  description = "Allow SSH inbound traffic"
}

# Add an SSH rule to the security group
resource "ibm_is_security_group_rule" "ssh_rule" {
  security_group = ibm_is_security_group.security_group.id
  direction      = "inbound"
  protocol       = "tcp"
  port_min       = 22
  port_max       = 22
  remote         = "0.0.0.0/0"
}

# Create a virtual server instance in the subnet
resource "ibm_is_instance" "instance" {
  name    = "my-instance"
  image   = "ibm-centos-7-10-minimal-amd64-2"
  profile = "bx2-2x8"
  primary_network_interface {
    subnet = ibm_is_subnet.subnet.id
  }
  vpc             = ibm_is_vpc.vpc.id
  security_groups = [ibm_is_security_group.security_group.id]
}

# Create a floating IP address
resource "ibm_is_floating_ip" "floating_ip" {
  name = "my-floating-ip"
}

# Associate the floating IP address with the virtual server instance
resource "ibm_is_floating_ip_attachment" "attachment" {
  floating_ip = ibm_is_floating_ip.floating_ip.address
  instance    = ibm_is_instance.instance.id
}

Before running the above code, replace with your IBM Cloud API key and with the appropriate values for your IBM Cloud region and availability zone.
Once you have saved this configuration file, you can initialize Terraform by running terraform init, and then create the resources by running terraform apply. After a few minutes, Terraform will create the VPC, subnet, security group, virtual server, and floating IP address. You can then use the floating IP address to SSH into your virtual server instance over the public network.

Wow! What an amazing response, I thought!  It was quick, detailed, seemingly knowledgeable. Also…wrong.

There was so much fundamentally wrong with the code that the only way to get it to work was to rewrite much of it and also add a bunch of new code to make it work. I can’t even tell you how it came up with some of the statements: they are not defined that way for Terraform specs for IBM Cloud.

I even had the AI regenerate the code to see if it could get it right the second time. Instead the new code had 9 errors in it.

Fine. My manager provided me with a prompt of his own: see if it will work on AWS. (Good prompt, Ted!)

I did try it on AWS and Azure. With Azure the response was an incomplete script. (At least the IBM script was complete, though wrong.) With AWS the script was better. I could enter terraform plan and Terraform thought it looked good. But once I entered terraform apply to build the resources, it failed.

I suspect the larger problem is a lack of sufficient overlapping data for the AI tools to train on. So it sort of gets the code right, but "sort of" isn't really good enough.

I see people on the Internet raving about how well AI is doing writing code. Some of the examples are interesting, but I think it has a way to go. I’ll stick to doing my day job. Without AI to help. 🙂

 

Paul Kedrosky & Eric Norlin of SKV know nothing about software and you should ignore them

Last week Paul Kedrosky & Eric Norlin of SKV wrote this piece, Society's Technical Debt and Software's Gutenberg Moment, and several smart people I follow seemed to like it and think it worthwhile. It's not.

It’s not worthwhile because Kedrosky and Norlin seem to know little if anything about software. Specifically, they don’t seem to know anything about:

  • software development
  • the nature of programming or coding
  • technical debt
  • the total cost of software

Let me wade through their grand and woolly pronouncements and focus on that.

They don’t understand software development: For Kedrosky and Norlin, what software engineers do is predictable and grammatical. (See chart, top right).

To understand why that is wrong, we need to step back. The first part of software development and software engineering should start with requirements. It is a very hard and very human thing to gather those requirements, analyze them, and then design a system around them that meets the needs of the person(s) with the requirements. See where architects are in that chart? In the Disordered and Ad hoc part in the bottom left. Good IT architects and business analysts and software engineers also reside there, at least in the first phase of software development. Getting to the predictable and grammatical section, which comes in later phases, takes a lot of work. It can be difficult and time consuming. That is why software development can be expensive. (Unless you do it poorly: then you get a bunch of crappy code that is hard to maintain or has to be dramatically refactored and rewritten because of the actual technical debt you incurred by rushing it out the door.)

Kedrosky and Norlin seem to exclude that from the role of software engineering. For them, software engineering seems to be primarily writing software. Coding in other words. Let’s ignore the costs of designing the code, testing the code, deploying the code, operating the code, and fixing the code. Let’s assume the bulk of the cost is in writing the code and the goal is to reduce that cost to zero.

That's not just my assumption: it seems to be their assumption, too. They state: "Startups spend millions to hire engineers; large companies continue spending millions keeping them around. And, while markets have clearing prices, where supply and demand meet up, we still know that when wages stay higher than comparable positions in other sectors, less of the goods gets produced than is societally desirable. In this case, that underproduced good is…software".

Perhaps that is how they do things in San Francisco, but the rest of the world has moved on from that model ages ago. There are reasons that countries like India have become powerhouses in terms of software development: they are good software developers and they are relatively low cost. So when they say: "software is chugging along, producing the same thing in ways that mostly wouldn't seem vastly different to developers doing the same things decades ago….(with) hands pounding out code on keyboards", they are wrong because the nature of developing software has changed. And one of the ways it has changed is that the vast majority of software is written in places that have the lowest cost software developers. So when they say "that software cannot reach its fullest potential without escaping the shackles of the software industry, with its high costs, and, yes, relatively low productivity", they seem to be locked in a model where software is written the way it is in Silicon Valley by Stanford-educated software engineers. The model does not match the real world of software development. Already the bulk of the cost in writing code in most of the world has been reduced not to zero, but to a very small number compared to the cost of writing code in Silicon Valley or North America. Those costs have been wrung out.

They don't understand coding: Kedrosky and Norlin state: "A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment". In their piece they use an example of AI writing some Python code that can "open a text file and get rid of all the emojis, except for one I like, and then save it again". Even they know this is "a trivial, boring and stupid example" and say "it's not complex code".
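
For concreteness, here is roughly what that kind of trivial script looks like. This is my own sketch in Python, not the code from their piece; the filename, the emoji ranges, and the one emoji being kept are all placeholder assumptions.

# Remove every emoji from a text file except one you want to keep, then save it again.
import re

KEEP = "🙂"  # placeholder: the one emoji to keep
# A rough, non-exhaustive set of emoji code point ranges.
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def strip_emojis(path):
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # Drop every emoji match unless it is the one we want to keep.
    cleaned = EMOJI.sub(lambda m: m.group(0) if m.group(0) == KEEP else "", text)
    with open(path, "w", encoding="utf-8") as f:
        f.write(cleaned)

if __name__ == "__main__":
    strip_emojis("notes.txt")  # placeholder filename

It runs, it is short, and it is exactly the sort of thing the Internet is already full of, which is the point below.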

Here's the problem with writing code, at least with the current AI. There are at least three difficulties that AI code generators suffer from: triviality, incorrectness, and prompt skill.

First, the problem of triviality. It’s true: AI is good at making trivial code. It’s hard to know how machine learning software produces this trivial code, but it’s likely because there are lots of examples of such code on the Internet for them to train on. If you need trivial code, AI can quickly produce it.

That said, you don’t need AI to produce trivial code. The Internet is full of it. (How do you think the AI learned to code?) If someone who is not a software developer wants to learn how to write trivial code they can just as easily go to a site like w3schools.com and get it. Anyone can also copy and paste that code and it too will run. And with a tutorial site like w3schools.com the explanation for the code you see will be correct, unlike some of the answers I’ve received from AI.

But what about non-trivial code? That’s where we run into the problem of  incorrectness. If someone prompts AI for code (trivial or non-trivial) they have no way of knowing it is correct, short of running it. AI can produce code quickly and easily for you, but if it is incorrect then you have to debug it. And debugging is a non-trivial skill. The more complex or more general you make your request, the more buggy the code will likely be, and the more effort and skill you have to contribute to make it work.

You might say: incorrectness can be dealt with by better prompting skills. That's a big assumption, but let's say it's true. Now you get to the third problem. To get correct and non-trivial outputs (if you can get them at all), you have to craft really good prompts. That's not a skill just anyone will have. You will have to develop specific skills — prompt engineering skills — to be able to have the AI write Python or Go or whatever computer language you need. At that point the prompt to produce that code is a form of code itself.

You might push back and say: sure, the prompts might be complex, but it is less complicated than the actual software I produce. And that leads to the next problem: technical debt.

They don’t understand technical debt: when it comes to technical debt, Kedrosky and Norlin have two problems. First, they don’t understand the idea of technical debt! In the beginning of their piece they state: “Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.”

That's not how those of us in the IT community define it. Technical debt is not a lack of software supply. Even Wikipedia knows better: "In software development, technical debt (also known as design debt or code debt) is the implied cost of future reworking required when choosing an easy but limited solution instead of a better approach that could take more time". THAT is technical debt.

One of the things I do in my work is assess technical debt, either in legacy systems or new systems. My belief is that once AI can produce code that is non-trivial and correct and based on prompts, we are going to get an explosion of technical debt. We are going to get code that appears to solve a problem and do so with a volume of python (or Java or Go or what have you) that the prompt engineer generated and does not understand. It will be like copy and paste code amplified. Years from now people will look at all this AI generated code and wonder why it is the way it is and why it works the way it does. It will take a bunch of counter AI to translate this code into something understandable, if that will even be possible. Meanwhile companies will be burdened with higher levels of technical debt accelerated by the use of AI developed software. AI is going to make things much worse, if anything.

They don’t understand the total cost of software:  Kedrosky and Norlin included this fantasy chart in their piece.

First off, most people or companies purchase software, not software engineers. That’s the better comparison to hardware.  And if you do replace “Software engineers” with software, then in certain areas of software this chart has already happened. The cost of software has been driven to zero.

What drove this? Not AI. Two big things that drove this are open source and app stores.

In many cases, open source reduced the (licensing) cost of software to zero. For example, when the web first took off in the 90s, I recall Netscape sold their web server software for $10,000. Now? You can download and run free web server software like nginx on a Raspberry Pi for free. Heck, you can write your own web server using node.js.

Likewise with app stores. If you wanted to buy software for your PC in the 80s or 90s, you had to pay significantly more than 99 cents for it. It certainly was not free. But the app stores drove the expectation people had that software should be free or practically free. And that expectation drove down the cost of software.

Yet despite developments like open source and app stores driving the cost of software close to zero, people and organizations are still paying plenty for the "free" software. And you will too with AI software, whether it's commercial software or software for your personal use.

I believe that if you have AI generating tons of free personal software, then you will get a glut of crappy apps and other software tools. If you think it's hard to find good personal software now, wait until that happens. There will still be good software, but to develop that will cost money, and that money will be recovered somehow, just like it is today with free apps with in-app purchases or apps that steal your personal information and sell it to others. And people will still pay for software from companies like Adobe. They are paying for quality.

Likewise with commercial software. There is tons of open source software out there. Most of it is wisely avoided in commercial settings. However the good stuff is used and it is indeed free to licence and use.

However the total cost of software is more than the licencing cost. Bad AI software will need more capacity to run and more people to support it, just like bad open source does. And good AI software will need people and services to keep it going, just like good open source does. Some form of operations, even if it is AIOps (another cost), will need expensive humans to ensure the increasing levels of quality required.

So AI can churn out tons of free software. But the total cost of such software will go elsewhere.

To summarize, producing good software is hard. It's hard to figure out what is required, and it is hard to design and build and run it to do what is required. Likewise, understanding software is hard. It's called code for a reason. Bad code is tough to figure out, but even good code that is out of date or used incorrectly can have problems, and solving those problems is hard. And last, free software has other costs associated with it.

P.S. It's very hard to keep up with and counter all the hot takes on what AI is going to do for the world. Most of them I just let slide or let others better than me deal with. But I wanted to address this piece in particular, since it seemed influential and uncountered.

P.P.S. Besides all that above, they also made some statements that just had me wondering what they were thinking. For example, when they wrote: "This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation." Pure hype.

Or this: "Software is misunderstood. It can feel like a discrete thing, something with which we interact. But, really, it is the intrusion into our world of something very alien. It is the strange interaction of electricity, semiconductors, and instructions, all of which somehow magically control objects that range from screens to robots to phones, to medical devices, laptops, and a bewildering multitude of other things." I mean, what is that all about?

And this:  “The current generation of AI models are a missile aimed, however unintentionally, directly at software production itself”. Pure bombast.

Or this hype: “They are “toys” in that they are able to produce snippets of code for real people, especially non-coders, that one incredibly small group would have thought trivial, and another immense group would have thought impossible. That. Changes. Everything.”

And this is flat up wrong: “This is just the beginning (and it will only get better). It’s possible to write almost every sort of code with such technologies, from microservices joining together various web services (a task for which you might previously have paid a developer $10,000 on Upwork) to an entire mobile app (a task that might cost you $20,000 to $50,000 or more).”


Will AI tools based on large language models (LLMs) become as smart or smarter than us?

With the success and growth of tools like ChatGPT, some are speculating that the current AI could lead us to a point where AI is as smart as, if not smarter than, us. Sounds ominous.

When considering such ominous thoughts, it's important to step back and remember that Large Language Models (LLMs) are tools based in whole or in part on machine learning technology. Despite their sophistication, they still suffer from the same limitations that other machine learning technologies suffer from, namely:

    • bias
    • explainability
    • overfitting
    • learning the wrong lessons
    • brittleness

There are more problems than those for specific tools like ChatGPT, as Gary Marcus outlines here:

  • the need for retraining to get up to date
  • lack of truthfulness
  • lack of reliability
  • it may be getting worse due to data contamination (Garbage in, garbage out)

It’s hard to know if current AI technology will overcome these limitations. It’s especially hard to know when orgs like OpenAI do this.

My belief is these tools will hit a peak soon and level off or start to decline. They won’t get as smart or smarter than us. Not in their current form. But that’s based on a general set of experiences I’ve acquired from being in IT for so long. I can’t say for certain.

Remain calm. That’s my best bit of advice I have so far. Don’t let the chattering class get you fearful. In the meanwhile, check out the links provided here. Education is the antidote to fear.

Are AI and ChatGPT the same thing?

Reading about all the amazing things done by the current AI might lead you to think that: AI = ChatGPT (or DALL-E, or whatever people like OpenAI are working on). It’s true, it is currently considered AI,  but there is more to AI than that.

As this piece explains, How ChatGPT Works: The Model Behind The Bot:

ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Model (LLMs).

Like ChatGPT, many of the current and successful AI tools are examples of machine learning. And while machine learning is powerful, it is just part of AI, as this diagram nicely shows:

To get an idea of just how varied and complex the field of artificial intelligence is, just take a glance at this outline of AI. As you can see, AI incorporates a wide range of topics and includes many different forms of technology. Machine learning is just part of it. So ChatGPT is AI, but AI is more than ChatGPT.

Something to keep in mind when fans and hypesters of the latest AI technology make it seem like there’s nothing more to the field of AI than that.

What is AI Winter all about and why do people who’ve worked in AI tend to talk about it?

It might surprise people, but work in AI has been going on for some time. In fact it started as early as the mid-1950s. From the 1950s until the 70s, "computers were solving algebra word problems, proving theorems in geometry and learning to speak English". They were nothing like OpenAI's ChatGPT, but they were impressive in their own way. Just like now, people were thinking the sky's the limit.

Then three things happened: the first AI winter from 1974 until 1980, the boom years from 1980-1987, and then the next AI winter from 1987-1993. I was swept up in the second AI winter, and like the first one, there was a combination of hitting a wall in terms of what the technology could do followed by a drying up of funding.

During the boom times it seemed like there would be no stopping AI and it would eventually be able to do everything humans can do and more. It feels that way now with the current AI boom. People like OpenAI and others are saying the sky's the limit and nothing is impossible. But just like in the previous boom eras, I think the current AI boom will hit a wall with the technology (we are seeing some of it already). At that point we may see a reduction in funding from companies like Microsoft and Google and more (just like the pullback we are seeing from them on voice recognition technology like Alexa and Siri).

So yes, the current AI technology is exciting. And yes, it seems like there is no end to what it can do. But I think we will get another AI winter sooner than later, and during this time work will continue in the AI space but you'll no longer be reading news about it daily. The AI effect will also occur, and the work being done by people like OpenAI will just get incorporated into the everyday tools we use, just like autocorrect and image recognition are now just something we take for granted.

P.S. If you are interested in the history of the second AI winter, this piece is good.

What is the AI effect and why should you care?

Since there is so much talk about AI now, I think it is good for people to be familiar with some key ideas concerning AI. One of these is the AI effect. The cool AI you are using now, be it ChatGPT or DALL-E or something else, will eventually get incorporated into some commonplace piece of IT and you won’t even think much of it. You certainly won’t be reading about it everywhere. If anything you and I will complain about it, much like we complain about autocorrect.

So what is the AI Effect? As Wikipedia explains:

The "AI effect" is that line of thinking, the tendency to redefine AI to mean: "AI is anything that has not been done yet." This is the common public misperception, that as soon as AI successfully solves a problem, that solution method is no longer within the domain of AI. Geist credits John McCarthy giving this phenomenon its name, the "AI effect".

McCorduck calls it an “odd paradox” that “practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the ‘failures’, the tough nuts that couldn’t yet be cracked.”[5]

It's true. Many things over the years that were once thought of as AI are now considered simply software or hardware, if we even think of them at all. Whether it is winning at chess, recognizing your voice, or recognizing text in an image, these things are commonplace now, but were lofty goals for AI researchers once.

The AI effect is a key idea to keep in mind when people are hyping any new AI as the thing that will change everything. If the new AI becomes useful, we will likely stop thinking it is AI.

For more on the topic, see: AI effect – Wikipedia

No, prompt engineering is not going to become a hot job. Let a former knowledge engineer explain

With the rise of AI, LLMs, ChatGPT and more, a new skill has arisen. The skill involves knowing how to construct prompts for the AI software in such a way that you get an optimal result. This has led a number of people to start saying things like this: prompt engineer is the next big job. I am here to say this is wrong. Let me explain.

I was heavily into AI in the late 20th century, just before the last AI winter. One of the hot jobs at that time was going to be knowledge engineer (KE). A big part of AI then was the development of expert systems, and the job of the KE was to take the expertise of someone and translate it into rules that the expert system could use to make decisions. Among other things, part of my role was to be a KE.

So what happened? Well, first off, AI winter happened. People stopped developing expert systems and went and took on other roles.  Ironically, rules engines (essentially expert systems) did come back, but all the hype surrounding them was gone, and the role of KE was gone too. It wasn’t needed. A business analyst can just as easily determine what the rules are and then have a technical specialist store that in the rules engine.

Assuming tools like ChatGPT were to last, I would expect the creation of prompts for them to be taken on by business analysts and technical specialists. Business as usual, in other words. No need for a "prompt engineer".

Also, you should not assume things like ChatGPT will last. How these tools work is highly volatile; they are not well structured things like programming languages or SQL queries. The prompts that worked on them last week may result in nothing a week later. Furthermore, there are so many problems with the new AI that I could easily see them falling into a new AI winter in the next few years.

So, no, I don’t think Prompt Engineering is a thing that will last. If you want to update your resume to say Prompt Engineer after you’ve hacked around with one of the current AI tools out there, knock yourself out. Just don’t get too far ahead of yourself and think there is going to be a career path there.

Fake beaches! Fake lawyers! ChatGPT! and more (what I find interesting in AI, Feb 2023)


There is so much being written about AI that I decided to blog about it separately from other tech. Plus AI is so much more than just tech. It touches on education, art, the law, medicine…pretty much anything you can think of. Let me show you.

Education: there’s been lots said about how students can (are?) using ChatGPT to cheat on tests. This piece argues that this is a good time to reassess education as a result. Meanwhile, this Princeton Student built GPTZero to detect AI-written essays, so I suspect some people will also just want to crack down on the use of AI. Will that stop the use of AI? I doubt it. Already companies like Microsoft are looking to add AI technology to software like Word. Expect AI to flood and overwhelm education, just like calculators once did.

Art: artists have been adversely affected by AI for a while. Some artists decided to rise up against it by creating anti-AI protest work. You can read about that, here. It's tough for artists to push back on AI abuses: they don't have enough clout. One org that will not have a problem with clout is Getty Images. They've already started to fight back against AI with a lawsuit. Good.

Is AI doing art a bad thing? I’ve read many people saying it will cause illustrators and other professional artists to lose their jobs. Austin Kleon has an interesting take on that. I think he is missing the point for some artists, but it’s worth reading.

Work: besides artists losing their jobs, others could as well. The NYPost did a piece on how ChatGPT could make this list of jobs obsolete. That may be shocking to some, but for people like me who have been in IT for some time, it's just a fact that technology takes away work. Many of us embrace that, so that when AI tools come along and do coding, we say "Yay!". In my experience, humans just move on to provide business value in different ways.

The law: one place I wish people would be more cautious with using AI is in the law. For instance, we had this happen: an AI robot lawyer was set to argue in court. Real lawyers shut it down. I get it: lawyers are expensive and AI can help some people, but that's not the way to do it. Another example is this, where you have AI generating wills. Needless to say, it has a way to go. An even worse example: Developers Created AI to Generate Police Sketches. Experts Are Horrified. Police are often the worst abusers of AI and other technology, sadly.

Medicine: AI can help with medicine, as this shows. Again, like the law, doctors need to be careful. But that seems more promising.

The future and the present: if you want an idea of where AI is going, I recommend this piece in technologyreview and this piece in WaPo.

Meanwhile in the present, Microsoft and Google will be battling it out this year. Microsoft is in the lead so far, but reading this, I am reminded of the many pitfalls ahead: Microsoft's new AI Prometheus didn't want to talk about the Holocaust. Yikes. As for Google, reading this blogpost of theirs on their new AI tool Bard had me thinking it would be a contender. Instead it was such a debacle even Googlers were complaining about it! I am sure they will get it right, but holy smokes.

Finally: this is what AI thinks about Toronto. Ha! As for that beach I mentioned, you will want to read here: This beach does not exist.

(Image above: ChatGPT logo from Wikipedia)

 

Some very good thoughts (especially at the end) and the usual ramblings on a new year (i.e. the January 2023 edition of my not-a-newsletter newsletter)

We finally closed the book on another pandemic year (2022), and have moved through the first month of 2023. Yay for us!  Is 2023 going to be a pandemic year as well? An endemic year perhaps? We don’t know. One thing for sure: compared to last January, this one has been much gentler.

I think in some ways 2023 may be a transition year. We continue to have transitions when it comes to COVID. We still have new variants like the Kraken (XBB.1.5), which has surged to 40.5% of all infections, and rises in hospitalizations. But we take that as a matter of course now. Indeed, there is talk of having annual COVID and flu vaccines. COVID may be more serious than the flu in terms of illness and death, but we may end up approaching them in the same way. No one talks much of flu deaths, and perhaps other than places like Nova Scotia, no one will talk about COVID deaths either. For example, in my province of Ontario it is relatively easy to track hospitalizations related to COVID: it's relatively hard to report on deaths.

I know because I have still been reporting on COVID hospitalizations every week on Twitter for months. My last update was this one:

As I tweeted, the numbers have been dropping recently. Even the ICU numbers, which shot up due to the tripledemic, have declined as the tripledemic declined. Thank god: the pediatric ICUs in November were over 100% full for a time.

So we are transitioning in a positive direction. Good. And not just with COVID.  Everywhere you see spike graphs, like this one for unemployment:

To this one for inflation:

My expectation is that the annual inflation rate will continue to transition and decline in 2023, and interest rates will follow it. That is not to diminish the impact that inflation has had so far. Things have reached the point where people are stealing food and law firms are promising to defend them for free. That said, many, including the New York Times, expect inflation to cool this year. Perhaps it will drop back to where it used to be (i.e. below 3%). If you are skeptical, I recommend this piece in VOX.

Unlike COVID or inflation, not everything has the prospect of improving in 2023. Guns in the US continue to be a major problem. There is no end in sight for the war in Ukraine. NATO is still supportive and continues to send weapons, although it seems like Zelenskyy had to clear the decks before that occurred. As for cryptocurrencies, it may not be a year of recovery for them as the trial of SBF and FTX unfolds. But who knows: maybe this rally will make a difference.

I suspect crypto will stay dormant for many reasons. One big reason is that tech is going to change its focus from Web3 to AI. Sorry Web3. (Sorry metaverse for that matter!) Microsoft alone is spending billions on it. AI will be all anyone will talk about this year. (No one knew what to do with crypto, save techies and rich people flogging NFTs. Everyone I know seems to be using ChatGPT and the like. That’s a key difference). I’ll be writing more about AI in standalone posts in 2023, there will be so much going on.

In 2023 I expect a continuation of the trend of people flooding back into cities after having left them, based on data like this: Annual demographic estimates census metropolitan areas and census. While residences have become scarce (and rents have become high) as a result, people have not been flooding back into offices. So much so that places like NYC are looking to convert office spaces to residential spaces. The problem with the pandemic is that the changes it has forced on society are more rapid than social systems can respond. But respond they will.

Then again, a new surge could reoccur in China. If that occurs, all bets are off. For now my bets are staying on the table.

Finally, thanks for reading this and anything else you read on this blog recently. I appreciate it. I am optimistic for 2023 in many ways. I hope you are too.

Keep wearing your masks when advisable. Get vaxxed to the max.  Try not to pay attention to Elon Musk or the fate of Twitter: that will all play out in due course. Don’t get too hung up about what AI is going to do: that will all play out as well. Continue to read newsletters. Watch streaming. Listen to podcasts. Most importantly: get out and about whenever you can.

There will always be bad people in the world, and bad acts occurring. Do what you can to prevent that from happening, but don’t rob yourself of your capacity for joy as a result. Be a happy warrior on the side of good. Joy is your armour.

Never forget: you have lived and possibly thrived through some of the most dramatically difficult times in history.  You deserve better times ahead.

Enjoy yourself. Live your life robustly. Whenever you feel lethargic, think back to those times of being locked down and unable to even go to a park and sit down.  Let’s go and get it. Here’s to a better year ahead. We are counting on you, 2023.

Sorry robots: no one is afraid of YOU any more. Now everyone is freaking out about AI instead


It seems weird to think there are trends when it comes to fearing technology. But thinking about it, there seem to be. For a while my sources of information kept providing me stories of how fearsome robots were. Recently that has shifted, and the focus has moved to how fearsome AI is. Fearing robots is no longer trendy.

Well, trendy or not, here are some stories about robots that have had people concerned. If you have any energy left from being fearful of AI, I recommend them. 🙂

The fact that a city is even contemplating this is worrying: San Francisco Supervisors Vote To Allow Killer Robots. Relatedly, Boston Dynamics pledges not to weaponize its robots.

Not that robots need weapons to be dangerous, as this showed: chess robot breaks child's finger at Russia tournament. I mean, who worries about a "chess robot"??

Robots can harm in other ways, as this story on training robots to be racist and sexist showed.

Ok, not all the robot stories were frightening. These three are simply of interest:

This was a good story on sewer pipe inspection using cable-tethered robots. I approve of this use of robots, though there are some limitations.

I am not quite a fan of this development:  Your Next Airport Meal May Be Delivered By Robot. I can just see these getting in the way and making airports that much harder to get around.

Finally, here’s a  327 Square Foot Apartment With 5 Rooms Thanks to Robot Furniture. Robot furniture: what will they think of next?

(Image is of the sewer pipe inspection robot.)

 

2022 is done. Thoughts and rambling on the last 365 days (i.e. the December 2022 edition)

Another year over. A semi-pandemic year, in a sense. Covid is still with us, but we did not (so far) get slammed with a bad new variant like we did last year with Omicron. Instead the pandemic is less severe than it was, though still worse than the flu in terms of the sickness and death it brings. We still get vaccinated, though less than before. Schools are in session (though affected), restaurants are dined in, parties and special events are attended.

You could say things look… normal. But then you can look towards China: they seem to be struggling to deal with COVID lately. Who knows what 2023 will bring? More normal or more like China?

But that's for 2023. As for last year and what was trending, we can look to Google, which has all the data. One place that was trending a lot in 2022: China. China is struggling with both Covid and Xi's approach to it, as this shows. As for the Chinese leader himself, it was a bad year for Xi, as well as Putin and other global bad guys, sez VOX. And it's not just Chinese residents who are having to deal with Xi and his government: Canada has been investigating Chinese police stations in Canada. More on that here. I expect China will also trend in 2023. Let's hope it's for better reasons.

Other trending topics in 2022? Crypto. There was lots of talk about it and about people like Sam Bankman-Fried after the collapse of his cryptocurrency exchange and his subsequent arrest. We had stories like this: How I turned $15 000 into $1.2m during the pandemic and then lost it all. Tragic. The overall collapse of the industry has led to things like bans on crypto mining. That's good. It has also led to questions about the fundamentals, like: Blockchains What Are They Good For? Last, to keep track of all the shenanigans, I recommend this site: Web3 is Going Just Great. I expect crypto to remain a shambles next year. Time and money will tell.

Elon Musk also managed to trend quite often due to his takeover of Twitter and more. He still has fans, but many are disillusioned. After all, his campaign to win back Twitter advertisers isn't going well. He was outright booed on stage with Dave Chappelle. (No doubt being a jerk contributed to this.) Tesla stock is tanking. Even his Starlink is losing money. What a year of failure. I can't see his 2023 improving either. Hard to believe he was Time's Person of the Year in 2021!

Because of Musk, people are looking to join other networks, like Mastodon. (BTW, here’s some help on How to Make a Mastodon Account and Join the Fediverse). Some are looking to old networks, like this: the case for returning to tumblr. Some are looking at new ways to socialize online, like this.

Musk was not alone in trending this year due to being a bad guy. Let’s not forget that Kanye West trended as well due to his freakish behavior and antisemitism.

AI was another big trend this year, with things like ChatGPT and Stable Diffusion (here's how you can set it up on AWS). We also had stories like this: Madison Square Garden Uses Facial Recognition to Ban Its Owner's Enemies. Not good. What's next for AI? This takes a look. I think we may get an AI winter, but we have 12 months to see if that holds true.
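If you're curious what setting it up actually involves, here's a minimal sketch of generating an image with Stable Diffusion via Hugging Face's diffusers library. Treat it as an illustration only: the model ID and the assumption of a CUDA GPU are my own choices, and the linked piece is what covers the AWS side.

# Minimal sketch: one image from Stable Diffusion using the diffusers library.
# Assumes: pip install diffusers transformers torch, and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example model ID, not from the linked article
    torch_dtype=torch.float16,          # half precision so it fits on a consumer GPU
)
pipe = pipe.to("cuda")

image = pipe("a lighthouse at dusk, in the style of an oil painting").images[0]
image.save("lighthouse.png")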

For what it's worth, newsletters like Matt Yglesias's are still going strong, though levelling off, I think.

Trends and developments aside, here are some other topics I found interesting and worth bringing up to close the year:

Assisted death was a grim topic in Canada in 2022. I remain glued to stories like this: We're all implicated in Michael Fraser's decision to die, and this and this. It all seems like a failure, although this argues that assisted dying is working.

Here are two good pieces on homelessness: Did Billions in Spending Make a Dent in Homelessness? And 'It's a sin that we all had to leave': Moving out of Meagher Park.

Need some advice for the new year? Try this: How Much and Where Are You Really Supposed to Tip? Consider this a good approach to reading. Here's a good approach to slowing down, while here's a good discussion on Boundaries. Things to avoid: the biggest wastes of time we regret when we get older.

Things I found interesting in sports this year:

Things I found interesting in general this year:

Finally, here’s some good advice to close out the year: Don’t Treat Your Life as a Project.

Thanks for reading this and anything else you read on this blog in 2022. I appreciate it. I managed to blog about roughly 3000 things on the internet this year. I hope you found some of them useful.

Happy New Year!

The rise and fall of Alexa and the possibility of a new A.I. winter

I recall reading this piece (What Would Alexa Do?) by Tim O'Reilly in 2016 and thinking, "Wow, Alexa is really something!" Six years later we know what Alexa would do: Alexa would kick the bucket (according to this: Hey Alexa, Are You There?). I confess I was as surprised by its upcoming demise as I was by its ascendance.

Since reading about the fall of Alexa, I've looked at the new AI in a different and harsher light. So while people like Kevin Roose can write about the brilliance and weirdness of ChatGPT in The New York Times, I cannot stop wondering about the fact that as ChatGPT hits one million users, its costs are eye-watering. (Someone mentioned a figure of $3M in cloud costs per day.) If that keeps up, ChatGPT may join Alexa.
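To see why that rumoured figure is eye-watering, here's a back-of-envelope calculation. Both inputs are the unconfirmed numbers above (one million users, roughly $3M a day in cloud costs), not official figures:

# Rough arithmetic on the rumoured ChatGPT cloud bill -- illustration only.
daily_cloud_cost = 3_000_000   # USD per day, the unconfirmed "$3M/day" figure
users = 1_000_000              # the reported one-million-user milestone

cost_per_user_per_day = daily_cloud_cost / users   # about $3.00 per user, per day
annual_cost = daily_cloud_cost * 365               # roughly $1.1 billion per year

print(f"${cost_per_user_per_day:.2f} per user per day")
print(f"${annual_cost:,} per year")

Roughly three dollars per user per day, for a free product. That is not a business model; that is a burn rate.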

So cost is one big problem the current AI has. Another is the ripping off of other people’s data. Yes, the new image generators by companies like OpenAI are cool, but they’re cool because they take art from human creators and use it as input. I guess it’s nice that some of these companies are now letting artists opt out, but it may already be too late for that.

Cost and theft are not the only problems. A third problem is garbage output. For example, this is an image generated by DALL-E, according to The Verge:

It’s garbage. DALL-E knows how to use visual elements of Vermeer without understanding anything about why Vermeer is great. As for ChatGPT, it easily turns into a bullshit generator, according to this good piece by Clive Thompson.

To summarize: bad input (stolen data), bad processing (expensive), bad output (bullshit and garbage). It all adds up, and not in a good way for the latest wunderkinds of AI.

But perhaps I am being too harsh. Perhaps these problems will be resolved. This piece leans in that direction. Perhaps Silicon Valley can make it work.

Or maybe we will have another AI winter. If you mix a recession in with the other three problems I mentioned, plus the overall decline in the reputation of Silicon Valley, a second wintry period is a possibility. Speaking just for myself, I would not mind.

The last AI winter swept away so much unnecessary tech (remember LISP machines?) and freed up lots of smart people to go on to work on other technologies, such as networking. The result was tremendous increases in the use of networks, leading to the common acceptance and use of the Internet and the Web. We’d be lucky to have such a repeat.

Hey Alexa, what will be the outcome?

UGC (user generated content) is a sucker’s game. We should resolve to be less suckers in 2023

I started to think of UGC when I read that tweet last night.

We don't talk about UGC much anymore. We take it for granted since it is so ubiquitous. Any time we use social media we are creating UGC. But it's not limited to sites like Twitter or Instagram. Websites like Behance and GitHub are also repositories of UGC. Even Google Docs and Spotify are ways for people to generate content (a spreadsheet is UGC for Google to mine, just like a playlist is).

When platforms came along for us to post our words and images, we embraced them. Even when we knew they were being exploited for advertising, many of us shrugged and accepted it as a deal: we get free platforms in exchange for our attention and content.

Recently, though, it's gotten more exploitative. Companies like OpenAI and others are scraping all our UGC from the web and turning it into data sets. Facial recognition software is turning our selfies into ways to track us. Never mind all the listening devices we let into our houses ("Hey Google, are you recording all my comings and goings?" Probably.)

Given that, we should resolve to be smarter about our UGC in 2023. Always consider what you are sharing, and find ways to limit it if you can. Indeed, give yourself some boundaries, so that when the next company with vowel problems comes along (looking at you, Trackt) and asks for your data, you can say no thanks.

We can’t stop companies from taking advantage of the things we share. So let’s aim to share things wisely and in a limited way.

It’s Sunday. Here are nine pieces to mull over this afternoon.

Sure you can make yourself busy on this warm summer weekend. Or you can chill for a bit and read one of these thoughtful pieces. I know which one I am going to do. 🙂

  1. Here’s a piece on the joy of Latin. Really.
  2. 100% this: The Case for Killing the Trolley Problem
  3. Worthwhile: Piketty on equality.
  4. This is a weak piece that tries to link AI to colonialism but fails to make the case:  AI colonialism.
  5. Do you have siblings? Read this:  How Your Siblings Can Make You Happier.
  6. Worth chewing on: The limits of forgiveness.
  7. On one of our oldest technologies: the importance of wood .
  8. Dive into this list of common misconceptions.
  9. Finally, this piece on  Alexa with the voice of dead people will get you thinking.

Computer memory isn’t our memory and AI isn’t our intelligence


Since the beginning of the digital age, we have referred to quickly retrievable computer storage as “memory”. It has some resemblance to memory, but it has none of the complexity of our memories and how they work. If you talked to most people about this, I don’t think there would be many who would think they are the same.

Artificial Intelligence isn’t our Intelligence, regardless of how good it gets. AI is going to have some resemblance to our intelligence, but it has none of the complexity of our intelligence and how it works. Yet you can talk to many who think that over time they will become the same.

I was thinking about that last week after the kerfuffle over the Google engineer who claimed their software was sentient. Many, many think pieces have been written about it; I think this one is the best I read from a layperson's perspective. If you are concerned about it or simply intrigued, read that. It's a bit harsh on Turing's test, but I think overall it's worth your time.

It is impressive what leaps information technology is making. But however much it resembles us as humans, it is not human. It will not have our memories. It will not have our intelligence. It will not have the qualities that make us human, any more than a scarecrow does.

Today in good robots: reforesting drones


I’m often critical of robots and their relatives here, but these particular drones seem very good indeed. As that linked article explains:

swarms of (these) seed-firing drones … are planting 40,000 trees a day to fight deforestation…(their) novel technology combines artificial intelligence with specially designed proprietary seed pods that can be fired into the ground from high in the sky. The firm claims that it performs 25 times faster and 80 percent cheaper compared to traditional seed-planting methodologies.

I am sure there is still a role for humans in reforestation, but the faster and cheaper it can be done, the better. A good use of technology.

You cannot learn anything from AI technology that makes moral judgements. Do this instead

Apparently…

Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.

What can I say? Well, for one thing, I am embarrassed for my profession that anyone takes that system seriously. It’s a joke. Anyone who has done any reading on ethics or morality can tell you very quickly that any moral decision of weight cannot be resolved with a formula. The Delphi system can’t make moral decisions. It’s like ELIZA: it could sound like a doctor but it couldn’t really help you with your mental health problem.

Too often people from IT blunder into a field, reduce its problems to something computational, produce a new system, and yell "Eureka!" The lack of humility is embarrassing.

What IT people should do is spend time reading and thinking about ethics and morality. If they did, they'd be better off. If you are one of those people, go to fivebooks.com and search for "ethics" or "moral". From those books you will learn something. You cannot learn anything from the Delphi system.

P.S. For more on that Delphi system, see: Can a Machine Learn Morality? – The New York Times.

(Photo by Gabriella Clare Marino on Unsplash )

On intelligence: in cells, in A.I., in us


This article on cells – yes, cells! – navigating mazes is fascinating and worth a read: Seeing around corners: Cells solve mazes and respond at a distance using attractant breakdown

After reading it I thought: I need to rethink "intelligence". Navigating mazes is something that was considered an intelligent act. Indeed, one of the early experiments in A.I. was in the 1950s, when Marvin Minsky developed a smart "rat" (see above) to make its way through a maze. (That's worth reading about as well.)

Seeing the cell navigate the maze, I thought: if the qualities we associate with intelligence are found at a cellular level, then I don’t really understand intelligence at all. It’s as if intelligence has an atomic level. As if intelligence is at all levels of life, not just the more complex levels.

Maybe the concept of intelligence is next to meaningless and needs to be replaced by something better. Read those pieces and think for yourself. After all, you are intelligent. 🙂

On Pepper and Watson


If you have even a passing knowledge of IT, you likely have heard of Pepper and Watson. Pepper was a robot and Watson was an AI system that won at Jeopardy. Last week the Verge and the New York Times had articles on them both:

  1. Go read how Pepper was a very bad robot – The Verge
  2. What Ever Happened to IBM’s Watson? – The New York Times

I don't have any specific insights into or conclusions about either technology, other than trite summations like "cutting edge technology is hard" and "don't believe the hype". AI and robotics are especially hard, so the risks are high and the chances of failure are high. That comes across in these two pieces.

Companies from Tesla to Boston Dynamics and more are making grand claims about their AI and their robotics. I suspect much of it will suffer the same fate as Pepper and Watson. Like all failure, none of it is final or fatal. People learn from their mistakes and move on to make better things. AI and robotics will continue to advance… just not at the pace many would like them to.

In the meantime, go read those articles.  Especially if you are finding yourself falling for the hype.

(Image: link of image on The Verge)

How to get more from your smart speakers


I am a fan of smart speakers, despite the privacy concerns around them. If you are ok with that and you have one or are planning to get one, read these two links to see how you can get more out of them:

  1. How to control Sonos with Google Assistant
  2. Alexa Skills That Are Actually Fun and Useful | WIRED

I use Google Assistant on my Sonos and they make a great device even better. And while I do have Google Home devices in other parts of the house, I tend to be around the Sonos most, so having it there to do more than just play music is a nice thing indeed.

Quote

Me: pandemic masks will make it hard to do face tracking. TensorFlow: we have face AND hand tracking


Yep. See here for more details:

Face and hand tracking in the browser with MediaPipe and TensorFlow.js — The TensorFlow Blog
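The linked post is about doing this in the browser with TensorFlow.js. As a rough taste of the same face-and-hand tracking idea, here is a minimal Python sketch using MediaPipe's solutions API; the webcam index and the choice of Python over JavaScript are my assumptions, not what the post describes.

# Minimal sketch: face and hand landmark tracking from a webcam with MediaPipe.
# Assumes: pip install mediapipe opencv-python, and a webcam at index 0.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2)
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB frames
    if hands.process(rgb).multi_hand_landmarks:
        print("hand(s) detected")
    if face_mesh.process(rgb).multi_face_landmarks:
        print("face detected")
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press q to quit
        break
cap.release()
cv2.destroyAllWindows()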

 

Quote

Two ways to get more out of your Sonos One

  1. How to control Sonos with Google Assistant – good if you like / use Google assistant
  2. Sonos speakers now work with IFTTT so you can automate your music – good if you are a fan of IFTTT, like I am

The Sonos One is a smart little speaker. Using Google Assistant and IFTTT.com make it even smarter.

Quote

If you are thinking of using chatbots in your work, read this


Chatbots are relatively straightforward to deploy these days. AI providers like IBM and others provide all the technology you need. But do you really need them? And if you already have a bunch of them deployed, are you doing it right? If these questions have you wondering, I recommend you read this: Does Your Company Really Need a Chatbot?

You still may want to proceed with chatbots: they make a lot of business sense for certain types of work. But you will have a better idea of when not to use them, too.
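To make the "do you really need one?" question concrete: a surprising amount of what people want from a chatbot is just routing a handful of common questions, which you can sketch in a few lines with no AI platform at all. The intents and canned answers below are made up for illustration.

# Minimal sketch: a keyword-based FAQ responder -- no chatbot platform required.
# The keywords and answers are hypothetical examples.
FAQ = {
    ("hours", "open", "close"): "We are open 9am-5pm, Monday to Friday.",
    ("refund", "return"): "Refunds are processed within 5 business days.",
    ("password", "reset", "login"): "Use the 'Forgot password' link on the sign-in page.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in FAQ.items():
        if any(word in text for word in keywords):
            return answer
    return "Sorry, I don't know that one. Let me connect you with a person."

print(reply("How do I reset my password?"))   # -> the password answer
print(reply("What time do you close?"))       # -> the hours answer

If most of your traffic can be answered this way, a full chatbot platform may be overkill; if it can't, that's exactly when the questions in the linked piece start to matter.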

Quote

What are some of the flaws with facial recognition software?

What are some of the flaws with facial recognition software? Too many for me just to list. Instead, read this article to get a sense of how bad this software can be.

San Francisco is in the vanguard of trying to rein in this technology. Let’s hope more jurisdictions do the same.

Quote

How machine learning (ML) is different from artificial intelligence (AI)

I am glad to see more articles highlighting the difference between ML and AI. For example, this one: How machine learning is different from artificial intelligence – IBM Developer.

There is still lots to be done in the field of machine learning, but I think technologists and scientists need to break out of that tight circle and explore AI in general.

(Image: from the article)

Quote

Isn’t machine learning (ML) and Artificial Intelligence (AI) the same thing?


Nope. And this piece, Machine Learning Vs. Artificial Intelligence: How Are They Different?, does a nice job of reviewing them at a non-technical level. At the end, you should see the differences.

(The image, via g2crowd.com, also shows this nicely).

Quote

It’s Monday morning: are robots going to replace you at your job?

Possibly, but as this article argues, there are at least three areas where robots suck:

Creative endeavours: These include creative writing, entrepreneurship, and scientific discovery. These can be highly paid and rewarding jobs. There is no better time to be an entrepreneur with an insight than today, because you can use technology to leverage your invention.

Social interactions: Robots do not have the kinds of emotional intelligence that humans have. Motivated people who are sensitive to the needs of others make great managers, leaders, salespeople, negotiators, caretakers, nurses, and teachers. Consider, for example, the idea of a robot giving a half-time pep talk to a high school football team. That would not be inspiring. Recent research makes clear that social skills are increasingly in demand.

Physical dexterity and mobility: If you have ever seen a robot try to pick up a pencil you see how clumsy and slow they are, compared to a human child. Humans have millennia of experience hiking mountains, swimming lakes, and dancing—practice that gives them extraordinary agility and physical dexterity.

Read the entire article; there’s much more in it than that. But if your job has some element of those three qualities, chances are robots won’t be replacing you soon.

What I find interesting in tech, November 2017


Here’s an assortment of 42 links covering everything from Kubernetes to GCP and other cloud platforms to IoT to Machine Learning and AI to all sorts of other things. Enjoy! (Image from the last link)

  1. Prometheus Kubernetes | Up and Running with CoreOS, Prometheus and Kubernetes: Deploying, Kubernetes monitoring with Prometheus in 15 minutes – some good links on using Prometheus here
  2. Deploying a containerized web application  |  Container Engine Documentation  |  Google Cloud Platform – a good intro to using GCP
  3. How to classify workloads for cloud migration and decide on a deployment model – Cloud computing news – great insights for any IT Architects
  4. IP Address Locator – Where is this IP Address? – a handy tool, especially if you are browsing firewall logs
  5. Find a Google Glass and kick it from the network, Detect and disconnect WiFi cameras in that AirBnB you're staying in – Good examples of how to catch spying devices
  6. The sad graph of software death – a great study on technical debt
  7. OpenTechSchool – Websites with Python Flask – get started building simple web sites using Python
  8. Build Your Own “Smart Mirror” with a Two-Way Mirror and an Android Device – this was something I wanted to do at some point
  9. Agile for Everybody: Why, How, Prototype, Iterate – On Human-Centric Systems – Medium – Helpful for those new or confused by Agile
  10. iOS App Development with Swift | Coursera – For Swift newbies
  11. Why A Cloud Guru Runs Serverless on AWS | ProgrammableWeb – If you are interested in serverless, this is helpful
  12. Moving tech forward with Gomix, Express, and Google Spreadsheets | MattStauffer.com – using spreadsheets as a database. Good for some
  13. A Docker Tutorial for Beginners – More Docker 101.
  14. What is DevOps? Think, Code, Deploy, Run, Manage, Learn – IBM Cloud Blog – DevOps 101
  15. Learning Machine Learning | Tutorials and resources for machine learning and data analysis enthusiasts – Lots of good ML links
  16. Importing Data into Maps  |  Google Maps JavaScript API  |  Google Developers – A fine introduction into doing this
  17. Machine learning online course: I just coded my first AI algorithm, and oh boy, it felt good — Quartz – More ML
  18. New Wireless Tech Will Free Us From the Tyranny of Carriers | WIRED – This is typical Wired hype, but interesting
  19. How a DIY Network Plans to Subvert Time Warner Cable’s NYC Internet Monopoly – Motherboard – related to the link above
  20. Building MirrorMirror – more on IT mirrors
  21. Minecraft and Bluemix, Part 1: Running Minecraft servers within Docker – fun!
  22. The 5 Most Infamous Software Bugs in History – OpenMind – also fun!
  23. The code that took America to the moon was just published to GitHub, and it’s like a 1960s time capsule — Quartz – more fun stuff. Don’t submit pull requests 🙂
  24. The 10 Algorithms Machine Learning Engineers Need to Know – More helpful ML articles
  25. User Authentication with the MEAN Stack — SitePoint – if you need authentication, read this…
  26. Easy Node Authentication: Setup and Local ― Scotch – .. or this
  27. 3 Small Tweaks to make Apache fly | Jeff Geerling – Apache users, take note
  28. A Small Collection of NodeMCU Lua Scripts – Limpkin’s blog – Good for ESP users
  29. Facebook OCP project caused Apple networking team to quit – Business Insider – Interesting, though I doubt Cisco is worried
  30. Hacked Cameras, DVRs Powered Today’s Massive Internet Outage — Krebs on Security – more on how IoT is bad
  31. Learn to Code and Help Nonprofits | freeCodeCamp – I want to do this
  32. A Simple and Cheap Dark-Detecting LED Circuit | Evil Mad Scientist Laboratories – a fun hack
  33. Hackers compromised free CCleaner software, Avast’s Piriform says | Article [AMP] | Reuters – this is sad, since CCleaner is a great tool
  34. Is AI Riding a One-Trick Pony? – MIT Technology Review – I believe it is and if AI proponents are not smart they will run into another AI winter.
  35. I built a serverless Telegram bot over the weekend. Here’s what I learned. – Bot developers might like this.
  36. Google’s compelling smartphone pitch – Pixel 2 first impressions | IT World Canada News – The Pixel 2 looks good. If you are interested, check this out
  37. Neural networks and deep learning – more ML
  38. These 60 dumb passwords can hijack over 500,000 IoT devices into the Mirai botnet – more bad IoT
  39. If AWS is serious about Kubernetes, here’s what it must do | InfoWorld – good read
  40. 5 Ways to Troll Your Neural Network | Math with Bad Drawings – interesting
  41. IBM, Docker grow partnership to drive container adoption across public cloud – TechRepublic – makes sense
  42.  Modern JavaScript Explained For Dinosaurs – fun

The home speaker / AI market heats up as Sonos makes advances

Sonos One

WIRED has a good review of the latest product from Sonos, here: Sonos One Review: Amazon’s Alexa Is Here, But It Still Has Some Growing Up to Do

What makes this development significant to me is that it signals that Sonos is concerned about Apple and others coming in and taking away market share. Sonos has a great line of products already, but Apple is threatening to take a piece of that with their new home speaker with Siri/AI capability. Sonos has beefed up their AI capability to meet the challenge.

I expect that the next big thing in IT will be the vocal interface tied in with a speaker system in some form. I expect we will see them everywhere. Perhaps not for extended communication, but for brief and frequent requests.

If you are an IT person, I recommend you learn more about chatbot technology and how it will integrate with the work you are doing. More and more users will want to be able to communicate with your systems using voice. You need to provide a vocal interface for them to get and send information.
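To get a feel for what a vocal interface involves, here's a minimal Python sketch using the SpeechRecognition library: capture one spoken request, turn it into text, and hand it to your system. The library, Google's free web recognizer, and the placeholder handler are my choices for illustration, not a production recommendation.

# Minimal sketch: one spoken request in, one answer out.
# Assumes: pip install SpeechRecognition pyaudio, and a working microphone.
import speech_recognition as sr

def handle_request(text: str) -> str:
    # Placeholder for your system's logic: look up an order, report a status, etc.
    return f"You asked: {text}"

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)   # free web API, fine for experiments
    print(handle_request(text))
except sr.UnknownValueError:
    print("Sorry, I didn't catch that.")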

Most homes will have one device acting as an aural hub. Sonos wants to make sure it is one they make, and not Apple.