Blogging in the AI era

Traffic surge due to AI

Last week I wrote about AI and food blogging. After I wrote that, I got to thinking about my own blog. Above you see my daily traffic, with a big spike happening at the beginning of December. I looked at what visitors were looking at on my blog then: I could see someone looking at a bunch of links beginning with “A”, then beginning with “B”, then “C”, etc. It’s not something a user would do, but it is something a bot would do. I am assuming it is somehow AI related.

It made me wonder why I am still blogging if people are just going to bypass my blog and read it via AI. In some ways I don’t mind: if someone finds my content useful via AI versus my own blog, then it doesn’t matter to me. In other ways I do mind: I keep up a discipline of posting at least twice a week, as much as anything to keep the blog a living thing, but if AI is going to kill off my traffic, then what is the point of maintaining that discipline? Likewise, if the traffic I am getting is due to AI, then maybe it doesn’t matter whether I keep posting, since my site will still be getting traffic. It’s a pickle, I think.

I think I will still post because I choose and want to post, but the days of posting to help maintain interest in the blog may be over. Something to consider as I go into the new year.

On the rise of AI, and the decline of food blogging (and potentially all user generated content)

Food bloggers are seeing a drop in traffic as home cooks turn to AI, according to Bloomberg:

Across the internet, writers say their vetted recipes are hidden by the flood. Pinterest feeds are stuffed with AI-generated images of food that the attached instructions won’t achieve; Google’s AI Overviews surface error-filled cooking steps that siphon away clicks from professionals. Meanwhile, Facebook content farms push AI-generated images of supposedly delicious but impossible dishes to the top of people’s feeds, in an attempt to turn any clicks into ad revenue.

How bad can it be? To give just one example (from Fortune):

For Carrie Forrest, who runs Clean Eating Kitchen, AI has been devastating: 80% of her traffic — and her revenue — has disappeared in two years. Although the views started dropping when OpenAI’s ChatGPT was released, it wasn’t until Google launched AI Mode in search that her traffic collapsed, she said. Since then, she’s gone from employing about ten people to letting everyone go. “I’m going to have to find something else to do.”

My thoughts on this:

  • There were already several problems in the food blogging world, like too many people trying to make a go of it by writing about cooking. Not to mention the existing problem of plagiarism within the world of food blogging. Now there is a new plagiarism problem, with AI essentially copying recipes and putting them forward as another source of cooking information.
  • AI plagiarism is not limited to food blogging. I’ve seen the same problem within the IT domain, with vapid sites posting “solutions” to technical problems that are a waste of time.
  • For people and organizations that generate content hoping to get people to read it, AI will squeeze them from both sides. Human generated content will be squeezed out by AI generated content. And with things like Google Zero, search engines will use AI to summarize that content, so that the person searching for information will not go to the next step of clicking through a link to get to another site.
  • Like the person who runs the site Clean Eating Kitchen, I suspect many smaller sites will dry up as a source of content. I think you will then have these aggregators (either Google or large AI generated content sites) plus other quality sites that can still compete, be they cooking.nytimes.com or marthastewart.com or even specialized sites like alisoneroman.com. This is not unlike Travel or Hotel sites that can still maintain a presence even though search engines like Google will scrape prices and present them.
  • Eventually, like all things in IT, there will be a continuing swing towards centralization, until the classic problems of centralization occur. Then you will have niche sites breaking out again and things will decentralize. Until then, the rise of AI both on the front end (Google) and the back end (AI generated sites) will force out these mid-tier sites like Clean Eating Kitchen.
  • I am not immune to this problem. I see this site being scraped on the regular, presumably by AI. The good news for me is I write this site as much for me as for anyone else. I am happy if I can post things that help people. However, if some AI site presents the information and it helps people, I don’t care that they didn’t get it directly from me. I still benefit from writing these posts. If I were paying my bills with this site, it would be a problem. But this site is just a hobby for me.

(Image from Pixabay, which appears to be AI generated. Pixabay used to be a good source of imagery, but it has been overrun by AI generated images.  I found this by looking for “ai generated cooking” images.)

On my recent vibe coding experiences building a web site and a Spotify app using Copilot and Claude, October 2025. Here’s what I did and what I learned.


I recently took a vibe coding approach to try and do two things:

  1. build a simple web site / blog using Microsoft Copilot
  2. write an app to extract information from Spotify using Claude from Anthropic.

Why? For some time I had these two projects in mind:

  1. Go back to the days when a blog — web log — was nothing more than a simple html page.
  2. Programmatically build Spotify music playlists instead of building them in the Spotify app.

My main constraint was to do it quickly: it was meant to be fun. So no spending all weekend getting up to speed on APIs and HTML and CSS: just see what I could do aided by A.I.

First up, to build the web site, I started with Microsoft’s A.I. Since I had some requirements of what I wanted the web log to look like, I told them to Copilot and had it build me a one-page blog web site. It helps to be clear on your requirements, but I found that I only needed a few of them to start with. As I went along, new requirements would come to me (e.g. the ability to add photos from the Internet) and I would tell Copilot to incorporate these new requirements and give me a new web site. My experience vibe coding is that there is a lot of back and forth in order to be effective. As well, there were things that I could just do by hand, like add a background tile and change the picture on the header, so eventually I bailed on using Copilot and finished it by hand. You can see the result here. It’s just what I wanted.

What made things even better was that I asked Copilot to write me a python program which allows me to easily add to the blog and then push it to AWS. That was a great addition. Now I can just enter a line on the command line and the blog gets updated.
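To give a sense of what such a helper can look like, here is a minimal sketch, and not the actual script Copilot wrote for me: it assumes the blog is a single index.html served from an S3 bucket, and the bucket name, file name and the ENTRIES marker are all made-up examples.

import sys
import boto3

BUCKET = "my-blog-bucket"   # hypothetical bucket name
BLOG_FILE = "index.html"    # the one-page blog

def add_entry(text):
    """Insert a new entry where the page has an <!-- ENTRIES --> marker."""
    with open(BLOG_FILE, encoding="utf-8") as f:
        page = f.read()
    entry = "<article><p>" + text + "</p></article>\n"
    page = page.replace("<!-- ENTRIES -->", "<!-- ENTRIES -->\n" + entry, 1)
    with open(BLOG_FILE, "w", encoding="utf-8") as f:
        f.write(page)

def publish():
    """Upload the updated page to the S3 bucket that serves the blog."""
    s3 = boto3.client("s3")
    s3.upload_file(BLOG_FILE, BUCKET, BLOG_FILE, ExtraArgs={"ContentType": "text/html"})

if __name__ == "__main__":
    add_entry(" ".join(sys.argv[1:]))
    publish()

Run something like python post.py "Fixed the header image today" and the page is updated and pushed.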

All in all a very successful project.

As for my second project with Spotify, I switched from Microsoft to Anthropic. At first Claude produced great code: I asked it to build me a UI that allowed me to type in the names of three songs, use Spotify to build a playlist around those songs, and lo and behold it did. Things went downhill from there. Much of the code, while long, had numerous errors. I would provide the errors to Claude and it would correct things. The code did get better, but after 30 versions, it was time to call it quits. Instead I took small chunks of the code and, using VS Code, manually tried to determine why it was not working. I was able to ultimately nail it down to one Spotify API call. And why wasn’t it working? Because Spotify disabled access to it in 2024. Did Claude know that? I don’t know. It certainly didn’t act like it.
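For what it’s worth, here is a minimal sketch of that approach using the spotipy library, not Claude’s code. The song names and scope are made-up examples, it assumes your Spotify credentials are set in the environment, and I use the recommendations endpoint purely for illustration; it is one of the calls Spotify has restricted access to for newer apps, which is exactly the kind of dead end described above.

import spotipy
from spotipy.oauth2 import SpotifyOAuth

# Assumes SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET and SPOTIPY_REDIRECT_URI are set
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-modify-private"))

seed_songs = ["Song One", "Song Two", "Song Three"]   # hypothetical inputs
seed_ids = []
for name in seed_songs:
    result = sp.search(q=name, type="track", limit=1)
    seed_ids.append(result["tracks"]["items"][0]["id"])

# Ask for tracks built around the three seeds.
# Note: Spotify has restricted access to this endpoint for newer apps,
# so this call is where a script like this can simply stop working.
recs = sp.recommendations(seed_tracks=seed_ids, limit=20)

# Save the result as a new private playlist
me = sp.current_user()
playlist = sp.user_playlist_create(me["id"], "Built around three songs", public=False)
sp.playlist_add_items(playlist["id"], [t["id"] for t in recs["tracks"]])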

All in all a very unsuccessful project.

What did I learn from this? What would I recommend to you?

  • I have been most successful vibe coding when I get the AI to code in small chunks. Give it small requirements and see if it can successfully do them. Then build up the chunks. That was the case with Copilot. With Claude I took the big bang approach, and then spent lots of time debugging. Eventually, to discover the problem, I went to the small-chunk approach manually.
  • A.I. is great for grunt level coding. Writing python code to loop through input and extract data under complicated conditions is something I hate to do. A.I. does it better and quicker than me. Again, it’s like using a mixer in the kitchen instead of your arm. It’s impressive to do things with your arm, but the mixer is fine.
  • A.I. is great for fussy coding. One thing I like about coding HTML and CSS using A.I. is that I do not have to remember how to structure the divs or which CSS I need for certain colors, etc. I just tell the A.I. and it does it.
  • A.I. has replaced my templates. I used to have a fair amount of code templates, and when I would start a project, I would get out a good template. When I didn’t have a template, I would oftentimes spend hours going through old code trying to find an example to use as a template. Now I just get A.I. to give it to me.
  • Know when to bail on using A.I. and start doing the work yourself. I think of A.I. as a power tool: it lets you do things fast, but for the detail work, you need to get in there with the hand tools and do things manually.
  • Make LOTS of backups. Backup your prompts too if you can. I have gone down a certain path in vibe coding, forgotten to do a backup, and it’s been a mess. As well, at times the A.I. will start to produce bad code. If you version control things, you can go back to a copy from an hour ago that did work and start again.
  • Most LLMs do a pretty good job of coding. I’d recommend Copilot because it is easy: it’s integrated into my Microsoft tools. The results from Claude were good too. I suspect as things advance, the code that comes out of all of them will get better and better.
  • I am not afraid of forgetting how to program in python or html any more than I was afraid of forgetting how to program in assembler when I moved on to C. Or using SQL to work with data rather than hand coding PL/1 to do things. Or using Java and JDBC. The goal for me is to get my results via some form of coding, and if I can achieve that faster with a higher level of code combined with greater abstraction, I am fine with that.
  • The better you already are at coding, the better your chances of success. I have never had A.I. completely build my code. I get to 80-90%, then do the rest by hand. I am fine with that: I literally save hours every time I do this vs my old approach of using templates and old source code. If you have to depend on A.I. to do 100% of the coding, I suspect you will have more chances of failure. Also, if the code runs successfully with some inputs but fails with other inputs, having debugging skills will make a difference.
  • YMMV. These are my experiences based on these projects. Your experience using A.I. to assist with coding your project may be wildly different than mine. I do hope you have a good/better experience.

Thanks for reading this. I hope it helps.

P.S. When I talk of vibe coding, I am using the definition used by my current employer. The opinions expressed above are mine only and not necessarily those of my employer.

 

How generative AI works (it’s not designed, at least the way you think)

How generative AI works is likely not how you think it works.

What led me to state this were two discussions I’ve had this week about the “design” of gen AI. I think the “design” conclusion that people come up with is based on emergent behaviours of the system. You can shape this behaviour in a number of ways, based on the data you feed the system or the ways you configure the software being trained. However, at best you are influencing the behaviour of the system, rather than designing the behaviour of the system.

In some ways it’s like taking a bucket of seeds and tossing them around a large area of a field. If you took only seeds of one or two flowers and distributed the seeds so that only these one or two flowers grew there, you could say you designed the garden to grow these flowers. Likewise, if you divided up the land into rows and planted one type of seed in each row, you can say you designed the garden. However, if you have a wide range of seeds included in your bucket and you don’t target the planting of the seeds but just toss them onto the ground, it will no longer be considered designed.

That’s why I think gen AI is not really designed. It’s a lot like a big bucket of random seeds not planted in any order. What design you see there is likely how you look at it.

P.S. If you want to explore more on how gen AI works, see this. For a great example of how a gen AI system is built from the ground up, see this.

 

Some thoughts on using chatGPT to write a program to determine which foods are fresh in Ontario

It is easy to find out which foods are fresh in Ontario. There are several guides, including this one from Foodland Ontario, that can help you with this. However, I wanted a particular guide that would list for me all the fresh foods for a given month only. And since I couldn’t find a guide like that, I decided to write a python program to make such a guide.

In the past, to write a program like that, I would go through sample code I have, pull out bits of code that were good, and cobble together something from all these bits. Lately, though, I simply ask a service like ChatGPT or others to write the code for me. I find nowadays it’s just so much faster to go that route. Call me lazy. 🙂

Since I wanted this done quickly, I pointed chatGPT at the Foodland Ontario guide and asked it to do the following:

Write a python program that strips out the text on this page https://www.ontario.ca/foodland/page/availability-guide?gad_campaignid=22412095602
and leaves just the foods and the month they are grown on. Include all food that states that is Year Round.

Did ChatGPT do that? Yes, it did. Was the program any good? No, it was not! It somehow looked at that web page and decided the values were stored in a table, even though they were not. The web page is more complex than that, and so the program was a pretty big failure.

After many prompts, I gave up and took an alternative approach. For this new approach, I stripped out the data manually and created a simple CSV file. I then asked ChatGPT to write a program to process the CSV file. Since it is a simpler file, ChatGPT was able to produce a workable Python program that was able to process the CSV file and output the information I needed.
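To give a sense of what that kind of program looks like, here is a minimal sketch of a CSV-processing script. It assumes a hypothetical file like foods.csv with columns food and months (for example “Asparagus,May;June” or “Apples,Year Round”); the actual program in the repo linked below may well differ.

import csv
import sys

def foods_for_month(csv_path, month):
    """Return the foods listed as fresh in the given month or year round."""
    matches = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            months = row["months"].lower()
            if month.lower() in months or "year round" in months:
                matches.append(row["food"])
    return sorted(matches)

if __name__ == "__main__":
    # Usage: python fresh.py foods.csv May
    for food in foods_for_month(sys.argv[1], sys.argv[2]):
        print(food)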

Perhaps a more skilled prompt engineer could have written a better prompt and gotten working code from the original page. I dunno. I am finding that LLMs — not just ChatGPT — are fine with writing some straightforward code based on non-complex inputs and outputs. They are not so fine once the inputs or outputs get complex. But that’s just my experience. YMMV.

I have also concluded that even warmer months like May in Ontario do not have much in the way of fresh food. No wonder there are so many food stories on asparagus and rhubarb! 🙂 You really need to hit June or later before you get into a cornucopia of fresh produce.

If you’d like to see the end result of my coding, go here to this repo: https://github.com/blm849/ontfoods

 

On how generative AI is an accelerant and how it compares to PCs and the Web

 

People have many perspectives on generative AI. On Bluesky in particular, it’s perceived negatively. They see it as a huge drain on the environment. They see the people who develop it as IP thieves. They see it as taking away jobs.

For people who think this is the only way generative AI can be, I’d like to point them to the work my employer is doing with AI and the AI ethical guidelines they’ve published here.

Generative AI can be seen in a positive way. My opinion (not speaking for my employer) is that as the tools that sit in front of gen AI get better and the models that underlie gen AI improve, we all will use it every day, in the same way we use search engines and spreadsheets every day.

I’d add that gen AI technology can be considered an accelerant. In any given social order, some participants will choose to adopt an accelerant and disrupt that order by speeding past others. It could be high skilled or low skilled participants. Those who value the current order and their place in it will try to prevent that from happening but likely will fail. This happened with previous accelerants like personal computers and the Web. People who were invested in the order before PCs and the Web were disrupted by those who adopted and exploited the capabilities of the accelerants. (Not all accelerants are technological: literacy, voting rights and access to financial services are also accelerants. I just feel more confident talking about comp sci vs poli sci.) I think this will be true for generative AI. Back in the 80s I thought that individuals and companies that invested in personal computers would leapfrog individuals and companies that ignored PCs. That turned out to be true, just as it was true for individuals and companies that embraced the Web. I think the same will hold for generative AI.

So don’t be like Linda: learn more about gen AI and do not confuse it with A1 sauce. 🙂 If it can help, I wrote a guide on it recently that could be worth your while to check out.

P.S. For anyone wondering, this post is my own and doesn’t necessarily represent IBM’s positions, strategies or opinions. For more on that, see IBM’s social media policy, which as an employee I follow.

P.P.S. I think if you are going to be speaking on AI as the Secretary of Education, you should at least know how to say it.

A guide to generative AI and LLM (large language models), February 2025


I decided to go through all my posts on AI and pull out information that would be useful to anyone wanting to learn more about generative AI (often referred to as gen AI or genAI) and the LLMs they run. If you have used chatGPT, you have used genAI. But there’s much more to the technology than what you find on that site. To see what I mean, click on any of the blue underlined text and you will be taken to a site talking about something to do with gen AI.

Enjoy!

Tutorials/Introductions: for people just getting started with gen AI, I found these links useful: how generative AI works, what is generative AI, how LLMs work, sentence word embeddings which kinda shows how LLMs work, best practices for prompt engineering with openai api, a beginners guide to tokens, a chatGPT cheat sheet, demystifying tokens: a beginners guide to understanding AI building blocks, what are tokens and how to count them, how to build an llm rag pipeline with llama 2 pgvector and llamaindex, and finally this: azure search openai demo. (Some of these are introductory for technical people – don’t worry if you don’t understand all of them.)

For people who are comfortable with github, this is a really good repo / course on generative AI for beginners (and check out these other repositories here, too). This here is on the importance of responsible AI. And a step by step guide to using generative AI in your business is here.

Prompts and Prompt Engineering: if you want some guidance on how best to write prompts as you work with gen AI, I recommend this, this, this, this, this, this, this, and this.

Finally: here are the associated press AI guidelines for journalists. Here’s a piece on how the Globe and Mail is using AI in the newsroom. Here’s a how-to on using AI for photo editing. Also, here’s some advice on writing better ChatGPT prompts. How Kevin Kelly is using AI as an intern, as told to Austin Kleon. And a good guide on how to use AI to do practical stuff.

Note: AI (artificial intelligence) is a big field incorporating everything from vision recognition to game playing to machine learning and more. Generative AI is a part of that field. However nowadays when we talk of AI people usually mean gen AI. A few years ago it was machine learning and before that it was expert systems. Just something to keep in mind as you learn more about AI and gen AI in particular.

 

From RAG to riches (What I find interesting in AI, Dec. 2024)


Here’s my latest post on AI with 60+ worthwhile links found since my last post on it in June. It comes in three sections: the first section is about AI technology, the middle section is on AI in the news and the oped pages, and at the end there’s a good but odd bunch worth a look. Enjoy!

Technology – LLMs:

Technology – Ollama / Flowise:

Technology – RAG:

Technology – watsonx.ai:

AI in the news / oped sections:

Last but not least:

Forget ChatGPT. Now you can build your own large language model (LLM) from scratch

Yep, it’s true. If you have some technical skill, you can download this repo from github: rasbt/LLMs-from-scratch: Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step and build your own LLM.

What I like about this is that it demystifies LLMs. LLMs aren’t magic, they aren’t Skynet and they’re not some sentient being. They’re software. That’s all.

So ignore all the hype and handwaving about LLMs and go make your own.

Prefer to read it in dead tree form? You can get the book here.

Philippe Starck reaches for a new design tool: A.I.


According to the site, Design-Milk, Philippe Starck is working with Kartell to use A.I. in the design of furniture:

Known as a progressive Italian brand that doesn’t shy away from experimentation and innovation, Kartell is becoming increasingly daring as their endeavors expand. At the moment, that means stepping into the world of artificial intelligence. In collaboration with Philippe Starck, they’ve released environmentally friendly furniture designs that merge human creativity with AI, bolstered by advancements in materials and manufacturing.

In case you are worried about them replacing designers, the site goes on to say:

Rather than replace the designer – a huge fear amongst creatives – artificial intelligence helps streamline the prototyping and planning phases, sticking to Starck’s vision and ensuring Kartell’s high quality and production standards are met. In this case, AI also contributes to calculating optimal material usage, a reduction in waste, and ultimately environmental sustainability.

Good to see! Speaking of good to see, you can see more of the furniture on the Design-Milk site, as well as the Kartell site.

AI: from the era of talking to the era of doing

AI a year ago was mostly talk about AI. AI today is about what to do with the technology.

There are still good things being said about AI. This in depth piece by Navneet Alang here in the Walrus was the best writing on AI that I’ve read in a long time. And this New York Times piece on the new trend of AI slop got me thinking too. But for the most part I’ve stopped reading pieces on what does AI mean, or gossip pieces on OpenAI.

Instead I’ve been focused on what I can do with AI. Most of the links that follow reflect that.

Tutorials/Introductions: for people just getting started with gen AI, I found these links useful: how generative AI works, what is generative AI, how LLMs work, best practices for prompt engineering with openai api, a beginners guide to tokens, a chatGPT cheat sheet, what are generative adversarial networks gans, demystifying tokens: a beginners guide to understanding AI building blocks, what are tokens and how to count them, how to build an llm rag pipeline with llama 2 pgvector and llamaindex, and finally this: azure search openai demo.

Software/Ollama: Ollama is a great tool for experimenting with LLMs. I recommend it to anyone wanting to do more hands-on work with AI. Here’s where you can get it. This will help you with how to set up and run a local llm with ollama and llama 2. Also this: how to run llms locally on your laptop using ollama. If you want to run it in Docker, read this. Read this if you want to know where Ollama stores its models. Read this if you want to customize a model. If you need to uninstall Ollama manually, you want this.
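To show how little code it takes once Ollama is installed, here is a minimal sketch using the ollama Python library; the model name and question are just examples, and it assumes you have already pulled the model and the Ollama server is running.

import ollama

# Ask a locally running model a question.
response = ollama.chat(
    model="llama2",   # hypothetical choice; use whatever model you have pulled
    messages=[{"role": "user", "content": "In one sentence, what is a token?"}],
)
print(response["message"]["content"])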

Software/RAG: I tried to get started with RAG fusion here and was frustrated. Fortunately my manager recommended a much better and easier way to get working with RAG by using this no-code/low-code tool, Flowise. Here’s a guide to getting started with it.
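If you want to see the basic idea behind RAG in code rather than in a no-code tool, here is a minimal sketch of the retrieval step using the sentence-transformers library. The documents, question and model name are made-up examples, and I stop at assembling the prompt you would hand to an LLM.

import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Ollama lets you run large language models on your own machine.",
    "Flowise is a low-code tool for building LLM flows.",
    "RAG retrieves relevant documents and adds them to the prompt.",
]
question = "What does RAG do?"

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(documents, normalize_embeddings=True)
q_vec = model.encode([question], normalize_embeddings=True)[0]

# With normalized vectors, a dot product is the cosine similarity
scores = doc_vecs @ q_vec
best = documents[int(np.argmax(scores))]

prompt = "Answer using this context:\n" + best + "\n\nQuestion: " + question
print(prompt)   # this is what you would send to the LLM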

Meanwhile, if you want more pieces on RAG, go here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here. I know: it’s a lot. But I found all of those useful, and yes, each “here” takes you to a different link.

Software/embedding: if you are interested in the above topics, you may want to learn more about vector databases and embeddings. Here are four good links on that: one, two, three, four.

Software/models: relatedly, here are four good links on models (mostly mixtral, which I like a lot): mixtral, dolphin 25 mixtral 8x7b, dolphin 2 5 mixtral 8x7b uncensored mistral, Mistral 7B Instruct v0.2 GGUF, plus a comparison of models.

Software/OpenAI: while it is great to use Ollama for your LLM work, you may want to do work with a SaaS like OpenAI. I found that when I was doing that, these links came in handy: how OpenAI’s billing works, info on your OpenAI  api keys, how to get an OpenAI key, what are tokens and how to count them, more on tokens, and learn OpenAI on Azure.
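Since tokens come up so often in OpenAI’s billing, here is a minimal sketch of counting them with the tiktoken library that OpenAI publishes; the encoding name and sample sentence are just examples.

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # encoding used by many recent OpenAI models
text = "Tokens are the chunks of text a language model actually sees."
tokens = enc.encode(text)
print(len(tokens), "tokens")
print(tokens[:10])   # the first few token ids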

Software/Sagemaker: here’s some useful links on AWS’s Sagemaker, including pieces on what is amazon sagemaker, a tutorial on it, how to get started with this quick Amazon SageMaker Autopilot, some amazon sagemaker examples , a number of pieces on sagemaker notebooks such as creating a sagemaker notebook, a notebooks comparison, something on distributed training notebook examples and finally this could be helpful: how to deploy llama 2 on aws sagemaker.

Software in general: these didn’t fit any specific software category, but I liked them. There’s something on python and GANs, on autogen, on FLAML, on a python vector search tutorial with gpt4, and finally how to use ai to build your own website!

Prompt Engineering: if you want some guidance on how best to write prompts as you work with gen AI, I recommend this, this, this, this, this, this, this, and this.

IT Companies: companies everywhere are investing in AI. Here’s some pieces on what Apple, IBM, Microsoft and…IKEA…are doing:

Apple: the Microsoft Copilot app is available for the iPhone and iPad.

IBM: here are pieces on ibm databand with self learning for anomaly detection; IBM and AI and the EI; IBM’s Granite LLM; WatsonX on AWS; installing watsonX; watsonx-code-assistant-4z; IBM Announces Availability of Open Source Mistral AI Model on watsonx; IBM’s criteria for adopting gen AI; probable root cause accelerating incident remediation with causal AI; Watsonx on Azure; Watsonx and litellm; and conversational ai use cases for enterprises.

IKEA:  here’s something on the IKEA ai assistant using chatgpt for home design.

Microsoft: from vision to value realization – a closer look at how customers are embracing ai transformation to unlock innovation and deliver business outcomes, plus an OpenAI reference.

Hardware: I tend to think of AI in terms of software, but I found these fun hardware links too. Links such as: how to run chatgpt on raspberry pi; how this maker uses raspberry pi and ai to block noisy neighbors music by hacking nearby bluetooth speakers; raspberry pi smart fridge uses chat gpt4 to keep track of your food. Here’s something on the rabbit r1 ai assistant. Here’s the poem 1 AI poetry clock which is cool.

AI and the arts: AI continues to impact the arts in ways good and bad. For instance, here’s something on how to generate free ai music with suno. Relatedly here’s a piece on gen ai, suno music, the music industry, musicians and copyright. This is a good piece on artists and AI in the Times. Also good: art that can be easily copied by AI is meaningless, says Ai Weiwei. Over at the Washington Post is something on AI image generation. In the battle with AI, here’s how artists can use glaze and nightshade to stop ai from stealing your art. Regarding fakes, here’s a piece on Taylor Swift and ai generated fake images. Speaking of fake, here’s something on AI and the porn industry. There’s also this piece on generative ai and copyright violation.

Finally: I was looking into the original Eliza recently and thought these four links on it were good: one, two, three and four. Then there’s these stories: on AI to help seniors with loneliness, the new york times / openai / microsoft lawsuit, another AI lawsuit involving air canada’s chatbot, stunt AI (a bot develops software in 7 minutes instead of 4 weeks), and a really good AI hub: chathub.gg.

Whew! That’s a tremendous amount of research I’ve done on AI in the last year. I hope you find some of it useful.

Instacart recipes have a secret ingredient: AI. What cooks should know.

One convenient feature of Instacart is a listing of “Related recipes” it will provide you if you are searching for a product or ingredient. For instance, I was searching for “biscuits” and one of the recipes that appeared was no-milk biscuits. Hmmmm, that could be good and simple, I thought. Seemed reasonable too. Then I scrolled down to the bottom of the recipe and came across the text: this recipe is powered by the magic of AI.

First off, let me say: AI is not magic. Second, this COULD be a good recipe. Or it could be something where AI hallucinated ingredients or quantities. For this recipe, it is somewhat close to this recipe for dairy free biscuits at allRecipes.com, but it is missing some ingredients and the quantities of some of the ingredients are different. I searched other recipes at other sites, and while it is close to them, it seems…lacking…in comparison.

Simply put: you would be better off getting recipes from either reputable cookbooks or websites where recipes are tested and confirmed. It is convenient to use the recipes in places like Instacart, because you can easily add the ingredients of the recipe to your shopping cart, but the results from the recipes generated by AI could leave you very disappointed. Especially when it comes to baking!

P.S. Not to pick just on Instacart: I suspect we are going to see more and more of this type of AI generated advice on websites. Keep an eye out for it.

Using AI in art making, from David Salle to Kevin Kelly (and how Walter Benjamin can help)


Using technology to make art is not new. Gerhard Richter used special software to make the work you see above (4900 colours). Before computers, artists would use lenses and photographs and even craftsmen and women to help them create their final artwork.

What is new now is artists (and non-artists) are using AI to make art. Kevin Kelly talks about how he is using AI in his creative process. David Salle has dived deep into making new work using AI. NYT columnist Farhad Manjoo is using visual tools like Procreate to make AI art.

I have seen Kelly’s work, and Manjoo and Salle’s work are on display in their articles. Kelly experiments with AI to produce images in various styles. Perhaps he has changed, but there is no artist in his work that I can see. With Manjoo, you can see more of him in his drawings. And with Salle the artist’s presence comes in as an editor of the works the AI produces out of his original pieces.

In trying to assess these AI generated works, I think Walter Benjamin and his idea of an artwork having an aura can be useful here. Benjamin was thinking about how original art works have an aura that reproduced images of it do not have. I agree with that: no matter how good a reproduction of a work is, it rarely compares to the original work. There’s that something extra in the original.

I think we can extend out the idea of a work having an aura and also having a humanity. What does a work say about the person who created it? What do I recognize in it that is human and unique to that person? What ideas are there that could only come from that person in that time?

You can argue back that this is irrelevant and that AI generated images are interesting and beautiful and furthermore I cannot distinguish them from human generated images. That might be true. Maybe galleries will be filled with images and sculpture with no human involvement whatsoever, not unlike deep learning software that comes up with ways to be best at playing games like Chess and Go. Such AI artwork may be interesting and even beautiful and may seem to have that aura Benjamin talks about. They just won’t be associated with a human.

Even minimal and conceptual art has a humanity associated with it. Duchamp’s Fountain embodies Duchamp’s intelligence and wit and contrariness.  Arp’s “According to the Laws of Chance” likewise shows his interest in pushing the bounds of what is acceptable in a composition of an abstract work. A person is responsible for the work and the work is tied to them. A person is what makes the work relevant to us in a way that a wall covered with children’s collages or a shelf of toilets in a hardware store are not.

We need a new aesthetic philosophy to deal with the firehose of AI art that is coming our way. I propose we tie the art back to our humanity.

P.S. For more on Richter’s 4900 colours, you can see it here on his web site. There’s also a great view of 4900 colours, here.

 

AI scales up and out. Here’s some pieces that show that.


While there are still prophets and pundits arguing doom and gloom regarding AI, most people and organizations have moved past them and have been adopting the technology widely. Sometimes that has been good, sometimes not. To get a sample of how it’s going, here’s a few dozen pieces on AI worth a look:

  1. The WSJ argues that you soon won’t be able to avoid AI at work or at home. It’s true, but so what?
  2. AI is being used to deal with the threat of wildfires. Good. Also good: AI allows farmers to monitor crops in real time. More good AI:  AI used to find antibodies. By the way, here’s a piece on how to turn chatgpt into a chemistry assistant.
  3. A worthwhile piece on AI lawsuits that are coming due to intellectual property rights.
  4. The New York Times has stopped Openai from crawling its site. More on that, here.
  5. Here’s the associated press AI guidelines for journalists.
  6. Students and writers, bookmark this in case you need it: what to do when you’re accused of writing with AI.
  7. Also, what can you do when AI lies about you?
  8. This is dumb: AI builds software under 7 minutes for less than a dollar.
  9. It’s not surprising hackers find lots of security holes in AI.
  10. Take this with a big grain of salt…one of the leaders from Palantir wonders if AI should be compared to atomic energy.
  11. This is bad: how facial recognition tech resulted in a false arrest.
  12. This is not good: a story on using AI to generate books and other junk here.
  13. This is different:  Microsoft Teams is pairing up with Maybelline to provide AI generated beauty filters / virtual makeup.
  14. It’s not news but it is interesting that NVIDIA is a hot company now due to AI. See more about that, here.
  15. Maybe chatgpt and other AI will just be a tool to do inefficient things efficiently.
  16. A thoughtful piece on powering generative AI and large language models with hybrid cloud with a surprise ending, from one of the senior leaders in my group at IBM.

(Photo: link to image in the NVIDIA story. By Philip Cheung for The New York Times)

Paul McCartney’s newest creations using history and science fiction


McCartney has always been one to explore new ideas. So it doesn’t surprise me to learn that he used AI to help with a ‘final’ Beatles song. Unlike others who might muck about and try to create something Beatlesque with AI, he argues that there is nothing artificial in the “new” Beatles song. AI was just an additional instrument Paul used to create music.

While he’s been in the realm of science fiction with his AI project, he’s also been going back in time using photographs to produce a new book. He writes about the book, “1964: Eyes of the Storm – Photographs and Reflections” in the Guardian, here and in The Atlantic, here.

Regardless of what he is using, here’s a good essay by Austin Kleon on McCartney’s creative process: McCartney on not knowing and doing it now. McCartney often gets dinged for his creative failures, but I would argue he has been so massively successful because he tries and fails often enough and he does not stop whenever so called failure occurs. (It helps that things that were once considered failures (e.g., McCartney I and II) turn out later to be considered successes.)

Here’s to Paul successfully living to be a 100 and providing us more great creative works.

(Image of McCartney recording McCartney II, via Austin Kleon’s site)

AI and the shift from opinion to effect


Here’s some things I’ve been clipping out and saving concerning AI. The pattern I see emerging in my clippings is one where I am less interested in opinion on AI and more interested in the effects of AI on the world. There’s still some good think pieces on AI — I put some here — but the use of AI is accelerating in the world and we need to better understand the outcomes of that.

AI Think Pieces: For people who want to be really afraid of AI, I recommend this Guardian piece on  unknown killer robots and AI and…. well you read and decide. On the flip side of that, here’s a good piece critical of  AI alarmism.

Bill Gates chimes in on how  the risks of AI are real but manageable. My friend Clive Thompson discusses a risk of a different sort regarding AI, and that is the possibility of AI model collapse.

The mystery of how AI actually works is delved into at Vox. To me it is one of the potentially big problems AI will have in the future.

Practical AI: here’s a piece on how the Globe and Mail is using AI in the newsroom. Also practical: how AI is working in the world of watches. I loved this story of how AI is being used to translate cuneiform. AI is also being used to deal with forest fires.

AI effects: This piece is on how AI’s large language models are having a big effect on the Web as we know it. To mitigate such things, the Grammys have outlined new rules for AI use.

When it comes to writing, I think the “Five Books” series is great. They will ask an expert in an area to recommend five books in their field that people should read. So I guess it makes sense that for books on artificial intelligence, they asked…ChatGPT. It’s well worth a read.

Not all articles written by/with AI turn out great. Ask the folks at Gizmodo.

Speaking of AI and books,  these authors have filed a lawsuit against OpenAI for unlawfully ingesting their books. Could be interesting. To add to that, the New York Times reports that “fed up with A.I. companies consuming online content without consent, fan fiction writers, actors, social media companies and news organizations are among those rebelling.”

On the topic of pushback,  China is setting out new rules concerning generative AI with an emphasis on “healthy” content and adherence to socialist values.

Asia is not a monolith, of course. Other parts of Asia have been less than keen on the EU’s AI lobbying blitz. Indeed, India’s Infosys just signed a five-year AI deal with a 2bln target spend, and I expect lots of other Indian companies will be doing something similar regarding AI. Those companies have lots of smart and capable IT people, and when companies like Meta open their AI model for commercial use and throw the nascent market into flux, well, that is going to create more opportunities.

Finally, I suspect there is a lot of this going around: My Boss Won’t stop using ChatGPT.

 

 

AI AI AI AI: here’s some good, bad and scary stuff on AI


I am glad that Apple released a new device last week. It was a refreshing change from what most IT discussions have been about recently. And what topic is most discussed? AI, of course.

And for good reason! There’s lots and lots happening in this space. New AI technology is coming out. New uses for AI are being developed. It’s an exciting space. Like many, I am having a hard time keeping up with it all. But try and keep up I must. And as I do, I have found some interesting links for me (and you) to read:

Clive Thompson has a grim take on the boring apocalypse of today’s AI 

Also grim is this story in WiReD about  tessa, the eating disorder chatbot, and why it had to be suspended. Don’t leave your AI unattended!

Grimly funny: what happens when a lawyer misuses ChatGPT? Hijinks ensue!

Not grim, but clever:  A Vienna museum turned to AI and cats — yes AI and cats — to lure visitors.

Also in WiReD is this thoughtful piece on how  non english languages are being left out of the AI revolution, at least for now. I see this changing really fast.

A good New York Times piece on how training chatbots on smaller language datasets could make them better.

Fascinating to see how much AI is listed in Zapier’s app tips here.

Also fascinating: Google didn’t talk about any of their old AI while discussing their new AI during their I/O 2023 event recently. I wonder why. I wonder if they’re missing an opportunity.

AI junk: Spotify has reportedly removed tens of thousands of ai generated songs. Also junk, in a way: AI interior design. Still more garbage AI uses, this time in the form of  spam books written using ChatGPT.

This seems like an interesting technology:  liquid neural networks.

What is falcon 40b? Only “the best open-source model currently available. Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, etc. ” Worth a visit.

Here’s a how-to on using AI for photo editing. Also, here’s some advice on writing better ChatGPT prompts.

This is a good use of AI: accurately diagnosing tomato leaf diseases.

For those that care: deep learning pioneer Geoffrey Hinton quit Google.

Meanwhile Sam Altman is urging the US congress to regulate AI. In the same time period, he threatened to withdraw from Europe if there was too much regulation, only to back down. It seems like he is playing people here. Writers like Naomi Klein are rightly critical. Related is this piece: Inside the fight to reclaim AI from Big Tech’s control | MIT Technology Review.

Here’s another breathless piece on the AI start up scene in San Francisco. Yawn. Here’s a piece on a new startup with a new AI called Character.ai that lets you talk to famous people. I guess….

Here’s some things my company is doing with AI: Watsonx. But also: IBM to pause hiring for back office jobs that ai could kill. Let’s see about that.

Finally, there’s this story from BlogTO on how Josh Shiaman, a senior feature producer at TSN, set out to create a Jays ad using text-to-video AI generation, admitting that the results “did not go well.” “Did not go well” is an understatement! It’s the stuff of nightmares! 🙂 Go here and see.

In some ways, maybe that video is a good metaphor for AI: starts off dreamy and then turns horrific.

Or maybe not.

If you want to get a better understanding of generative AI, it pays to see what the New York Times and Bloomberg are up to

One of the problems with generative AI like ChatGPT is it makes you think it is magical. You type in some prompt, it comes back with a complex answer, the next thing you know, you are thinking this thing is smarter than a human. It doesn’t help that there is so much hype surrounding the technology. All of this can make you think it’s supernatural.

Well, it isn’t. It’s simply good IT. It consists of data, hardware and software, just like any other IT. To get a better appreciation of the ordinary nature of that, it helps to look at two recent examples: the AI the New York Times recently built and the AI Bloomberg just built.

It’s best to start with what the Times built. They used software called nanoGPT (karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs) and took the works of Jane Austen, Shakespeare and more to build a chatGPT-like program on their laptops. Then they walked through the steps of getting it working, here: Let Us Show You How GPT Works — Using Jane Austen – The New York Times. It works pretty well after much much training. Obviously it is not as massive or sophisticated as ChatGPT, but after reading the article, you will have a better sense of how this technology works, and why it’s impressive but not magical.

After that, I recommend reading more about BloombergGPT. Their press release states:

Bloomberg today released a research paper detailing the development of BloombergGPT, a new large-scale generative artificial intelligence (AI) model. This large language model (LLM) has been specifically trained on a wide range of financial data to support a diverse set of natural language processing (NLP) tasks within the financial industry.

You can find a link to that research paper, here:  BloombergGPT: A Large Language Model for Finance. What I liked about that paper is it walks through the approach they took, the data they used, and the technology deployed to make their model. Even better, they talk about how it is currently succeeding and what some of the limits of it are.

I’m happy that both these companies have been good about sharing what they are doing with this technology. I might even try and use an old laptop to build my own AI. I mean, who wouldn’t benefit from tapping into the genius of Shakespeare or Jane Austen?

For more on what Bloomberg is doing, see this: Bloomberg plans to integrate GPT-style A.I. into its terminal

 

A plethora of good links on AI

There’s still an overwhelming amount of material being written on AI. Here’s a few lists of some of the ones I found most interesting:

ChatGPT: ChatGPT (3 and 4) still dominate much of the discussion I see around AI. For instance:

Using AI: people are trying to use AI for practical purposes, as those last few links showed. Here’s some more examples:

AI and imagery: not all AI is about text. There’s quite a lot going on in the visual space too. Here’s a taste:

AI and the problems it causes: there’s lots of risks with any new technology, and AI is no exception. Cases in point:

Last but not least: 

The profiles (beat-sweeteners?) of Sam Altman

Oddly (not oddly at all?) both the New York Times and the Wall Street Journal had profiles of  Sam Altman at the end of March:

  1. Sam Altman, the ChatGPT King, Is Pretty Sure It’s All Going to Be OK – The New York Times
  2. The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT – WSJ

Given the contentious nature of AI and ChatGPT, you might think that those pieces would have asked tough questions of Altman concerning AI. Especially since Lesley Stahl did something similar with Microsoft execs a few weeks earlier. Perhaps the work of Stahl is why Microsoft / OpenAI wanted Altman to get out there with his story. If that was the intent, then it seemed to work. Nothing too tough in either of these profiles.

Then again, perhaps they were written as beat-sweeteners. After all, getting access is just as important for tech journalists as it is for political journalists. If you want to write more about AI in the future, being able to ring up Altman and his gang and get through to them for a comment seems like something you might want for your job. No doubt profiles like that can help with that access.

For more on the topic of beat-sweeteners, I give you this: Slate’s Beat-Sweetener Reader in Columbia Journalism Review.

 

 

 

The Gartner Hype Cycle: one good way to think about technological hype

Below is the Gartner hype cycle curve with its famous five phases:

For those not familiar with it, the chart below breaks it down further and helps you see it in action. Let’s examine that.

Chances are, if you are not working with emerging IT and you start hearing about a hyped technology (e.g., categories like blockchain or AI), it is in the Peak of Inflated Expectations phase. At that stage the technology starts going from discussions in places like Silicon Valley to write-ups in the New York Times. It’s also in that phase that two other things happen: “Activity beyond early adopters” and “Negative press begins”.

That’s where AI — specifically generative AI — is: lots of write ups have occurred, people are playing around with it, and now the negative press occurs.

After that phase technologies like AI start to slide down into my favorite phase of the curve: the Trough of Disillusionment. It’s the place where technology goes to die. It’s the place where technology tries to cross the chasm and fails.

See that gap on the Technology Adoption Lifecycle curve? If technology can get past that gap (“The Chasm”) and get adopted by more and more people, then it will move on through the Gartner hype curve, up the Slope of Enlightenment and onto the Plateau of Productivity. As that happens, there is less talking and more doing when it comes to the tech.

That said, my belief is that most technology dies in the Trough. Most technology does not and cannot cross the chasm. Case in point, blockchain. Look at the hype curve for blockchain in 2019:

At the time people were imagining blockchain everywhere: in gaming, in government, in supply chain…you name it. Now some of that has moved on to the end of the hype cycle, but most of it is going to die in the Trough.

The Gartner Hype Curve is a useful way to assess technology that is being talked about, as is the Technology Adoption Curve. Another good way of thinking about hype can be found in this piece I wrote here. In that piece I show there are five levels of hype: Marketing Claims, Exaggerated Returns, Utopian Futures, Magical Thinking, and Othering. For companies like Microsoft talking about AI, the hype levels are at the level of Exaggerated Returns. For people writing think pieces on AI, the hype levels go from Utopian Futures to Othering.

In the end, however you assess it, it’s all just hype. When a technology comes out, assess it for yourself as best as you can. Take anything being said and assign it a level of hype from 1-5. If you are trying to figure out if something will eventually be adopted, use the curves above.

Good luck!

A handy guide to spotting AI generated images

Well, two handy guides. One from the Verge and one from the Washington Post. The Verge talks about the phenomenon in general, while the Post is more specific.

It’s possible that the AI software that generates imagery will get better. But for now, those guides are helpful in spotting fakes.

(Image from the Verge. It is highlighting things to look for: weird hands, illegibility, odd shadows.)

More reasons why ChatGPT is not going to replace coders

I have been arguing recently about the limits of the current AI and why it is not going to take over the job of coding yet. I am not alone in this regard. Clive Thompson, who knows a lot about the topic, recently wrote this: Why ChatGPT Won’t Replace Coders Just Yet. Among other reasons, the “‘bullshit’ problem turns up in code, too”. I recommend you read Clive’s take on the subject. And after you read that, check out his book, “Coders”. You can order it, here, from his publisher. I think it’s a classic and one of the best things written on software.

I asked ChatGPT to do my job and write some code to build out some cloud infrastructure. How did it go?

For fun I decided to see if I am ready to be replaced with AI via ChatGPT. Part of my work is building cloud based infrastructure using a language called HCL from Hashicorp and a tool called Terraform. You write one or more program files using HCL, run them through Terraform, and voila, you have the infrastructure you need.

For this exercise I entered the following prompt:

Create a terraform file for IBM cloud that will provision a VPC, set up networking for the VPC, and provision a virtual server for VPC. It will have 1 security group and a rule for this security group to allow SSH connection to your virtual server instance, 1 subnet to enable networking in your VPC, 1 VPC virtual server instance and 1 floating IP address that you use to access your VPC virtual server instance over the public network.

A good prompt, I thought. Nothing vague or conflicting. Practically a form of code itself.

I was shocked when in a few seconds I got this response:

To provision a VPC, networking, and a virtual server with a security group and a floating IP address on IBM Cloud, you can use the following Terraform configuration file:

# Configure the IBM provider
provider "ibm" {
  generation = 2
  region     = "us-south"
}

# Create a VPC
resource "ibm_is_vpc" "vpc" {
  name                      = "my-vpc"
  address_prefix_management = "auto"
}

# Create a subnet in the VPC
resource "ibm_is_subnet" "subnet" {
  name            = "my-subnet"
  vpc             = ibm_is_vpc.vpc.id
  ipv4_cidr_block = "10.240.0.0/24"
}

# Create a security group
resource "ibm_is_security_group" "security_group" {
  name        = "my-security-group"
  vpc         = ibm_is_vpc.vpc.id
  description = "Allow SSH inbound traffic"
}

# Add an SSH rule to the security group
resource "ibm_is_security_group_rule" "ssh_rule" {
  security_group = ibm_is_security_group.security_group.id
  direction      = "inbound"
  protocol       = "tcp"
  port_min       = 22
  port_max       = 22
  remote         = "0.0.0.0/0"
}

# Create a virtual server instance in the subnet
resource "ibm_is_instance" "instance" {
  name    = "my-instance"
  image   = "ibm-centos-7-10-minimal-amd64-2"
  profile = "bx2-2x8"
  primary_network_interface {
    subnet = ibm_is_subnet.subnet.id
  }
  vpc             = ibm_is_vpc.vpc.id
  security_groups = [ibm_is_security_group.security_group.id]
}

# Create a floating IP address
resource "ibm_is_floating_ip" "floating_ip" {
  name = "my-floating-ip"
}

# Associate the floating IP address with the virtual server instance
resource "ibm_is_floating_ip_attachment" "attachment" {
  floating_ip = ibm_is_floating_ip.floating_ip.address
  instance    = ibm_is_instance.instance.id
}

Before running the above code, replace the placeholder values with your IBM Cloud API key and with the appropriate values for your IBM Cloud region and availability zone.
Once you have saved this configuration file, you can initialize Terraform by running terraform init, and then create the resources by running terraform apply. After a few minutes, Terraform will create the VPC, subnet, security group, virtual server, and floating IP address. You can then use the floating IP address to SSH into your virtual server instance over the public network.

Wow! What an amazing response, I thought!  It was quick, detailed, seemingly knowledgeable. Also…wrong.

There was so much fundamentally wrong with the code that the only way to get it to work was to rewrite much of it and also add a bunch of new code to make it work. I can’t even tell you how it came up with some of the statements: they are not defined that way for Terraform specs for IBM Cloud.

I even had the AI regenerate the code to see if it could get it right the second time. Instead the new code had 9 errors in it.

Fine. My manager provided me with a prompt of his own: see if it will work on AWS. (Good prompt, Ted!)

I did try it on AWS and Azure. With Azure the response was an incomplete script. (At least the IBM script was complete, though wrong.) With AWS the script was better. I could enter terraform plan and terraform thought it looked good. But once I entered terraform apply to build the resources, it failed.

I suspect the larger problem is lack of sufficient overlapping data for the AI tools to train on. So it sort of gets the code right, but “sort of” isn’t really good enough.

I see people on the Internet raving about how well AI is doing writing code. Some of the examples are interesting, but I think it has a way to go. I’ll stick to doing my day job. Without AI to help. 🙂

 

Paul Kedrosky & Eric Norlin of SKV know nothing about software and you should ignore them

Last week Paul Kedrosky & Eric Norlin of SKV wrote this piece, Society’s Technical Debt and Software’s Gutenberg Moment, and several smart people I follow seemed to like this and think it something worthwhile. It’s not.

It’s not worthwhile because Kedrosky and Norlin seem to know little if anything about software. Specifically, they don’t seem to know anything about:

  • software development
  • the nature of programming or coding
  • technical debt
  • the total cost of software

Let me wade through their grand and woolly pronouncements and focus on that.

They don’t understand software development: For Kedrosky and Norlin, what software engineers do is predictable and grammatical. (See chart, top right).

To understand why that is wrong, we need to step back. The first part of software development and software engineering should start with requirements. It is a very hard and very human thing to gather those requirements, analyze them, and then design a system around them that meets the needs of the person(s) with the requirements. See where architects are in that chart? In the Disordered and Ad hoc part in the bottom left. Good IT architects and business analysts and software engineers also reside there, at least in the first phase of software development. To get to the predictable and grammatical section which comes in later phases should take a lot of work. It can be difficult and time consuming. That is why software development can be expensive. (Unless you do it poorly: then you get a bunch of crappy code that is hard to maintain or has to be dramatically refactored and rewritten because of the actual technical debt you incurred by rushing it out the door.)

Kedrosky and Norlin seem to exclude that from the role of software engineering. For them, software engineering seems to be primarily writing software. Coding in other words. Let’s ignore the costs of designing the code, testing the code, deploying the code, operating the code, and fixing the code. Let’s assume the bulk of the cost is in writing the code and the goal is to reduce that cost to zero.

That’s not just my assumption: it seems to be their assumption, too. They state: “Startups spend millions to hire engineers; large companies continue spending millions keeping them around. And, while markets have clearing prices, where supply and demand meet up, we still know that when wages stay higher than comparable positions in other sectors, less of the goods gets produced than is societally desirable. In this case, that underproduced good is…software”.

Perhaps that is how they do things in San Francisco, but the rest of the world has moved on from that model ages ago. There are reasons that countries like India have become powerhouses in terms of software development: they are good software developers and they are relatively low cost. So when they say: “software is chugging along, producing the same thing in ways that mostly wouldn’t seem vastly different to developers doing the same things decades ago….(with) hands pounding out code on keyboards”, they are wrong because the nature of developing software has changed. And one of the ways it has changed is that the vast majority of software is written in places that have the lowest cost software developers. So when they say “that software cannot reach its fullest potential without escaping the shackles of the software industry, with its high costs, and, yes, relatively low productivity”, they seem to be locked in a model where software is written the way it is in Silicon Valley by Stanford educated software engineers. The model does not match the real world of software development. Already the bulk of the cost in writing code in most of the world has been reduced not to zero, but to a very small number compared to the cost of writing code in Silicon Valley or North America. Those costs have been wrung out.

They don’t understand coding: Kedrosky and Norlin state: “A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment”. In their piece they use an example of AI writing some Python code that can “open a text file and get rid of all the emojis, except for one I like, and then save it again”. Even they know this is “a trivial, boring and stupid example” and say “it’s not complex code”.

Here’s the problem with writing code, at least with the current AI. There are at least three difficulties that AI code generators suffer from: triviality, incorrectness, and prompt skill.

First, the problem of triviality. It’s true: AI is good at making trivial code. It’s hard to know how machine learning software produces this trivial code, but it’s likely because there are lots of examples of such code on the Internet for them to train on. If you need trivial code, AI can quickly produce it.
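To make the triviality point concrete, here is roughly what that kind of code looks like. This is just a minimal sketch of the “strip the emojis” example they describe; the file name, the emoji I keep, and the use of the third-party emoji package are my own assumptions, not anything from their piece.

```python
# A minimal sketch of the trivial "remove all emojis except one" task.
# Assumptions: the input file is input.txt, the emoji to keep is 🙂,
# and the third-party "emoji" package is installed (pip install emoji).
import emoji

KEEP = "🙂"

def strip_emojis(text: str, keep: str = KEEP) -> str:
    # Drop every character the emoji package recognizes as an emoji,
    # unless it is the one we want to keep.
    return "".join(ch for ch in text if ch == keep or not emoji.is_emoji(ch))

with open("input.txt", encoding="utf-8") as f:
    cleaned = strip_emojis(f.read())

with open("input.txt", "w", encoding="utf-8") as f:
    f.write(cleaned)
```

That is the whole thing. Useful if you need it, but hardly a Gutenberg moment.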

That said, you don’t need AI to produce trivial code. The Internet is full of it. (How do you think the AI learned to code?) If someone who is not a software developer wants to learn how to write trivial code they can just as easily go to a site like w3schools.com and get it. Anyone can also copy and paste that code and it too will run. And with a tutorial site like w3schools.com the explanation for the code you see will be correct, unlike some of the answers I’ve received from AI.

But what about non-trivial code? That’s where we run into the problem of  incorrectness. If someone prompts AI for code (trivial or non-trivial) they have no way of knowing it is correct, short of running it. AI can produce code quickly and easily for you, but if it is incorrect then you have to debug it. And debugging is a non-trivial skill. The more complex or more general you make your request, the more buggy the code will likely be, and the more effort and skill you have to contribute to make it work.
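To give a hypothetical illustration of what I mean by incorrectness: code like the snippet below looks perfectly reasonable, runs without complaint, and is still wrong in a way you only catch if you already know how to debug. (This is a classic Python pitfall I am using as a stand-in; it is not output from any particular AI tool.)

```python
# Looks fine, runs fine once, and quietly misbehaves.
def add_tag(tag, tags=[]):      # bug: the default list is created once and shared across calls
    tags.append(tag)
    return tags

print(add_tag("draft"))         # ['draft']
print(add_tag("urgent"))        # ['draft', 'urgent']  <- state leaked from the first call
```

Spotting and fixing that kind of thing is exactly the non-trivial skill the “anyone can write software” crowd assumes away.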

You might say: incorrectness can be dealt with by better prompting skills. That’s a big assumption, but let’s say it’s true. Now you get to the third problem. To get correct and non-trivial outputs (if you can get them at all), you have to craft really good prompts. That’s not a skill just anyone will have. You will have to develop specific skills (prompt engineering skills) to be able to have the AI write Python or Go or whatever computer language you need. At that point the prompt to produce that code is a form of code itself.
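Here is a hypothetical sketch of where that ends up: the prompt gets parameterized, versioned and maintained like any other source file. None of this is a real prompt from a real project; it is just to show the shape of the thing.

```python
# A made-up example of a "really good prompt" managed as code.
LANGUAGE = "Python"
STYLE_GUIDE = "PEP 8"

PROMPT_TEMPLATE = f"""
You are a senior {LANGUAGE} developer.
Write a function that parses an ISO 8601 date string and returns a date object.
Constraints:
- Follow {STYLE_GUIDE}.
- Raise ValueError on malformed input.
- Include type hints and a docstring.
- Do not use any third-party libraries.
Return only the code, no explanation.
"""

print(PROMPT_TEMPLATE)
```

Parameters, constraints, expected behaviour: that is a spec. In other words, a program.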

You might push back and say: sure, the prompts might be complex, but they are less complicated than the actual software I produce. And that leads to the next problem: technical debt.

They don’t understand technical debt: when it comes to technical debt, Kedrosky and Norlin have two problems. First, they don’t understand the idea of technical debt! In the beginning of their piece they state: “Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.”

That’s not how those of us in the IT community define it. Technical debt is not a lack of software supply. Even Wikipedia knows better: “In software development, technical debt (also known as design debt or code debt) is the implied cost of future reworking required when choosing an easy but limited solution instead of a better approach that could take more time”. THAT is technical debt.
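If you want a concrete picture of THAT kind of debt, here is a tiny, made-up illustration. The quick version works today; the implied cost of reworking it later (every rate change, every new jurisdiction) is the debt. The 13% rate is just an example value.

```python
# "Easy but limited" versus "better approach that could take more time".

def total_with_tax_quick(amount):
    # Easy but limited: the tax rate is baked into the code, so every rate
    # change means changing and re-testing the code itself.
    return amount * 1.13

def total_with_tax(amount: float, tax_rate: float) -> float:
    # More work up front, less rework later: the rate is a validated parameter.
    if tax_rate < 0:
        raise ValueError("tax_rate must be non-negative")
    return amount * (1 + tax_rate)

print(total_with_tax_quick(100))
print(total_with_tax(100, 0.13))
```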

One of the things I do in my work is assess technical debt, either in legacy systems or new systems. My belief is that once AI can produce code that is non-trivial and correct and based on prompts, we are going to get an explosion of technical debt. We are going to get code that appears to solve a problem and do so with a volume of python (or Java or Go or what have you) that the prompt engineer generated and does not understand. It will be like copy and paste code amplified. Years from now people will look at all this AI generated code and wonder why it is the way it is and why it works the way it does. It will take a bunch of counter AI to translate this code into something understandable, if that will even be possible. Meanwhile companies will be burdened with higher levels of technical debt accelerated by the use of AI developed software. AI is going to make things much worse, if anything.

They don’t understand the total cost of software:  Kedrosky and Norlin included this fantasy chart in their piece.

First off, most people or companies purchase software, not software engineers. That’s the better comparison to hardware.  And if you do replace “Software engineers” with software, then in certain areas of software this chart has already happened. The cost of software has been driven to zero.

What drove this? Not AI. Two big things that drove this are open source and app stores.

In many cases, open source drove the (licensing) cost of software to zero. For example, when the web first took off in the 90s, I recall Netscape sold their web server software for $10,000. Now? You can download and run web server software like nginx on a Raspberry Pi for free. Heck, you can write your own web server using node.js.
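To show just how little code “a web server” is these days: I mentioned node.js above, but the same point holds with Python’s standard library, which is the language I’m using for examples here. This is a bare-bones sketch; the port number is an arbitrary choice.

```python
# A free, few-line web server that serves the current directory.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
    print("Serving the current directory on http://localhost:8080 ...")
    server.serve_forever()
```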

Likewise with app stores. If you wanted to buy software for your PC in the 80s or 90s, you had to pay significantly more than 99 cents for it. It certainly was not free. But the app stores drove the expectation people had that software should be free or practically free. And that expectation drove down the cost of software.

Yet despite developments like open source and app stores driving the cost of software close to zero, people and organizations are still paying plenty for the “free” software. And you will too with AI software, whether it’s commercial software or software for your personal use.

I believe that if you have AI generating tons of free personal software, then you will get a glut of crappy apps and other software tools. If you think it’s hard to determine good personal software now, wait until that happens. There will still be good software, but to develop that will cost money, and that money will be recovered somehow, just like it is today with free apps with in app purchases or apps that steal your personal information and sell it to others. And people will still pay for software from companies like Adobe. They are paying for quality.

Likewise with commercial software. There is tons of open source software out there. Most of it is wisely avoided in commercial settings. However the good stuff is used and it is indeed free to licence and use.

However, the total cost of software is more than the licencing cost. Bad AI software will need more capacity to run and more people to support it, just like bad open source does. And good AI software will need people and services to keep it going, just like good open source does. Some form of operations, even if it is AIOps (another cost), will need expensive humans to ensure the increasing levels of quality required.

So AI can churn out tons of free software. But the total cost of such software will show up elsewhere.

To summarize, producing good software is hard. It’s hard to figure out what is required, and it is hard to design and build and run it to do what is required. Likewise, understanding software is hard. It’s called code for a reason. Bad code is tough to figure out, but even good code that is out of date or used incorrectly can have problems, and solving those problems is hard. And last, free software has other costs associated with it.

P.S. It’s very hard to keep up and counter all the hot takes on what AI is going to do for the world. Most of them I just let slide or let others better than me deal with. But I wanted to address this piece in particular, since it seemed influential and un-countered.

P.P.S. Besides all of the above, they also made some statements that just had me wondering what they were thinking. For example, when they wrote: “This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation.” Pure hype.

Or this: “Software is misunderstood. It can feel like a discrete thing, something with which we interact. But, really, it is the intrusion into our world of something very alien. It is the strange interaction of electricity, semiconductors, and instructions, all of which somehow magically control objects that range from screens to robots to phones, to medical devices, laptops, and a bewildering multitude of other things.” I mean, what is that all about?

And this:  “The current generation of AI models are a missile aimed, however unintentionally, directly at software production itself”. Pure bombast.

Or this hype: “They are “toys” in that they are able to produce snippets of code for real people, especially non-coders, that one incredibly small group would have thought trivial, and another immense group would have thought impossible. That. Changes. Everything.”

And this is flat out wrong: “This is just the beginning (and it will only get better). It’s possible to write almost every sort of code with such technologies, from microservices joining together various web services (a task for which you might previously have paid a developer $10,000 on Upwork) to an entire mobile app (a task that might cost you $20,000 to $50,000 or more).”

 


Will AI tools based on large language models (LLMs) become as smart or smarter than us?

With the success and growth of tools like ChatGPT, some are speculating that the current AI could lead us to a point where AI is as smart if not smarter than us. Sounds ominous.

When considering such ominous thoughts, it’s important to step back and remember that Large Language Models (LLMs) are tools based in whole or in part on machine learning technology. Despite their sophistication, they still suffer from the same limitations that other machine learning technologies suffer from, namely:

    • bias
    • explainability
    • overfitting
    • learning the wrong lessons
    • brittleness

There are more problems than those for specific tools like ChatGPT, as Gary Marcus outlines here:

  • the need for retraining to get up to date
  • lack of truthfulness
  • lack of reliability
  • it may be getting worse due to data contamination (Garbage in, garbage out)

It’s hard to know if current AI technology will overcome these limitations. It’s especially hard to know when orgs like OpenAI do this.

My belief is these tools will hit a peak soon and level off or start to decline. They won’t get as smart or smarter than us. Not in their current form. But that’s based on a general set of experiences I’ve acquired from being in IT for so long. I can’t say for certain.

Remain calm. That’s the best bit of advice I have so far. Don’t let the chattering class get you fearful. In the meanwhile, check out the links provided here. Education is the antidote to fear.

Are AI and ChatGPT the same thing?

Reading about all the amazing things done by the current AI might lead you to think that AI = ChatGPT (or DALL-E, or whatever people like OpenAI are working on). It’s true, it is currently considered AI, but there is more to AI than that.

As this piece explains, How ChatGPT Works: The Model Behind The Bot:

ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Models (LLMs).

Like ChatGPT, many of the current and successful AI tools are examples of machine learning. And while machine learning is powerful, it is just part of AI, as this diagram nicely shows:

To get an idea of just how varied and complex the field of artificial intelligence is, just take a glance at this outline of AI. As you can see, AI incorporates a wide range of topics and includes many different forms of technology. Machine learning is just part of it. So ChatGPT is AI, but AI is more than ChatGPT.

Something to keep in mind when fans and hypesters of the latest AI technology make it seem like there’s nothing more to the field of AI than that.

What is AI Winter all about and why do people who’ve worked in AI tend to talk about it?

It might surprise people, but work in AI has been going on for some time. In fact it started as early as the mid-1950s. From the 50s until the 70s, “computers were solving algebra word problems, proving theorems in geometry and learning to speak English”. They were nothing like OpenAI’s ChatGPT, but they were impressive in their own way. Just like now, people were thinking the sky’s the limit.

Then three things happened: the first AI winter from 1974 until 1980, the boom years from 1980-1987, and then the next AI winter from 1987-1993. I was swept up in the second AI winter, and like the first one, there was a combination of hitting a wall in terms of what the technology could do followed by a drying up of funding.

During the boom times it seemed like there would be no stopping AI and it would eventually be able to do everything humans can do and more. It feels that way now with the current AI boom. People like OpenAI and others are saying the sky’s the limit and nothing is impossible. But just like in the previous boom eras, I think the current AI boom will hit a wall with the technology (we are seeing some of it already). At that point we may see a reduction in funding from companies like Microsoft and Google and more (just like how we are seeing a pullback from them on voice recognition technology like Alexa and Siri).

So yes, the current AI technology is exciting. And yes, it seems like there is no end to what it can do. But I think we will get another AI winter sooner than later, and during this time work will continue in the AI space but you’ll no longer be reading news about it daily. The AI effect will also occur, and the work being done by people like OpenAI will just get incorporated into the everyday tools we use, just like autocorrect and image recognition are now just things we take for granted.

P.S. If you are interested in the history of the second AI winter, this piece is good.

What is the AI effect and why should you care?

Since there is so much talk about AI now, I think it is good for people to be familiar with some key ideas concerning AI. One of these is the AI effect. The cool AI you are using now, be it ChatGPT or DALL-E or something else, will eventually get incorporated into some commonplace piece of IT and you won’t even think much of it. You certainly won’t be reading about it everywhere. If anything you and I will complain about it, much like we complain about autocorrect.

So what is the AI Effect? As Wikipedia explains:

“The AI effect” is that line of thinking, the tendency to redefine AI to mean: “AI is anything that has not been done yet.” This is the common public misperception, that as soon as AI successfully solves a problem, that solution method is no longer within the domain of AI. Geist credits John McCarthy giving this phenomenon its name, the “AI effect”.

McCorduck calls it an “odd paradox” that “practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the ‘failures’, the tough nuts that couldn’t yet be cracked.”[5]

It’s true. Many things over the years that were once thought of as AI are now considered simply software or hardware, if we even think of them at all. Whether it is winning at chess, recognizing your voice, or recognizing text in an image, these things are commonplace now, but were lofty goals for AI researchers once.

The AI effect is a key idea to keep in mind when people are hyping any new AI as the thing that will change everything. If the new AI becomes useful, we will likely stop thinking it is AI.

For more on the topic, see: AI effect – Wikipedia

No, prompt engineering is not going to become a hot job. Let a former knowledge engineer explain

With the rise of AI, LLMs, ChatGPT and more, a new skill has arisen. The skill involves knowing how to construct prompts for the AI software in such a way that you get an optimal result. This has led a number of people to start saying things like this: prompt engineer is the next big job. I am here to say this is wrong. Let me explain.

I was heavily into AI in the late 20th century, just before the last AI winter. One of the hot jobs at that time was going to be knowledge engineer (KE). A big part of AI then was the development of expert systems, and the job of the KE was to take the expertise of someone and translate it into rules that the expert system could use to make decisions. Among other things, part of my role was to be a KE.
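To give a feel for what that meant in practice, here is a toy, made-up sketch of the kind of thing a KE produced: an expert’s judgement captured as explicit if/then rules that an engine can evaluate. Real expert systems and rules engines do much more (chaining, conflict resolution, certainty factors), but the shape is the same. None of this is from any system I actually worked on.

```python
# A toy rule base: (condition, conclusion) pairs evaluated against a set of facts.
RULES = [
    (lambda f: f["temperature_c"] >= 38.0 and f["cough"], "possible flu: recommend seeing a doctor"),
    (lambda f: f["temperature_c"] < 38.0 and f["cough"],  "likely a cold: rest and fluids"),
]

def evaluate(facts):
    # Fire the first rule whose condition matches the facts.
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "no rule applies"

print(evaluate({"temperature_c": 38.5, "cough": True}))
```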

So what happened? Well, first off, AI winter happened. People stopped developing expert systems and went and took on other roles.  Ironically, rules engines (essentially expert systems) did come back, but all the hype surrounding them was gone, and the role of KE was gone too. It wasn’t needed. A business analyst can just as easily determine what the rules are and then have a technical specialist store that in the rules engine.

Assuming tools like ChatGPT were to last, I would expect the creation of prompts for them to be taken on by business analysts and technical specialists. Business as usual, in other words. No need for a “prompt engineer”.

Also, you should not assume things like ChatGPT will last. How these tools work is highly volatile; they are not well structured things like programming languages or SQL queries. The prompts that worked on them last week may result in nothing a week later. Furthermore, there are so many problems with the new AI that I could easily see them falling into a new AI winter in the next few years.

So, no, I don’t think Prompt Engineering is a thing that will last. If you want to update your resume to say Prompt Engineer after you’ve hacked around with one of the current AI tools out there, knock yourself out. Just don’t get too far ahead of yourself and think there is going to be a career path there.

Fake beaches! Fake lawyers! ChatGPT! and more (what I find interesting in AI, Feb 2023)


There is so much being written about AI that I decided to blog about it separately from other tech. Plus AI is so much more than just tech. It touches on education, art, the law, medicine…pretty much anything you can think of. Let me show you.

Education: there’s been lots said about how students can (are?) using ChatGPT to cheat on tests. This piece argues that this is a good time to reassess education as a result. Meanwhile, this Princeton Student built GPTZero to detect AI-written essays, so I suspect some people will also just want to crack down on the use of AI. Will that stop the use of AI? I doubt it. Already companies like Microsoft are looking to add AI technology to software like Word. Expect AI to flood and overwhelm education, just like calculators once did.

Art: artists have been adversely affected by AI for a while. Some artists decided to rise up against it by creating anti-AI protest work. You can read about that here. It’s tough for artists to push back on AI abuses: they don’t have enough clout. One org that will not have a problem with clout is Getty Images. They’ve already started to fight back against AI with a lawsuit. Good.

Is AI doing art a bad thing? I’ve read many people saying it will cause illustrators and other professional artists to lose their jobs. Austin Kleon has an interesting take on that. I think he is missing the point for some artists, but it’s worth reading.

Work: besides artists losing their jobs, others could lose theirs as well. The NYPost did a piece on how ChatGPT could make this list of jobs obsolete. That may be shocking to some, but for people like me who have been in IT for some time, it’s just a fact that technology takes away work. Many of us embrace that, so that when AI tools come along and do coding, we say “Yay!”. In my experience, humans just move on to provide business value in different ways.

The law: one place I wish people would be more cautious with using AI is in the law. For instance, we had this happen: an AI robot lawyer was set to argue in court. Real lawyers shut it down. I get it: lawyers are expensive and AI can help some people, but that’s not the way to do it. Another example is this, where you have AI generating wills. Needless to say, it has a way to go. An even worse example: Developers Created AI to Generate Police Sketches. Experts Are Horrified. Police are often the worst abusers of AI and other technology, sadly.

Medicine: AI can help with medicine, as this shows. Again, like the law, doctors need to be careful. But that seems more promising.

The future and the present: if you want an idea of where AI is going, I recommend this piece in technologyreview and this piece in WaPo.

Meanwhile, in the present, Microsoft and Google will be battling it out this year. Microsoft is in the lead so far, but reading this, I am reminded of the many pitfalls ahead: Microsoft’s new AI Prometheus didn’t want to talk about the Holocaust. Yikes. As for Google, reading this blog post of theirs on their new AI tool Bard had me thinking it would be a contender. Instead it was such a debacle even Googlers were complaining about it! I am sure they will get it right, but holy smokes.

Finally: this is what AI thinks about Toronto. Ha! As for that beach I mentioned, you will want to read here: This beach does not exist.

(Image above: ChatGPT logo from Wikipedia)

 

Some very good thoughts (especially at the end) and the usual ramblings on a new year (i.e. the January 2023 edition of my not-a-newsletter newsletter)

We finally closed the book on another pandemic year (2022), and have moved through the first month of 2023. Yay for us!  Is 2023 going to be a pandemic year as well? An endemic year perhaps? We don’t know. One thing for sure: compared to last January, this one has been much gentler.

I think in some ways 2023 may be a transition year. We continue to have transitions when it comes to COVID. We still have new variants like the Kraken (XBB.1.5), which has surged to 40.5% of all infections, and rises in hospitalizations. But we take that as a matter of course now. Indeed, there is talk of having annual COVID and flu vaccines. COVID may be more serious than the flu in terms of illness and death, but we may end up approaching them in the same way. No one talks much of flu deaths, and perhaps other than places like Nova Scotia, no one will talk about COVID deaths either. For example, in my province of Ontario it is relatively easy to track hospitalizations related to COVID: it’s relatively hard to report on deaths.

I know because I still have been reporting on COVID hospitalizations every week on twitter for months. My last update was this one:

As I tweeted, the numbers have been dropping recently. Even the ICU numbers, which shot up due to the tripledemic, have declined as the tripledemic declined. Thank god: the pediatric ICUs in November were over 100% full for a time.

So we are transitioning in a positive direction. Good. And not just with COVID.  Everywhere you see spike graphs, like this one for unemployment:

To this one for inflation:

My expectation is that the annual inflation rate will continue to transition and decline in 2023, and interest rates will follow them. That is not to diminish the impact that inflation has had so far. Things have reached the point where people are stealing food and law firms are promising to defend them for free. That said, many, including the New York Times, expect inflation to cool this year. Perhaps it will drop back to where it used to be (i.e. below 3%). If you are skeptical, I recommend this piece in VOX.

Unlike COVID or inflation, not everything has the prospect of improving in 2023. Guns in the US continue to be a major problem. There is no end in sight for the war in Ukraine. NATO is still supportive and continues to send weapons, although it seems like Zelenskyy had to clear the decks before that occurred. As for cryptocurrencies, it may not be a year of recovery for them as the trial of SBF and FTX unfolds. But who knows: maybe this rally will make a difference.

I suspect crypto will stay dormant for many reasons. One big reason is that tech is going to change its focus from Web3 to AI. Sorry Web3. (Sorry metaverse for that matter!) Microsoft alone is spending billions on it. AI will be all anyone will talk about this year. (No one knew what to do with crypto, save techies and rich people flogging NFTs. Everyone I know seems to be using ChatGPT and the like. That’s a key difference). I’ll be writing more about AI in standalone posts in 2023; there will be so much going on.

In 2023 I expect a continuation of the trend of people flooding back into cities after having left them, based on data like this: Annual demographic estimates census metropolitan areas and census. While residences have become scarce (and rents have become high) as a result, people have not been flooding back into offices. So much so that places like NYC are looking to convert office spaces to residential spaces. The problem with the pandemic is that the changes it has forced on society are more rapid than social systems can respond. But respond they will.

Then again, a new surge could reoccur in China. If that occurs, all bets are off. For now my bets are staying on the table.

Finally, thanks for reading this and anything else you read on this blog recently. I appreciate it. I am optimistic for 2023 in many ways. I hope you are too.

Keep wearing your masks when advisable. Get vaxxed to the max.  Try not to pay attention to Elon Musk or the fate of Twitter: that will all play out in due course. Don’t get too hung up about what AI is going to do: that will all play out as well. Continue to read newsletters. Watch streaming. Listen to podcasts. Most importantly: get out and about whenever you can.

There will always be bad people in the world, and bad acts occurring. Do what you can to prevent that from happening, but don’t rob yourself of your capacity for joy as a result. Be a happy warrior on the side of good. Joy is your armour.

Never forget: you have lived and possibly thrived through some of the most dramatically difficult times in history.  You deserve better times ahead.

Enjoy yourself. Live your life robustly. Whenever you feel lethargic, think back to those times of being locked down and unable to even go to a park and sit down.  Let’s go and get it. Here’s to a better year ahead. We are counting on you, 2023.

Sorry robots: no one is afraid of YOU any more. Now everyone is freaking out about AI instead


It seems weird to think there are trends when it comes to fearing technology. But thinking about it, there seem to be. For a while my sources of information kept providing me stories of how frightening robots were. Recently that has shifted, and the focus moved to how frightening AI is. Fearing robots is no longer trendy.

Well, trendy or not, here are some stories about robots that have had people concerned. If you have any energy left from being fearful of AI, I recommend them. 🙂

The fact that a city is even contemplating this is worrying: San Francisco Supervisors Vote To Allow Killer Robots. Relatedly, Boston Dynamics pledges not to weaponize its robots.

Not that robots need weapons to be dangerous, as this showed: chess robot breaks childs finger russia tournament. I mean who worries about a “chess robot”??

Robots can harm in other ways, as this story on training robots to be racist and sexist showed.

Ok, not all the robot stories were frightening. These three are more just of interest:

This was a good story on sewer pipe inspection that uses cable-tethered robots. I approve this use of robots, though there are some limitations.

I am not quite a fan of this development:  Your Next Airport Meal May Be Delivered By Robot. I can just see these getting in the way and making airports that much harder to get around.

Finally, here’s a  327 Square Foot Apartment With 5 Rooms Thanks to Robot Furniture. Robot furniture: what will they think of next?

(Image is of the sewer pipe inspection robot.)

 

2022 is done. Thoughts and rambling on the last 365 days (i.e. the December 2022 edition)

Another year over. A semi-pandemic year, in a sense. Covid is still with us, but we did not (so far) get slammed with a bad new variant like we did last year with Omicron. Instead the pandemic is lesser than it was, but greater than the flu in terms of the sickness and death it brings. We still get vaccinated, though less than before. Schools are attended (though  affected),  restaurants are dined in, parties and special events are attended.

You could say things look….normal. But then you can look towards China: they seem to be struggling to deal with COVID lately. Who knows what 2023 will bring? More normal or more like China?

But that’s for 2023. As for last year and what was trending, we can look to Google, which has all its data. One place that was trending a lot in 2022: China. China is struggling with both Covid and Xi’s approach to it, as this shows. As for the Chinese leader himself, it was a bad year for Xi, as well as Putin and other global bad guys, sez VOX. And it’s not just the Chinese residents that are having to deal with Xi and his government: Canada has been investigating Chinese police stations in Canada. More on that here. I expect China will also trend in 2023. Let’s hope for better reasons.

Other trending events in 2022? Crypto. There was lots of talk about it and people like Sam Bankman-Fried after the collapse of his crypto currency exchange and subsequent arrest. We had stories like this: How I turned $15 000 into $1.2m during the pandemic and then lost it all. Tragic. The overall collapse of the industry has led to things like bans on crypto mining. That’s good. It has led to questions around the fundamentals, like: Blockchains What Are They Good For? Last, to keep track of all the shenanigans, I recommend this site: Web3 is Going Just Great. I expect crypto to remain a shambles next year. Time and money will tell.

Elon Musk also managed to trend quite often due to his takeover of Twitter and more. He still has fans, but many are disillusioned. After all, his campaign to win back Twitter Advertisers isn’t going well. He was outright booed on stage with Dave Chapelle. (No doubt being a jerk contributed to this.) Tesla stock is tanking. Even his Starlink is losing money. What a year of failure. I can’t see his 2023 improving either. Hard to believe he was Time’s Man of the Year in 2021!

Because of Musk, people are looking to join other networks, like Mastodon. (BTW, here’s some help on How to Make a Mastodon Account and Join the Fediverse). Some are looking to old networks, like this: the case for returning to tumblr. Some are looking at new ways to socialize online, like this.

Musk was not alone in trending this year due to being a bad guy. Let’s not forget that Kanye West trended as well due to his freakish behavior and antisemitism.

AI was another big trend this year, with things like ChatGPT and stable diffusion (here’s how you can set it up on AWS). We also had stories like this: Madison Square Garden Uses Facial Recognition to Ban Its Owner’s Enemies. Not good. What’s next for AI?  This takes a look. I think we may get an AI winter, but we have 12 months to see if that holds true.

For what it’s worth, Newsletters like Matt Yglesias’s are still going strong, though levelling off I think.

Trends and developments aside, here are some other topics I found interesting and worth bringing up to close the year:

Assisted death was a grim topic in 2022 in Canada. I remain glued to stories like this: We’re all implicated in Michael Fraser’s decision to die, and this and this. It all seems like a failure, although this argues that assisted dying is working.

Here are two good pieces on homelessness: Did Billions in Spending Make a Dent in Homelessness? and ‘It’s a sin that we all had to leave’: Moving out of Meagher Park.

Need some advice for the new year? Try this: How Much and Where Are You Really Supposed to Tip? Consider this a good approach to  reading. Here’s a good approach to  slowing down, while here’s a good discussion on  Boundaries. Things to avoid:  the biggest wastes of time we regret when we get older.

Things I found interesting in sports this year:

Things I found interesting in general this year:

Finally, here’s some good advice to close out the year: Don’t Treat Your Life as a Project.

Thanks for reading this and anything else you read on this blog in 2022. I appreciate it. I managed to blog about roughly 3000 things on the internet this year. I hope you found some of them useful.

Happy New Year!

The rise and fall of Alexa and the possibility of a new A.I. winter

I recall reading this piece (What Would Alexa Do?) by Tim O’Reilly in 2016 and thinking, “wow, Alexa is really something!” Six years later we know what Alexa would do: Alexa would kick the bucket (according to this: Hey Alexa Are You There?). I confess I was surprised by its upcoming demise as much as I was surprised by its ascendance.

Since reading about the fall of Alexa, I’ve looked at the new AI in a different and harsher light. So while people like Kevin Roose can write about the brilliance and weirdness of ChatGPT in The New York Times, I cannot stop wondering about the fact that as ChatGPT hits one million users, its costs are eye-watering. (Someone mentioned a figure of $3M in cloud costs per day.) If that keeps up, ChatGPT may join Alexa.

So cost is one big problem the current AI has. Another is the ripping off of other people’s data. Yes, the new image generators by companies like OpenAI are cool, but they’re cool because they take art from human creators and use it as input. I guess it’s nice that some of these companies are now letting artists opt out, but it may already be too late for that.

Cost and theft are not the only problems. A third problem is garbage output. For example, this is an image generated by DALL-E, according to The Verge:

It’s garbage. DALL-E knows how to use visual elements of Vermeer without understanding anything about why Vermeer is great. As for ChatGPT, it easily turns into a bullshit generator, according to this good piece by Clive Thompson.

To summarize: bad input (stolen data), bad processing (expensive), bad output (bullshit and garbage). It all adds up, and not in a good way for the latest wunderkinds of AI.

But perhaps I am being too harsh. Perhaps these problems will be resolved. This piece leans in that direction. Perhaps Silicon Valley can make it work.

Or maybe we will have another AI Winter… If you mix a recession in with the other three problems I mentioned, plus the overall decline in the reputation of Silicon Valley, a second wintry period is a possibility. Speaking just for myself, I would not mind.

The last AI winter swept away so much unnecessary tech (remember LISP machines?) and freed up lots of smart people to go on to work on other technologies, such as networking. The result was tremendous increases in the use of networks, leading to the common acceptance and use of the Internet and the Web. We’d be lucky to have such a repeat.

Hey Alexa, what will be the outcome?

UGC (user generated content) is a sucker’s game. We should resolve to be less suckers in 2023

I started to think of UGC when I read that tweet last night.

We don’t talk about UGC much anymore. We take it for granted since it is so ubiquitous. Any time we use social media we are creating UGC. But it’s not limited to sites like Twitter or Instagram. Web sites like Behance and GitHub are also repositories of UGC. Even Google Docs and Spotify are ways for people to generate content (a spreadsheet is UGC for Google to mine, just like a playlist is).

When platforms came along for us to post our words and images, we embraced them. Even when we knew they were being exploited for advertising, many of us shrugged and accepted it as a deal: we get free platforms in exchange for our attention and content.

Recently, though, it’s gotten more exploitative. Companies like OpenAI and others are scraping all our UGC from the web and turning it into data sets. Facial recognition software is turning our selfies into ways to track us. Never mind all the listening devices we let into our houses (“Hey Google, are you recording all my comings and goings?”…probably).

Given that, we should resolve to be smarter about our UGC in 2023. Always consider what you are sharing, and find ways to limit it if you can. Indeed give yourself some boundaries so that when the next company comes along with vowel problems (looking at you, Trackt) and asks for our data, we say no thanks.

We can’t stop companies from taking advantage of the things we share. So let’s aim to share things wisely and in a limited way.

It’s Sunday. Here are nine pieces to mull over this afternoon.

Sure you can make yourself busy on this warm summer weekend. Or you can chill for a bit and read one of these thoughtful pieces. I know which one I am going to do. 🙂

  1. Here’s a piece on the joy of Latin. Really.
  2. 100% this: The Case for Killing the Trolley Problem
  3. Worthwhile: Piketty on equality.
  4. This is a weak piece that tries to link AI to colonialism but fails to make the case:  AI colonialism.
  5. Do you have siblings? Read this:  How Your Siblings Can Make You Happier.
  6. Worth chewing on: The limits of forgiveness.
  7. On one of our oldest technologies: the importance of wood.
  8. Dive into this list of common misconceptions.
  9. Finally, this piece on  Alexa with the voice of dead people will get you thinking.

Computer memory isn’t our memory and AI isn’t our intelligence


Since the beginning of the digital age, we have referred to quickly retrievable computer storage as “memory”. It has some resemblance to memory, but it has none of the complexity of our memories and how they work. If you talked to most people about this, I don’t think there would be many who would think they are the same.

Artificial Intelligence isn’t our Intelligence, regardless of how good it gets. AI is going to have some resemblance to our intelligence, but it has none of the complexity of our intelligence and how it works. Yet you can talk to many who think that over time they will become the same.

I was thinking about that last week after the kerfuffle from the Google engineer who exclaimed their software was sentient. Many many think pieces have been written about it; I think this one is the best I read from a lay person’s perspective. If you are concerned about it or simply intrigued, read that. It’s a bit harsh on Turing’s test, but I think overall it’s worth your time.

It is impressive what leaps information technology is making. But however much it resembles us as humans, it is not human. It will not have our memories. It will not have our intelligence. It will not have the qualities that make us human, any more than a scarecrow does.

Today in good robots: reforesting drones


I’m often critical of robots and their relatives here, but these particular drones seem very good indeed. As that linked article explains:

swarms of (these) seed-firing drones … are planting 40,000 trees a day to fight deforestation…(their) novel technology combines artificial intelligence with specially designed proprietary seed pods that can be fired into the ground from high in the sky. The firm claims that it performs 25 times faster and 80 percent cheaper compared to traditional seed-planting methodologies.

I am sure there is still a role for humans in reforestation, but the faster and cheaper it can be done, the better. A good use of technology.

You cannot learn anything from AI technology that makes moral judgements. Do this instead

Apparently…

Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.

What can I say? Well, for one thing, I am embarrassed for my profession that anyone takes that system seriously. It’s a joke. Anyone who has done any reading on ethics or morality can tell you very quickly that any moral decision of weight cannot be resolved with a formula. The Delphi system can’t make moral decisions. It’s like ELIZA: it could sound like a doctor but it couldn’t really help you with your mental health problem.

Too often people from IT blunder into a field, reduce the problems in it to something computational, produce a new system, and yell “Eureka!”. The lack of humility is embarrassing.

What IT people should do is spend time reading and thinking about ethics and morality. If they did, they’d be better off. If you are one of those people, go to fivebooks.com and search for “ethics” or “moral”. From those books you will learn something. You cannot learn anything from the Delphi system.

P.S. For more on that Delphi system, see: Can a Machine Learn Morality? – The New York Times.

(Photo by Gabriella Clare Marino on Unsplash )