On the rise of AI, and the decline of food blogging (and potentially all user generated content)

Food bloggers are seeing a drop in traffic as home cooks turn to AI, according to Bloomberg:

Across the internet, writers say their vetted recipes are hidden by the flood. Pinterest feeds are stuffed with AI-generated images of food that the attached instructions won’t achieve; Google’s AI Overviews surface error-filled cooking steps that siphon away clicks from professionals. Meanwhile, Facebook content farms push AI-generated images of supposedly delicious but impossible dishes to the top of people’s feeds, in an attempt to turn any clicks into ad revenue.

How bad can it be? To give just one example (from Fortune):

For Carrie Forrest, who runs Clean Eating Kitchen, AI has been devastating: 80% of her traffic — and her revenue — has disappeared in two years. Although the views started dropping when OpenAI’s ChatGPT was released, it wasn’t until Google launched AI Mode in search that her traffic collapsed, she said. Since then, she’s gone from employing about ten people to letting everyone go. “I’m going to have to find something else to do.”

My thoughts on this:

  • There were already several problems in the food blogging world, like too many people trying to make a go of it by writing about cooking, not to mention an existing plagiarism problem among food bloggers. Now there is a new plagiarism problem, with AI essentially copying recipes and presenting them as another source of cooking information.
  • AI plagiarism is not limited to food blogging. I’ve seen the same problem within the IT domain, with vapid sites posting “solutions” to technical problems that are a waste of time.
  • For people and organizations that generate content hoping to get people to read it, AI will be squeezing them from both sides. Human generated content will be squeezed out by AI generated content. And with things like Google Zero, search engines will use AI to summarize that content, so that the person searching for information will not take the next step of clicking through a link to another site.
  • Like the person who runs the site Clean Eating Kitchen, I suspect many smaller sites will dry up as a source of content. I think you will then have these aggregators (either Google or large AI generated content sites) plus other quality sites that can still compete, be they cooking.nytimes.com or marthastewart.com or even specialized sites like alisoneroman.com. This is not unlike travel or hotel sites that can still maintain a presence even though search engines like Google scrape their prices and present them.
  • Eventually, like all things in IT, there will be a continuing swing towards centralization, until the classic problems of centralization occur. Then you will have niche sites breaking out again and things will decentralize. Until then, the rise of AI both on the front end (Google) and the back end (AI generated sites) will force out mid-tier sites like Clean Eating Kitchen.
  • I am not immune to this problem. I see this site being scraped on the regular, presumably by AI. The good news for me is I write this site as much for me as for anyone else. I am happy if I can post things that help people. However, if some AI site presents the information and it helps people, I don’t care that they didn’t get it directly from me. I still benefit from writing these posts. If I were paying my bills with this site, it would be a problem. But this site is just a hobby for me.

(Image from Pixabay, which appears to be AI generated. Pixabay used to be a good source of imagery, but it has been overrun by AI generated images.  I found this by looking for “ai generated cooking” images.)

On my recent vibe coding experiences building a web site and a Spotify app using Copilot and Claude, October 2025. Here’s what I did and what I learned.

Computer and code

I recently took a vibe coding approach to try and do two things:

  1. build a simple web site / blog using Microsoft Copilot
  2. write an app to extract information from Spotify using Claude from Anthropic.

Why? For some time I had these two projects in mind:

  1. Go back to the days when a blog — web log — was nothing more than a simple html page.
  2. Programmatically build Spotify music playlists instead of building them in the Spotify app.

My main constraint was to do it quickly: it was meant to be fun. So I wouldn’t spend all weekend getting up to speed on APIs and HTML and CSS: I’d just see what I could do aided by A.I.

First up, to build the web site, I started with Microsoft’s A.I. Since I had some requirements for what I wanted the web log to look like, I gave them to Copilot and had it build me the one page blog web site. It helps to be clear on your requirements, but I found that I only needed a few of them to start with. As I went along, new requirements would come to me (e.g. the ability to add photos from the Internet) and I would tell Copilot to incorporate them and give me a new version of the site. My experience vibe coding is that there is a lot of back and forth in order to be effective. As well, there were things that I could just do by hand, like adding a background tile and changing the picture on the header, so eventually I bailed on using Copilot and finished it by hand. You can see the result here. It’s just what I wanted.

What made things better was that I asked Copilot to write me a python program which allows me to easily add to the blog and then push it to AWS. That was a great addition. Now I can just enter a line on the command line and the blog gets updated.
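To give a sense of what that helper does, here is a minimal sketch of the idea. The specifics here (a single index.html with a marker comment, an S3 bucket as the host, boto3 for the upload, the bucket name) are assumptions for the sake of illustration; my actual script is different, but the shape is the same.

# Minimal sketch: prepend a new entry to a one-page blog and push it to AWS S3.
# The bucket name, file name and HTML marker below are illustrative assumptions.
import sys
import boto3

MARKER = "<!-- entries -->"   # spot in index.html where new entries get inserted

def add_entry(text, page="index.html"):
    # read the page, drop the new entry right after the marker, write it back
    with open(page) as f:
        html = f.read()
    entry = f"<p>{text}</p>"
    with open(page, "w") as f:
        f.write(html.replace(MARKER, MARKER + "\n" + entry, 1))

def push(page="index.html", bucket="my-blog-bucket"):
    # upload the updated page to the S3 bucket that serves the site
    s3 = boto3.client("s3")
    s3.upload_file(page, bucket, page, ExtraArgs={"ContentType": "text/html"})

if __name__ == "__main__":
    add_entry(" ".join(sys.argv[1:]))   # e.g. python post.py Today I fixed the header
    push()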

All in all a very successful project.

As for my second project with Spotify, I switched from Microsoft to Anthropic. At first Claude produced great code: I asked it to build me a UI that let me type in the names of three songs, then use Spotify to build a playlist around those songs, and lo and behold it did. Things went downhill from there. Much of the code, while long, had numerous errors. I would provide the errors to Claude and it would correct things. The code did get better, but after 30 versions, it was time to call it quits. Instead I took small chunks of the code and, using VS Code, manually tried to determine why it was not working. I was ultimately able to nail it down to one Spotify API call. And why wasn’t it working? Because Spotify disabled access to it in 2024. Did Claude know that? I don’t know. It certainly didn’t act like it.
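To give a sense of what the working part of that project boils down to, here is a minimal sketch. I am using the spotipy library purely for illustration (the code Claude generated was structured differently and was much longer), and the song names and playlist name are placeholders. The sketch sticks to search and playlist creation and stays away from the kind of endpoint Spotify restricted in 2024, which is what tripped up my version.

# Minimal sketch: take three song names, look them up on Spotify, build a playlist.
# Uses the spotipy library; it reads the client ID, secret and redirect URI from
# the usual SPOTIPY_* environment variables. Song names below are placeholders.
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-modify-private"))

songs = ["Harvest Moon", "Heart of Gold", "Old Man"]   # example inputs
track_ids = []
for name in songs:
    hits = sp.search(q=name, type="track", limit=1)["tracks"]["items"]
    if hits:
        track_ids.append(hits[0]["id"])

me = sp.current_user()["id"]
playlist = sp.user_playlist_create(me, "Built from three songs", public=False)
sp.playlist_add_items(playlist["id"], track_ids)
print("Created playlist:", playlist["external_urls"]["spotify"])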

All in all a very unsuccessful project.

What did I learn from this? What would I recommend to you?

  • I have been most successful vibe coding when I get the AI to code in small chunks. Give it small requirements and see if it can successfully do them. Then build up the chunks. That was the case with Copilot. With Claude I took the big bang approach, and then spent lots of time debugging. Eventually, to discover the problem, I went to the small chunk approach manually.
  • A.I. is great for grunt level coding. Writing python code to loop through input and extract data under complicated conditions is something I hate to do (there’s a sketch of what I mean after this list). A.I. does it better and quicker than me. Again, it’s like using a mixer in the kitchen instead of your arm. It’s impressive to do things with your arm, but the mixer is fine.
  • A.I. is great for fussy coding. One thing I like about coding HTML and CSS using A.I. is that I do not have to remember how to get the divs done or which CSS I need for certain colors, etc. I just tell the A.I. and it does it.
  • A.I. has replaced my templates. I used to have a fair amount of code templates, and when I would start a project, I would get out a good template. When I didn’t have a template, I would often times spend hours going through old code trying to find an example to use as a template. Now I just get A.I. to give it to me.
  • Know when to bail on using A.I. and start doing the work yourself. I think of A.I. as a power tool: it lets you do things fast, but for the detail work, you need to get in there with the hand tools and do things manually.
  • Make LOTS of backups. Backup your prompts too if you can. I have gone down a certain path in vibe coding, forgotten to do a backup, and it’s been a mess. As well, at times the A.I. will start to produce bad code. If you version control things, you can go back to a copy from an hour ago that did work and start again.
  • Most LLMs do a pretty good job of coding. I’d recommend Copilot because it is easy: it’s integrated into my Microsoft tools. The results from Claude were good too. I suspect as things advance, the code that comes out of all of them will get better and better.
  • I am not afraid of forgetting how to program in python or html any more than I was afraid of forgetting how to program in assembler when I moved on to C. Or using SQL to work with data rather than hand coding PL/1 code to do things. Or using Java and JDBC. The goal for me is to get my results via some form of coding, and if I can achieve that faster with a higher level of code combined with greater abstraction, I am fine with that.
  • The better you already are at coding, the better your chances of success. I have never had A.I. completely build my code. I get to 80-90%, then do the rest by hand. I am fine with that: I literally save hours every time I do this vs my old approach of using templates and old source code. If you have to depend on A.I. to do 100% of the coding, I suspect you will have more chances of failure. Also, if the code runs successfully with some inputs but fails with other inputs, having debugging skills will make a difference.
  • YMMV. These are my experiences based on these projects. Your experience using A.I. to assist with coding your project may be wildly different than mine. I do hope you have a good/better experience.
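To make the “grunt level coding” point above concrete, here is the kind of small extraction chore I mean. It is a made-up example (the file name, columns and rules are all invented for illustration), but it is exactly the sort of thing I now hand to the A.I. instead of writing from scratch.

# Made-up example of "grunt level" extraction: total the amounts in a CSV of
# orders, but only for paid orders from 2024 onward, skipping malformed rows.
# The file name and field names are invented for illustration.
import csv
from datetime import datetime

def total_paid(path="orders.csv", since=datetime(2024, 1, 1)):
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                when = datetime.strptime(row["date"], "%Y-%m-%d")
                amount = float(row["amount"])
            except (KeyError, TypeError, ValueError):
                continue                      # skip blank or malformed rows
            if row.get("status", "").lower() == "paid" and when >= since:
                total += amount
    return total

if __name__ == "__main__":
    print(f"Total paid since 2024: {total_paid():.2f}")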

Thanks for reading this. I hope it helps.

P.S. When I talk of vibe coding, I am using the definition used by my current employer. The opinions expressed above are mine only and not necessarily those of my employer.

 

How generative AI works (it’s not designed, at least the way you think)

How generative AI works is likely not how you think it works.

What led me to state this were two discussions I’ve had this week about the “design” of gen AI. I think the “design” conclusion that people come up with is based on emergent behaviours of the system. You can shape this behaviour in a number of ways, based on the data you feed the system or the ways you configure the software being trained. However, at best you are influencing the behaviour of the system rather than designing it.

In some ways it’s like taking a bucket of seeds and tossing them around a large area of a field. If you took only seeds of one or two flowers and distributed the seeds so that only these one or two flowers grew there, you could say you designed the garden to grow those flowers. Likewise, if you divided up the land into rows and planted one type of seed in each row, you could say you designed the garden. However, if you have a wide range of seeds in your bucket and you don’t target the planting but just toss them onto the ground, it can no longer be considered designed.

That’s why I think gen AI is not really designed. It’s a lot like a big bucket of random seeds not planted in any order. What design you see there is likely in how you look at it.

P.S. If you want to explore more on how gen AI works, see this. For a great example of how a gen AI system is built from the ground up, see this.

 

Some thoughts on using chatGPT to write a program to determine which foods are fresh in Ontario

It is easy to find out which foods are fresh in Ontario. There are several guides, including this one from Foodland Ontario, that can help you with this. However, I wanted a particular guide that would list for me all the fresh foods for a given month only. And since I couldn’t find a guide like that, I decided to write a python program to make such a guide.

In the past, to write a program like that, I would go through sample code I have, pull out bits of code that were good, and cobble together something from all these bits. Lately, though, I simply ask a service like ChatGPT or others to write the code for me. I find nowadays it’s just so much faster to go that route. Call me lazy. 🙂

Since I wanted this done quickly, I pointed chatGPT at the Foodland Ontario guide and asked it to do the following:

Write a python program that strips out the text on this page https://www.ontario.ca/foodland/page/availability-guide?gad_campaignid=22412095602
and leaves just the foods and the month they are grown on. Include all food that states that is Year Round.

Did ChatGPT do that? Yes, it did. Was the program any good? No, it was not! It somehow looked at that web page and decided the values were stored in a table, even though they were not. The web page is more complex than that, and so the program was pretty much a failure.

After many prompts, I gave up and took an alternative approach. For this new approach, I stripped out the data manually and created a simple CSV file. I then asked ChatGPT to write a program to process the CSV file. Since it is a simpler file, ChatGPT was able to produce a workable Python program that was able to process the CSV file and output the information I needed.
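For anyone curious, the heart of that kind of program is small. Here is a minimal sketch of the approach; the CSV file name and layout shown are simplified stand-ins for illustration, not the exact files you will find in my repo.

# Minimal sketch: list Ontario foods that are fresh in a given month, from a
# simple CSV. Assumes rows like "Asparagus,May;June" or "Carrots,Year Round".
# The file name and layout are illustrative, not the exact ones in my repo.
import csv
import sys

def fresh_in(month, path="ontario_foods.csv"):
    fresh = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 2:
                continue                       # skip blank or malformed rows
            food, months = row[0], row[1]
            if months.strip().lower() == "year round" or month.lower() in months.lower():
                fresh.append(food)
    return fresh

if __name__ == "__main__":
    month = sys.argv[1] if len(sys.argv) > 1 else "May"
    for food in fresh_in(month):
        print(food)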

Perhaps a more skilled prompt engineer could have written a better prompt to get working code for that page. I dunno. I am finding that LLMs — not just ChatGPT — are fine with writing some straightforward code based on non-complex inputs and outputs. They are not so fine once the inputs or outputs get complex. But that’s just my experience. YMMV.

I have also concluded that even warmer months like May in Ontario do not have much in the way of fresh food. No wonder there are so many food stories on asparagus and rhubarb! 🙂 You really need to hit June or later before you get into a cornucopia of fresh produce.

If you’d like to see the end result of my coding, go here to this repo: https://github.com/blm849/ontfoods

 

On how generative AI is an accelerant and how it compares to PCs and the Web

 

People have many perspectives on generative AI. On Bluesky in particular, it’s perceived negatively. They see it as a huge drain on the environment. They see the people who develop it as IP thieves. They see it as taking away jobs.

For people who think this is the only way generative AI can be, I’d like to point them to the work my employer is doing with AI and the AI ethical guidelines they’ve published here.

Generative AI can be seen in a positive way. My opinion (not speaking for my employer) is that as the tools that sit in front of gen AI get better and the models that underlie gen AI improve, we all will use it every day, in the same way we use search engines and spreadsheets every day.

I’d add that gen AI technology can be considered an accelerant. In any given social order, some participants will choose to adopt an accelerant and disrupt that order by speeding past others. It could be high skilled or low skilled participants. Those who value the current order and their place in it will try to prevent that from happening but likely will fail. This happened with previous accelerants like personal computers and the Web. People who were invested in the order before PCs and the Web were disrupted by those who adopted and exploited the capabilities of the accelerants. (Not all accelerants are technological: literacy, voting rights and access to financial services are also accelerants. I just feel more confident talking about comp sci vs poli sci.) I think this will be true for generative AI. Back in the 80s I thought that individuals and companies that invested in personal computers would leapfrog individuals and companies that ignored PCs. That turned out to be true, just as it was true for individuals and companies that embraced the Web. I think the same will hold for generative AI.

So don’t be like Linda: learn more about gen AI and do not confuse it with A1 sauce. 🙂 If it can help, I wrote a guide on it recently that could be worth your while to check out.

P.S. For anyone wondering, this post is my own and doesn’t necessarily represent IBM’s positions, strategies or opinions. For more on that, see IBM’s social media policy, which as an employee I follow.

P.P.S. I think if you are going to be speaking on AI as the Secretary of Education, you should at least know how to say it.

A guide to generative AI and LLM (large language models), February 2025


I decided to go through all my posts on AI and pull out information that would be useful to anyone wanting to learn more about generative AI (often referred to as gen AI or genAI) and the LLMs they run on. If you have used ChatGPT, you have used gen AI. But there’s much more to the technology than what you find on that site. To see what I mean, click on any of the blue underlined text and you will be taken to a site talking about something to do with gen AI.

Enjoy!

Tutorials/Introductions: for people just getting started with gen AI, I found these links useful: how generative AI works, what is generative AI, how LLMs work, sentence word embeddings (which kinda shows how LLMs work), best practices for prompt engineering with the OpenAI API, a beginner’s guide to tokens, a ChatGPT cheat sheet, demystifying tokens: a beginner’s guide to understanding AI building blocks, what are tokens and how to count them, how to build an LLM RAG pipeline with Llama 2, pgvector and LlamaIndex, and finally this: azure search openai demo. (Some of these are introductory for technical people – don’t worry if you don’t understand all of them.)

For people who are comfortable with GitHub, this is a really good repo / course on generative AI for beginners (and check out these other repositories here, too). This here is on the importance of responsible AI. And here’s a step by step guide to using generative AI in your business, here.

Prompts and Prompt Engineering: if you want some guidance on how best to write prompts as you work with gen AI, I recommend this, this, this, this, this, this, this, and this.

Finally: Here are the Associated Press AI guidelines for journalists. Here’s a piece on how the Globe and Mail is using AI in the newsroom. Here’s a how-to on using AI for photo editing. Also, here’s some advice on writing better ChatGPT prompts. Here’s how Kevin Kelly is using AI as an intern, as told to Austin Kleon. And a good guide on how to use AI to do practical stuff.

Note: AI (artificial intelligence) is a big field incorporating everything from vision recognition to game playing to machine learning and more. Generative AI is a part of that field. However nowadays when we talk of AI people usually mean gen AI. A few years ago it was machine learning and before that it was expert systems. Just something to keep in mind as you learn more about AI and gen AI in particular.

 

Winter and flowers: two words that should go together

Sure, gardens are great when the weather is warm, but what about when it is not? Well, then you want some winter flowers. Indeed, here are 25 Winter Flowers: Cold-Loving Plants to Plant in Winter. I was amazed how many types there are. After seeing them, you may want to order some seeds and plant them too. Like those Iceland poppies, shown above.

P.S. For more on cold weather gardening, see: How to garden in the winter | Smart People I Know

(Image from Eden Brothers. They sell those poppy seeds and much more)

 

 

 

 

From RAG to riches (What I find interesting in AI, Dec. 2024)


Here’s my latest post on AI with 60+ worthwhile links found since my last post on it in June. It comes in three sections: the first section is about AI technology, the middle section is on AI in the news and the oped pages, and at the end there’s a good but odd bunch worth a look. Enjoy!

Technology – LLMs:

Technology – Ollama / Flowise:

Technology – RAG:

Technology – watsonx.ai:

AI in the news / oped sections:

Last but not least:

Forget ChatGPT. Now you can build your own large language model (LLM) from scratch

Yep, it’s true. If you have some technical skill, you can download this repo from github: rasbt/LLMs-from-scratch: Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step and build your own LLM.

What I like about this is that it demystifies LLMs. LLMs aren’t magic, they aren’t Skynet and they’re not some sentient being. They’re software. That’s all.

So ignore all the hype and handwaving about LLMs and go make your own.

Prefer to read it in dead tree form? You can get the book here.

Philippe Starck reaches for a new design tool: A.I.


According to the site, Design-Milk, Philippe Starck is working with Kartell to use A.I. in the design of furniture:

Known as a progressive Italian brand that doesn’t shy away from experimentation and innovation, Kartell is becoming increasingly daring as their endeavors expand. At the moment, that means stepping into the world of artificial intelligence. In collaboration with Philippe Starck, they’ve released environmentally friendly furniture designs that merge human creativity with AI, bolstered by advancements in materials and manufacturing.

In case you are worried about them replacing designers, the site goes on to say:

Rather than replace the designer – a huge fear amongst creatives – artificial intelligence helps streamline the prototyping and planning phases, sticking to Starck’s vision and ensuring Kartell’s high quality and production standards are met. In this case, AI also contributes to calculating optimal material usage, a reduction in waste, and ultimately environmental sustainability.

Good to see! Speaking of good to see, you can see more of the furniture on the Design-Milk site, as well as the Kartell site.

Instacart recipes have a secret ingredient: AI. What cooks should know.

One convenient feature of Instacart is the listing of “Related recipes” it will provide you if you are searching for a product or ingredient. For instance, I was searching for “biscuits” and one of the recipes that appeared was no-milk biscuits. Hmmmm, that could be good and simple, I thought. Seemed reasonable too. Then I scrolled down to the bottom of the recipe and came across the text: this recipe is powered by the magic of AI.

First off, let me say: AI is not magic. Second, this COULD be a good recipe. Or it could be something where AI hallucinated ingredients or quantities. For this recipe, it is somewhat close to this recipe for dairy free biscuits at allRecipes.com, but it is missing some ingredients and the quantities of some of the ingredients are different. I searched other recipes at other sites, and while it is close to them, it seems…lacking…in comparison.

Simply put: you would be better off getting recipes from either reputable cookbooks or websites where recipes are tested and confirmed. It is convenient to use the recipes in places like Instacart, because you can easily add the ingredients of the recipe to your shopping cart, but the results from the recipes generated by AI could leave you very disappointed. Especially when it comes to baking!

P.S. Not to pick just on Instacart: I suspect we are going to see more and more of this type of AI generated advice on websites. Keep an eye out for it.

Using AI in art making, from David Salle to Kevin Kelly (and how Walter Benjamin can help)


Using technology to make art is not new. Gerhard Richter used special software to make the work you see above (4900 colours). Before computers, artists would use lenses and photographs and even craftsmen and women to help them create their final artwork.

What is new now is artists (and non-artists) are using AI to make art. Kevin Kelly talks about how he is using AI in his creative process. David Salle has dived deep into making new work using AI. NYT columnist Farhad Manjoo is using visual tools like Procreate to make AI art.

I have seen Kelly’s work, and Manjoo and Salle’s work are on display in their articles. Kelly experiments with AI to produce images in various styles. Perhaps he has changed, but there is no artist in his work that I can see. With Manjoo, you can see more of him in his drawings. And with Salle the artist’s presence comes in as an editor of the works the AI produces out of his original pieces.

In trying to assess these AI generated works, I think Walter Benjamin and his idea of an artwork having an aura can be useful here. Benjamin was thinking about how original art works have an aura that reproduced images of it do not have. I agree with that: no matter how good a reproduction of a work is, it rarely compares to the original work. There’s that something extra in the original.

I think we can extend out the idea of a work having an aura and also having a humanity. What does a work say about the person who created it? What do I recognize in it that is human and unique to that person? What ideas are there that could only come from that person in that time?

You can argue back that this is irrelevant and that AI generated images are interesting and beautiful and furthermore I cannot distinguish them from human generated images. That might be true. Maybe galleries will be filled with images and sculpture with no human involvement whatsoever, not unlike deep learning software that comes up with ways to be best at playing games like Chess and Go. Such AI artwork may be interesting and even beautiful and may seem to have that aura Benjamin talks about. They just won’t be associated with a human.

Even minimal and conceptual art has a humanity associated with it. Duchamp’s Fountain embodies Duchamp’s intelligence and wit and contrariness.  Arp’s “According to the Laws of Chance” likewise shows his interest in pushing the bounds of what is acceptable in a composition of an abstract work. A person is responsible for the work and the work is tied to them. A person is what makes the work relevant to us in a way that a wall covered with children’s collages or a shelf of toilets in a hardware store are not.

We need a new aesthetic philosophy to deal with the firehose of AI art that is coming our way. I propose we tie the art back to our humanity.

P.S. For more on Richter’s 4900 colours, you can see it here on his web site. There’s also a great view of 4900 colours, here.

 

AI scales up and out. Here are some pieces that show that.


While there are still prophets and pundits arguing doom and gloom regarding AI, most people and organizations have moved past them and have been adopting the technology widely. Sometimes that has been good, sometimes not. To get a sample of how it’s going, here are a few dozen pieces on AI worth a look:

  1. The WSJ argues that you soon won’t be able to avoid AI at work or at home. It’s true, but so what?
  2. AI is being used to deal with the threat of wildfires. Good. Also good: AI allows farmers to monitor crops in real time. More good AI:  AI used to find antibodies. By the way, here’s a piece on how to turn chatgpt into a chemistry assistant.
  3. A worthwhile piece on AI lawsuits that are coming due to intellectual property rights.
  4. The New York Times has stopped OpenAI from crawling its site. More on that, here.
  5. Here’s the associated press AI guidelines for journalists.
  6. Students and writers, bookmark this in case you need it: what to do when you’re accused of writing with AI.
  7. Also, what can you do when AI lies about you?
  8. This is dumb: AI builds software under 7 minutes for less than a dollar.
  9. It’s not surprising hackers found lots of security holes in AI.
  10. Take this with a big grain of salt…one of the leaders from Palantir wonders if AI should be compared to atomic energy.
  11. This is bad: how facial recognition tech resulted in a false arrest.
  12. This is not good: a story on using AI to generate books and other junk here.
  13. This is different:  Microsoft Teams is pairing up with Maybelline to provide AI generated beauty filters / virtual makeup.
  14. It’s not news but it is interesting that NVIDIA is a hot company now due to AI. See more about that, here.
  15. Maybe chatgpt and other AI will just be a tool to do inefficient things efficiently.
  16. A thoughtful piece on powering generative AI and large language models with hybrid cloud with a surprise ending, from one of the senior leaders in my group at IBM.

(Photo: link to image in the NVIDIA story. By Philip Cheung for The New York Times)

AI and the shift from opinion to effect


Here are some things I’ve been clipping out and saving concerning AI. The pattern I see emerging in my clippings is one where I am less interested in opinion on AI and more interested in the effects of AI on the world. There’s still some good think pieces on AI — I put some here — but the use of AI is accelerating in the world and we need to better understand the outcomes of that.

AI Think Pieces: For people who want to be really afraid of AI, I recommend this Guardian piece on  unknown killer robots and AI and…. well you read and decide. On the flip side of that, here’s a good piece critical of  AI alarmism.

Bill Gates chimes in on how  the risks of AI are real but manageable. My friend Clive Thompson discusses a risk of a different sort regarding AI, and that is the possibility of AI model collapse.

The mystery of how AI actually works is delved into at Vox. To me it is one of the potentially big problems AI will have in the future.

Practical AI: here’s a piece on how the Globe and Mail is using AI in the newsroom. Also practical: How AI is working in the world of watches. I loved this story of how AI is being used to translate cuneiform. AI is also being used to deal with forest fires.

AI effects: This piece is on how AI’s large language models are having a big effect on the Web as we know it. To mitigate things, the Grammys have outlined new rules for AI use.

When it comes to writing, I think the “Five Books” series is great. They will ask an expert in an area to recommend five books in their field that people should read. So I guess it makes sense that for books on artificial intelligence, they asked….ChatGPT. It’s well worth a read.

Not all articles written by/with AI turn out great. Ask the folks at Gizmodo.

Speaking of AI and books,  these authors have filed a lawsuit against OpenAI for unlawfully ingesting their books. Could be interesting. To add to that, the New York Times reports that “fed up with A.I. companies consuming online content without consent, fan fiction writers, actors, social media companies and news organizations are among those rebelling.”

On the topic of pushback,  China is setting out new rules concerning generative AI with an emphasis on “healthy” content and adherence to socialist values.

Asia is not a monolith, of course. Other parts of Asia have been less than keen on the EU’s AI lobbying blitz. Indeed, India’s Infosys just signed a five year AI deal with a 2bln target spend, and I expect lots of other Indian companies will be doing something similar regarding AI. Those companies have lots of smart and capable IT people, and when companies like Meta open their AI model for commercial use and throw the nascent market into flux, well, that is going to create more opportunities.

Finally, I suspect there is a lot of this going around: My Boss Won’t stop using ChatGPT.

 

 

AI AI AI AI: here’s some good, bad and scary stuff on AI


I am glad that Apple released a new device last week. It was a refreshing change from what most IT discussions have been about recently. And what topic is most discussed? AI, of course.

And for good reason! There’s lots and lots happening in this space. New AI technology is coming out. New uses for AI are being developed. It’s an exciting space. Like many, I am having a hard time keeping up with it all. But try and keep up I must. And as I do, I have found some interesting links for me (and you) to read:

Clive Thompson has a grim take on the boring apocalypse of today’s AI 

Also grim is this story in WiReD about  tessa, the eating disorder chatbot, and why it had to be suspended. Don’t leave your AI unattended!

Grimly funny: what happens when a lawyer misuses ChatGPT? Hijinks ensue!

Not grim, but clever:  A Vienna museum turned to AI and cats — yes AI and cats — to lure visitors.

Also in WiReD is this thoughtful piece on how  non english languages are being left out of the AI revolution, at least for now. I see this changing really fast.

A good New York Times piece on how training chatbots on smaller language datasets could make them better.

Fascinating to see how much AI is listed in Zapier’s app tips here.

Also fascinating: Google didn’t talk about any of their old AI while discussing their new AI during their I/O 2023 event recently. I wonder why. I wonder if they’re missing an opportunity.

AI junk: Spotify has reportedly removed tens of thousands of ai generated songs. Also junk, in a way: AI interior design. Still more garbage AI uses, this time in the form of  spam books written using ChatGPT.

This seems like an interesting technology:  liquid neural networks.

What is falcon 40b? Only “the best open-source model currently available. Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, etc. ” Worth a visit.

Here’s a how-to on using AI for photo editing. Also, here’s some advice on writing better ChatGPT prompts.

This is a good use of AI: accurately diagnosing tomato leaf diseases.

For those that care: deep learning pioneer Geoffrey Hinton quit Google.

Meanwhile Sam Altman is urging the US congress to regulate AI. In the same time period, he threatens to withdraw from Europe if there is too much regulation, only to back down. It seems like he is playing people here. Writers like Naomi Klein are rightly critical. Related is this piece: Inside the fight to reclaim AI from Big Tech’s control | MIT Technology Review.

Here’s another breathless piece on the AI startup scene in San Francisco. Yawn. Here’s a piece on a new startup with a new AI called Character.ai that lets you talk to famous people. I guess….

Here’s some things my company is doing with AI: Watsonx. But also: IBM to pause hiring for back office jobs that ai could kill. Let’s see about that.

Finally, this story from BlogTO on how Josh Shiaman, a senior feature producer at TSN, set out to create a Jays ad using text-to-video AI generation, admitting that the results “did not go well.” “Did not go well” is an understatement! It’s the stuff of nightmares! 🙂 Go here and see.

In some ways, maybe that video is a good metaphor for AI: starts off dreamy and then turns horrific.

Or maybe not.

If you want to get a better understanding of generative AI, it pays to see what the New York Times and Bloomberg are up to

One of the problems with generative AI like ChatGPT is it makes you think it is magical. You type in some prompt, it comes back with a complex answer, the next thing you know, you are thinking this thing is smarter than a human. It doesn’t help that there is so much hype surrounding the technology. All of this can make you think it’s supernatural.

Well, it isn’t. It’s simply good IT. It consists of data, hardware and software, just like any other IT. To get a better appreciation of the ordinary nature of that, it helps to look at two recent examples: the AI the New York Times recently built and the AI Bloomberg just built.

It’s best to start with what the Times built. They used software called nanoGPT (karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs) and took the works of Jane Austen, Shakespeare and more to build a ChatGPT-like program on their laptops. Then they walked through the steps of getting it working, here: Let Us Show You How GPT Works — Using Jane Austen – The New York Times. It works pretty well after much, much training. Obviously it is not as massive or sophisticated as ChatGPT, but after reading the article, you will have a better sense of how this technology works, and why it’s impressive but not magical.
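If you want a feel for what is going on under the hood without working through the whole article, here is a toy character-level language model in PyTorch. To be clear, this is not the Times’ nanoGPT setup; it is just the smallest version of the same train-on-text, then sample-text loop, with a single Austen sentence standing in for a real corpus.

# Toy character-level language model: train on a bit of text, then sample from it.
# Not nanoGPT -- a bigram model, the simplest possible illustration of the loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

text = ("It is a truth universally acknowledged, that a single man in possession "
        "of a good fortune, must be in want of a wife.")   # stand-in corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class BigramLM(nn.Module):
    # predicts the next character from the current one
    def __init__(self, vocab_size):
        super().__init__()
        self.table = nn.Embedding(vocab_size, vocab_size)
    def forward(self, idx):
        return self.table(idx)          # (batch, time, vocab) logits

model = BigramLM(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
block, batch = 8, 16

for step in range(500):                 # tiny training loop
    ix = torch.randint(len(data) - block - 1, (batch,))
    x = torch.stack([data[i:i + block] for i in ix])
    y = torch.stack([data[i + 1:i + block + 1] for i in ix])
    logits = model(x)
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample: start from the first character and repeatedly pick a likely next one.
idx = torch.tensor([[stoi[text[0]]]])
for _ in range(100):
    probs = F.softmax(model(idx)[:, -1, :], dim=-1)
    idx = torch.cat([idx, torch.multinomial(probs, 1)], dim=1)
print("".join(itos[int(i)] for i in idx[0]))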

After that, I recommend reading more about BloombergGPT. Their press release states:

Bloomberg today released a research paper detailing the development of BloombergGPT, a new large-scale generative artificial intelligence (AI) model. This large language model (LLM) has been specifically trained on a wide range of financial data to support a diverse set of natural language processing (NLP) tasks within the financial industry.

You can find a link to that research paper, here:  BloombergGPT: A Large Language Model for Finance. What I liked about that paper is it walks through the approach they took, the data they used, and the technology deployed to make their model. Even better, they talk about how it is currently succeeding and what some of the limits of it are.

I’m happy that both these companies have been good about sharing what they are doing with this technology. I might even try and use an old laptop to build my own AI. I mean who wouldn’t benefit from tapping into the genius of Shakespeare or Jane Austen.

For more on what Bloomberg is doing, see this: Bloomberg plans to integrate GPT-style A.I. into its terminal

 

A plethora of good links on AI

There’s still an overwhelming amount of material being written on AI. Here’s a few lists of some of the ones I found most interesting:

ChatGPT: ChatGPT (3 and 4) still dominate much of the discussion I see around AI. For instance:

Using AI: people are trying to use AI for practical purposes, as those last few links showed. Here’s some more examples:

AI and imagery: not all AI is about text. There’s quite a lot going on in the visual space too. Here’s a taste:

AI and the problems it causes: there’s lots of risks with any new technology, and AI is no exception. Cases in point:

Last but not least: 

The profiles (beat-sweeteners?) of Sam Altman

Oddly (not oddly at all?) both the New York Times and the Wall Street Journal had profiles of  Sam Altman at the end of March:

  1. Sam Altman, the ChatGPT King, Is Pretty Sure It’s All Going to Be OK – The New York Times
  2. The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT – WSJ

Given the contentious nature of AI and ChatGPT, you might think that those pieces would have asked tough questions of Altman concerning AI. Especially since Lesley Stahl did something similar with execs of Microsoft a few weeks earlier. Perhaps the work of Stahl is why Microsoft / OpenAI wanted Altman to get out there with his story. If that was the intent, then it seemed to work. Nothing too tough in either of these profiles.

Then again, perhaps they were written as beat-sweeteners. After all, getting access is just as important for tech journalists as it is for political journalists. If you want to write more about AI in the future, being able to ring up Altman and his gang and get through to them for a comment seems like something you might want for your job. No doubt profiles like that can help with that access.

For more on the topic of beat-sweeteners, I give you this: Slate’s Beat-Sweetener Reader in Columbia Journalism Review.

 

 

 

The Gartner Hype Cycle: one good way to think about technological hype

Below is the Gartner hype cycle curve with its famous five phases:

For those not familiar with it, the chart below breaks it down further and helps you see it in action. Let’s examine that.

Chances are if you are not working with emerging IT and you start hearing about a hyped technology (e.g., categories like blockchain, AI), it is in the phase: Peak of Inflated Expectations. At that stage the technology starts going from discussions in places like Silicon Valley to write ups in the New York Times. It’s also in that phase that two other things happen: “Activity beyond early adopters” and “Negative press begins”.

That’s where AI — specifically generative AI — is: lots of write ups have occurred, people are playing around with it, and now the negative press occurs.

After that phase technologies like AI start to slide down into my favorite phase of the curve: the Trough of Disillusionment. It’s the place where technology goes to die. It’s the place where technology tries to cross the chasm and fails.

See that gap on the Technology Adoption Lifecycle curve? If technology can get past that gap (“The Chasm”) and get adopted by more and more people, then it will move on through the Gartner hype curve, up the Slope of Enlightenment and onto the Plateau of Productivity. As that happens, there is less talking and more doing when it comes to the tech.

That said, my belief is that most technology dies in the Trough. Most technology does not and cannot cross the chasm. Case in point, blockchain. Look at the hype curve for blockchain in 2019:

At the time people were imagining blockchain everywhere: in gaming, in government, in supply chain…you name it. Now some of that has moved on to the end of the hype cycle, but most of it is going to die in the Trough.

The Gartner Hype Curve is a useful way to assess technology that is being talked about, as is the Technology Adoption Curve. Another good way of thinking about hype can be found in this piece I wrote here. In that piece I show there are five levels of hype: Marketing Claims, Exaggerated Returns, Utopian Futures, Magical Thinking, and Othering. For companies like Microsoft talking about AI, the hype levels are at the level of Exaggerated Returns. For people writing think pieces on AI, the hype levels go from Utopian Futures to Othering.

In the end, however you assess it, it’s all just hype. When a technology comes out, assess it for yourself as best as you can. Take anything being said and assign it a level of hype from 1-5. If you are trying to figure out if something will eventually be adopted, use the curves above.

Good luck!

More reasons why ChatGPT is not going to replace coders

I have been arguing recently about the limits of the current AI and why it is not going to take over the job of coding yet. I am not alone in this regard. Clive Thompson, who knows a lot about the topic, recently wrote this: Why ChatGPT Won’t Replace Coders Just Yet. Among other reasons, the “‘bullshit’ problem turns up in code, too”. I recommend you read Clive’s take on the subject. And after you read that, check out his book, “Coders”. You can order it, here, from his publisher. I think it’s a classic and one of the best things written on software.

I asked ChatGPT to do my job and write some code to build out some cloud infrastructure. How did it go?

For fun I decided to see if I am ready to be replaced with AI via ChatGPT. Part of my work is building cloud based infrastructure using a language called HCL from Hashicorp and a tool called Terraform. You write one or more program files using HCL, run it through Terraform, and voila, you have the infrastructure you need.

For this exercise I entered the following prompt:

Create a terraform file for IBM cloud that will provision a VPC, set up networking for the VPC, and provision a virtual server for VPC. It will have 1 security group and a rule for this security group to allow SSH connection to your virtual server instance, 1 subnet to enable networking in your VPC, 1 VPC virtual server instance and 1 floating IP address that you use to access your VPC virtual server instance over the public network.

A good prompt, I thought. Nothing vague or conflicting. Practically a form of code itself.

I was shocked when in a few seconds I got this response:

To provision a VPC, networking, and a virtual server with a security group and a floating IP address on IBM Cloud, you can use the following Terraform configuration file:

# Configure the IBM provider
provider "ibm" {
  generation = 2
  region     = "us-south"
}

# Create a VPC
resource "ibm_is_vpc" "vpc" {
  name                      = "my-vpc"
  address_prefix_management = "auto"
}

# Create a subnet in the VPC
resource "ibm_is_subnet" "subnet" {
  name            = "my-subnet"
  vpc             = ibm_is_vpc.vpc.id
  ipv4_cidr_block = "10.240.0.0/24"
}

# Create a security group
resource "ibm_is_security_group" "security_group" {
  name        = "my-security-group"
  vpc         = ibm_is_vpc.vpc.id
  description = "Allow SSH inbound traffic"
}

# Add an SSH rule to the security group
resource "ibm_is_security_group_rule" "ssh_rule" {
  security_group = ibm_is_security_group.security_group.id
  direction      = "inbound"
  protocol       = "tcp"
  port_min       = 22
  port_max       = 22
  remote         = "0.0.0.0/0"
}

# Create a virtual server instance in the subnet
resource "ibm_is_instance" "instance" {
  name    = "my-instance"
  image   = "ibm-centos-7-10-minimal-amd64-2"
  profile = "bx2-2x8"
  primary_network_interface {
    subnet = ibm_is_subnet.subnet.id
  }
  vpc             = ibm_is_vpc.vpc.id
  security_groups = [ibm_is_security_group.security_group.id]
}

# Create a floating IP address
resource "ibm_is_floating_ip" "floating_ip" {
  name = "my-floating-ip"
}

# Associate the floating IP address with the virtual server instance
resource "ibm_is_floating_ip_attachment" "attachment" {
  floating_ip = ibm_is_floating_ip.floating_ip.address
  instance    = ibm_is_instance.instance.id
}

Before running the above code, replace the placeholders with your IBM Cloud API key and with the appropriate values for your IBM Cloud region and availability zone.
Once you have saved this configuration file, you can initialize Terraform by running terraform init, and then create the resources by running terraform apply. After a few minutes, Terraform will create the VPC, subnet, security group, virtual server, and floating IP address. You can then use the floating IP address to SSH into your virtual server instance over the public network.

Wow! What an amazing response, I thought!  It was quick, detailed, seemingly knowledgeable. Also…wrong.

There was so much fundamentally wrong with the code that the only way to get it to work was to rewrite much of it and also add a bunch of new code to make it work. I can’t even tell you how it came up with some of the statements: they are not defined that way for Terraform specs for IBM Cloud.

I even had the AI regenerate the code to see if it could get it right the second time. Instead the new code had 9 errors in it.

Fine. My manager provided me with a prompt of his own: see if it will work on AWS. (Good prompt, Ted!)

I did try it on AWS and Azure. With Azure the response was an incomplete script. (At least the IBM script was complete, though wrong.) With AWS the script was better. I could enter terraform plan and terraform thought it looked good. But once I entered terraform apply to build the resources, it failed.

I suspect the larger problem is a lack of sufficient overlapping data for the AI tools to train on. So it sort of gets the code right, but sort of right isn’t really good enough.

I see people on the Internet raving about how well AI is doing writing code. Some of the examples are interesting, but I think it has a way to go. I’ll stick to doing my day job. Without AI to help. 🙂

 

Will AI tools based on large language models (LLMs) become as smart or smarter than us?

With the success and growth of tools like ChatGPT, some are speculating that the current AI could lead us to a point where AI is as smart if not smarter than us. Sounds ominous.

When considering such ominous thoughts, it’s important to step back and remember that Large Language Models (LLMs) are tools based in whole or in part on machine learning technology. Despite their sophistication, they still suffer from the same limitations that other machine learning technologies suffer from, namely:

    • bias
    • explainability
    • overfitting
    • learning the wrong lessons
    • brittleness

There are more problems than those for specific tools like ChatGPT, as Gary Marcus outlines here:

  • the need for retraining to get up to date
  • lack of truthfulness
  • lack of reliability
  • it may be getting worse due to data contamination (Garbage in, garbage out)

It’s hard to know if current AI technology will overcome these limitations. It’s especially hard to know when orgs like OpenAI do this.

My belief is these tools will hit a peak soon and level off or start to decline. They won’t get as smart or smarter than us. Not in their current form. But that’s based on a general set of experiences I’ve acquired from being in IT for so long. I can’t say for certain.

Remain calm. That’s my best bit of advice I have so far. Don’t let the chattering class get you fearful. In the meanwhile, check out the links provided here. Education is the antidote to fear.

Are AI and ChatGPT the same thing?

Reading about all the amazing things done by the current AI might lead you to think that: AI = ChatGPT (or DALL-E, or whatever people like OpenAI are working on). It’s true, it is currently considered AI,  but there is more to AI than that.

As this piece explains, How ChatGPT Works: The Model Behind The Bot:

ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Model (LLMs).

Like ChatGPT, many of the current and successful AI tools are examples of machine learning. And while machine learning is powerful, it is just part of AI, as this diagram nicely shows:

To get an idea of just how varied and complex the field of artificial intelligence is, just take a glance at this outline of AI. As you can see, AI incorporates a wide range of topics and includes many different forms of technology. Machine learning is just part of it. So ChatGPT is AI, but AI is more than ChatGPT.

Something to keep in mind when fans and hypesters of the latest AI technology make it seem like there’s nothing more to the field of AI than that.

What is AI Winter all about and why do people who’ve worked in AI tend to talk about it?

It might surprise people, but work in AI has been going on for some time. In fact it started as early as the mid-1950s. From the 50s until the 70s, “computers were solving algebra word problems, proving theorems in geometry and learning to speak English”. They were nothing like OpenAI’s ChatGPT, but they were impressive in their own way. Just like now, people were thinking the sky’s the limit.

Then three things happened: the first AI winter from 1974 until 1980, the boom years from 1980-1987, and then the next AI winter from 1987-1993. I was swept up in the second AI winter, and like the first one, there was a combination of hitting a wall in terms of what the technology could do followed by a drying up of funding.

During the boom times it seemed like there would be no stopping AI and it would eventually be able to do everything humans can do and more. It feels that way now with the current AI boom. People like OpenAI and others are saying the sky’s the limit and nothing is impossible. But just like in the previous boom eras, I think the current AI boom will hit a wall with the technology (we are seeing some of it already). At that point we may see a reduction in funding from companies like Microsoft and Google and more (just like the pullback we are seeing from them on voice recognition technology like Alexa and Siri).

So yes, the current AI technology is exciting. And yes, it seems like there is no end to what it can do. But I think we will get another AI winter sooner rather than later, and during this time work will continue in the AI space but you’ll no longer be reading news about it daily. The AI effect will also occur, and the work being done by people like OpenAI will just get incorporated into the everyday tools we use, just as autocorrect and image recognition are now just things we take for granted.

P.S. If you are interested in the history of the second AI winter, this piece is good.

What is the AI effect and why should you care?

Since there is so much talk about AI now, I think it is good for people to be familiar with some key ideas concerning AI. One of these is the AI effect. The cool AI you are using now, be it ChatGPT or DALL-E or something else, will eventually get incorporated into some commonplace piece of IT and you won’t even think much of it. You certainly won’t be reading about it everywhere. If anything you and I will complain about it, much like we complain about autocorrect.

So what is the AI Effect? As Wikipedia explains:

The “AI effect” is that line of thinking, the tendency to redefine AI to mean: “AI is anything that has not been done yet.” This is the common public misperception, that as soon as AI successfully solves a problem, that solution method is no longer within the domain of AI. Geist credits John McCarthy giving this phenomenon its name, the “AI effect”.

McCorduck calls it an “odd paradox” that “practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the ‘failures’, the tough nuts that couldn’t yet be cracked.”[5]

It’s true. Many things over the years that were once thought of as AI are now considered simply software or hardware, if we even think of them at all. Whether it is winning at chess, recognizing your voice, or recognizing text in an image, these things are commonplace now, but were lofty goals for AI researchers once.

The AI effect is a key idea to keep in mind when people are hyping any new AI as the thing that will change everything. If the new AI becomes useful, we will likely stop thinking it is AI.

For more on the topic, see: AI effect – Wikipedia

No, prompt engineering is not going to become a hot job. Let a former knowledge engineer explain

With the rise of AI, LLMs, ChatGPT and more, a new skill has arisen. The skill involves knowing how to construct prompts for the AI software in such a way that you get an optimal result. This has led a number of people to start saying things like this: prompt engineer is the next big job. I am here to say this is wrong. Let me explain.

I was heavily into AI in the late 20th century, just before the last AI winter. One of the hot jobs at that time was going to be knowledge engineer (KE). A big part of AI then was the development of expert systems, and the job of the KE was to take the expertise of someone and translate it into rules that the expert system could use to make decisions. Among other things, part of my role was to be a KE.

So what happened? Well, first off, AI winter happened. People stopped developing expert systems and went and took on other roles.  Ironically, rules engines (essentially expert systems) did come back, but all the hype surrounding them was gone, and the role of KE was gone too. It wasn’t needed. A business analyst can just as easily determine what the rules are and then have a technical specialist store that in the rules engine.

Assuming tools like ChatGPT were to last, I would expect the creation of prompts for them to be taken on by business analysts and technical specialists. Business as usual, in other words. No need for a “prompt engineer”.

Also, you should not assume things like ChatGPT will last. How these tools work is highly volatile; they are not well structured things like programming languages or SQL queries. The prompts that worked on them last week may result in nothing a week later. Furthermore, there are so many problems with the new AI that I could easily see them falling into a new AI winter in the next few years.

So, no, I don’t think Prompt Engineering is a thing that will last. If you want to update your resume to say Prompt Engineer after you’ve hacked around with one of the current AI tools out there, knock yourself out. Just don’t get too far ahead of yourself and think there is going to be a career path there.

How to get more from your smart speakers


I am a fan of smart speakers, despite the privacy concerns around them. If you are ok with that and you have one or are planning to get one, read these two links to see how you can get more out of them:

  1. How to control Sonos with Google Assistant
  2. Alexa Skills That Are Actually Fun and Useful | WIRED

I use Google Assistant on my Sonos and they make a great device even better. And while I do have Google Home devices in other parts of the house, I tend to be around the Sonos most, so having it there to do more than just play music is a nice thing indeed.

On how I resolved my problems installing Big Sur on my MacBook Air

Mac keyboard
Recently I tried to upgrade my Mac from Catalina to Big Sur. I have done OS upgrades in the past without any problems. I assumed it would be the same with Big Sur. I was wrong.

I am not sure if the problem was with Big Sur or the state of my Mac. I do know my MacBook Air was down to less than 20 GB free.  When I tried to install Big Sur, my Mac first started complaining about that. However after I freed up more space (just above 20 GB) it proceeded with the install.

While the install proceeded, it did not complete. No matter what I did, I could not get the Mac to boot all the way up. Recovery Mode did not resolve the problem. Internet Recovery Mode would allow me to install macOS Mojave, but not Catalina or Big Sur.

Initially I tried installing Mojave, but after the install was complete, I got a circle with a line through it (not a good sign). I tried resetting the NVRAM/PRAM and that helped me get further, but even after I logged in, I could not get macOS to fully boot up (it just went back to the login screen).

Eventually I did the following:

  1. Bought a 256 GB flash drive. Mine was from Kingston. I bought a size that matched my drive. I could have gotten away with a smaller one, but I was tired and didn’t want to risk not having enough space to use it as a backup.
  2. Put the flash drive into the Mac (I had a dongle to connect regular USB to USB-C)
  3. Booted up the Mac by going into Internet Recovery Mode
  4. Went into Disk Utility and made sure my Macintosh HD, Macintosh HD – Data and KINGSTON drives were mounted. (I used the MOUNT button to mount them if they weren't.)
  5. Ran First Aid on all the disks.
  6. Left Disk Utility. Clicked on Utilities > Terminal
  7. Copied my most important files from Macintosh HD – Data to KINGSTON (both could be found under /Volumes; for example, /Volumes/KINGSTON). The files I wanted to back up were in /Volumes/Macintosh*DATA/Users/bernie/Documents (I think). There is a rough sketch of the commands I used after this list.
  8. Once I copied the files onto the USB drive (it took hours), I checked to make sure they were there. I then got rid of a lot more files from the Documents area on my hard drive. After careful deleting, I had about 50 GB free. At one point I was talking to AppleCare and the support person said: yeah, you need a lot more than 20 GB of free space. So I made a lot.
  9. Then I went back into Disk Utility and erased Macintosh HD.
  10. This is important: I DID NOT ERASE Macintosh HD – Data! (Note: before you erase any drive using Disk Utility, pursue other options, like contacting AppleCare.) I did not erase Macintosh HD – Data in order to save time later on recovering files. I was only going to erase it as a very last resort. It turns out I was fine not erasing it: the problems were all on the Macintosh HD volume, the volume I DID erase.
  11. Once I did that, I shut down and then came up in Internet Recovery Mode again. THIS TIME, I had the option of installing Big Sur (not Mojave). I installed Big Sur. It created a new user ID for me: it didn't recognize my old one.
  12. I was able to log in this time and get the typical desktop. So that was all good.
  13. Now here is the interesting part: my computer now had two Macintosh HD – Data volumes: an old one and a new one. What I did was shut down, go into Internet Recovery Mode again, and mount both volumes. I also mounted the KINGSTON USB drive. Then I moved files from the old Macintosh HD – Data to the new one. (You can use the mv command in Terminal. I did, plus I also used cp -R for recursive copying.)
  14. My Mac is now recovered. Kinda. I mean, there was all sorts of browser stuff that needed to be recovered, I had to reinstall all my favorite apps, etc. But it is a working MacBook.
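
For anyone stuck in the same place, here is roughly what steps 7 and 13 looked like in Terminal. Treat it as a sketch rather than a recipe: the volume names (Macintosh HD - Data, KINGSTON) and the user folder (bernie) are from my setup, and the name of the old Data volume after the reinstall is a placeholder. Check what actually shows up under /Volumes before copying anything.

    # See which volumes are mounted; names with spaces need quoting
    ls /Volumes

    # Step 7: recursively copy my documents onto the USB drive (this took hours)
    cp -R "/Volumes/Macintosh HD - Data/Users/bernie/Documents" /Volumes/KINGSTON/

    # Step 13: after the reinstall, move files from the old Data volume to the new one.
    # Both paths here are placeholders: use whatever name the old volume has under /Volumes,
    # and the home folder of the account the reinstall created.
    mv "/Volumes/Macintosh HD - Data - Old/Users/bernie/Documents" \
       "/Volumes/Macintosh HD - Data/Users/newaccount/Documents-recovered"

The nice part of doing this from Internet Recovery Mode is that nothing on the broken system volume has to boot for the copy to work; you are just moving files between mounted volumes.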

All in all, I learned a ton when it comes to recovering a Mac. If you are reading this because your Mac is in a similar situation, I wish you success.

While I was trying to do the repair, these links were helpful:

(Photo by Charles Deluvio on Unsplash)

If you are thinking of using chatbots in your work, read this


Chatbots are relatively straightforward to deploy these days. AI providers like IBM and others provide all the technology you need. But do you really need them? And if you already have a bunch of them deployed, are you doing it right? If these questions have you wondering, I recommend you read this: Does Your Company Really Need a Chatbot?

You may still want to proceed with chatbots: they make a lot of business sense for certain types of work. But you will have a better idea of when not to use them, too.

Twenty good IT links to make you smarter

Here are some good links on IT that I have been collecting over time and that are still worth reading. They cover AI, the IoT, containers, and more. Enjoy!

  1. How to build a supercomputer with Raspberry Pis: Fun!
  2. 6 things I’ve learned in my first 6 months using serverless: Good stuff for serverless fans
  3. Building a serverless website in AWS: More good serverless stuff
  4. The Strange Birth and Long Life of Unix: A really good history of Unix. Well written.
  5. Spring Boot Memory Performance: If you use Spring Boot, this is worth your while
  6. The end of windows: Anything that stratechery puts out is good, including this
  7. Dockerize a Spring Boot application: Speaking of Spring Boot, this is useful
  8. Building a Deep Neural Network to play FIFA 18: A fascinating example of using AI to play games
  9. ThinkPad 25th Anniversary Edition : A great commemoration of a fine computer
  10. GitHub Is Microsoft’s $7.5 Billion Undo Button: A good piece on the story behind this investment by Microsoft
  11. Circuito.io: Want to build circuits, but don’t know how? This killer site is for you.
  12. Effie robot claims to do all your ironing: If you like robots and hate ironing, this could be for you.
  13. How To Install and Use TensorFlow on Ubuntu 16.04: For AI fans
  14. Set up a firewall on Ubuntu: Another good tutorial from Digital Ocean
  15. Not even IBM is sure where its quantum computer experiments will lead: For IBM Quantum fans
  16. In an Era of ‘Smart’ Things, Sometimes Dumb Stuff Is Better: Why analog is sometimes better.
  17. A simple neural network with Python and Keras: A good way to dabble with NNs
  18. The Talk: A comic which wonderfully explains quantum computing
  19. Use case diagrams: For those who like UML
  20. Eating disorder and social media: Wired has a good piece on how people avoid controls

It’s Monday morning: are robots going to replace you at your job?

Possibly, but as this article argues, there are at least three areas that robots still suck at:

Creative endeavours: These include creative writing, entrepreneurship, and scientific discovery. These can be highly paid and rewarding jobs. There is no better time to be an entrepreneur with an insight than today, because you can use technology to leverage your invention.

Social interactions: Robots do not have the kinds of emotional intelligence that humans have. Motivated people who are sensitive to the needs of others make great managers, leaders, salespeople, negotiators, caretakers, nurses, and teachers. Consider, for example, the idea of a robot giving a half-time pep talk to a high school football team. That would not be inspiring. Recent research makes clear that social skills are increasingly in demand.

Physical dexterity and mobility: If you have ever seen a robot try to pick up a pencil you see how clumsy and slow they are, compared to a human child. Humans have millennia of experience hiking mountains, swimming lakes, and dancing—practice that gives them extraordinary agility and physical dexterity.

Read the entire article; there’s much more in it than that. But if your job has some element of those three qualities, chances are robots won’t be replacing you soon.

How technology can enhance work and not simply eliminate it

(Image: robot and human working together)

This piece from Quartz, What it’s like to be a modern engraver, the most automated job in the United States, reminded me once again that the best use of technology is to augment the people doing the work, not simply take away the work. It is a must-read for anyone who believes that the best way to use AI and other advanced tech is to eliminate jobs. My belief is that the best way to use AI and other advanced tech is to make jobs better for the employee, the employer, and the customer. The businesses that succeed will have that belief as well.

(Image from this piece on how humans and robots can work together.)

34 good links on AI, ML, and robots (some taking jobs, some not)

If you are looking to build AI tech, or just learn about it, then you will find these interesting:

  1. Artificial intelligence pioneer says we need to start over – Axios – if Hinton says it, it is worth taking note
  2. Robots Will Take Fast-Food Jobs, But Not Because of Minimum Wage Hikes | Inverse – true. Economists need to stop making such a strong link here.
  3. Artificial Intelligence 101: How to Get Started | HackerEarth Blog – a good 101 piece
  4. Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level – MIT Technology Review – the ability of tech to learn is accelerating.
  5. Now AI Machines Are Learning to Understand Stories – MIT Technology Review – and not just accelerating, but getting deeper.
  6. Robots are coming for your job. That might not be bad news – good alternative insight from Laurie Penny.
  7. Pocket: Physicists Unleash AI to Devise Unthinkable Experiments – not surprisingly, a smart use of AI
  8. AI’s dueling definitions – O’Reilly Media – this highlights one of the problems with AI: it is a suitcase word (or term) that people fill with whatever they want to fill it with
  9. A Neural Network Playground – a very nice tool to start working with AI
  10. Foxconn replaces ‘60,000 factory workers with robots’ – BBC News – there is no doubt in places like Foxconn, robots are taking jobs.
  11. 7 Steps to Mastering Machine Learning With Python – don’t be put off by this site’s design: there is good stuff here
  12. How Amazon Triggered a Robot Arms Race – Bloomberg – Amazon made a smart move with that acquisition and it is paying off
  13. When Police Use Robots to Kill People – Bloomberg – this is a real moral quandary, and I am certain the police aren’t the only ones who should be deciding on it. See also: A conversation on the ethics of Dallas police’s bomb robot – The Verge
  14. How to build and run your first deep learning network – O’Reilly Media – more good stuff on ML/DL/AI
  15. This expert thinks robots aren’t going to destroy many jobs. And that’s a problem. | The new new economy – another alternative take on robots and jobs
  16. Neural Evolution – Building a natural selection process with AI – more tutorials
  17. Uber Parking Lot Patrolled By Security Robot | Popular Science – not too long after this, one of these robots drowned in a pool in a mall. Technology: it’s not easy 🙂
  18. A Robot That Harms: When Machines Make Life Or Death Decisions : All Tech Considered : NPR – this is kinda dumb, but worth a quick read.
  19. Mathematics of Machine Learning | Mathematics | MIT OpenCourseWare – if you have the math skills, this looks promising
  20. Small Prolog | Managing organized complexity – I will always remain an AI/Prolog fan, so I am including this link.
  21. TensorKart: self-driving MarioKart with TensorFlow – a very cool application
  22. AI Software Learns to Make AI Software – MIT Technology Review – there is less here than it appears, but still worth reviewing
  23. How to Beat the Robots – The New York Times – meh. I think people need to learn to work with the technology, not try to defeat it. If you disagree, read this.
  24. People want to know: Why are there no good bots? – bot makers, take note.
  25. Noahpinion: Robuts takin’ jerbs
  26. globalinequality: Robotics or fascination with anthropomorphism – everyone is writing about robots and jobs, it seems.
  27. Valohai – more ML tools
  28. Seth’s Blog: 23 things artificially intelligent computers can do better/faster/cheaper than you can – like I said, everyone is writing about AI. Even Seth Godin.
  29. The Six Main Stories, As Identified by a Computer – The Atlantic – again, not a big deal, but interesting.
  30. A poet does TensorFlow – O’Reilly Media – artists will always experiment with new mediums
  31. How to train your own Object Detector with TensorFlow’s Object Detector API – more good tooling.
  32. Rise of the machines – the best – by far! – non-technical piece I have read about AI and robots.
  33. We Trained A Computer To Search For Hidden Spy Planes. This Is What It Found. – I was super impressed what Buzzfeed did here.
  34. The Best Machine Learning Resources – Machine Learning for Humans – Medium – tons of good resources here.

Some thoughts on the problems Facebook and Google (and even retailers) have with people being awful on their platforms

Google, Facebook, and Twitter are platforms. So are some retail sites. What does that mean? It means they provide the means for people to use their technology to create things for themselves. Most of the time, this is a good thing. People can communicate in ways they never could before such platforms. Likewise, people can sell to buyers they never could reach before.

Now these platforms are in a bind, as you can see in this piece and in other places: Google, Facebook, and Twitter Sell Hate Speech Targeted Ads. They are in a bind partly because of their own approach: they boasted of their ability to use AI to stop such things. They should have been much more humble. AI as it currently stands will only take you so far. Instead of relying on things like AI, they need to have better governance mechanisms in place. Governance is a cost for organizations, and often organizations don’t put proper governance in place until flaws like this start to occur.

That said, this particular piece has several weaknesses. First up, this comment: “that the companies are incapable of building their systems to reflect moral values”. It would be remarkable for global companies to build systems to reflect moral values when even within individual nations there are conflicts regarding such values. Likewise the statement: “It seems highly unlikely that these platforms knowingly allow offensive language to slip through the cracks”. Again, define offensive language at a global level. To make it harder still, try doing it across different languages and different cultures. The same thing occurs on retail sites when people put offensive images on T-shirts: for some retail systems, no one from the company that owns the platform takes the time to review every product that comes in.

And that gets to the problem. All these platforms could be mainly content agnostic, the way the telephone system is content agnostic. However, people are expecting them to insert themselves and not be content agnostic. Once that happens, they are going to be in an exceptional bind. We don’t live in a homogeneous world where everyone shares the same values. Even if they converted to non-profits and spent a lot more revenue on reviewing content, there would still be limits to what they could do.

To make things better, these platforms need to be humble and realistic about what they can do, and communicate that consistently and clearly to the people who use these systems. Otherwise, they are going to find themselves governed in ways they will not like. Additionally, they need to decide what their own values are, then communicate and defend them. They may lose users and customers, but the alternative of trying to be different things in different places will only make their own internal governance impossible.


AI is hard, China version

According to this, chatbots in China have been removed after being critical of the Chinese government. This to me is not unlike what happened to Microsoft's chatbot that became racist after being fed racist input from users. If you put AI out there and allow any form of input, then the equivalent of vandals can overtake your AI and feed it whatever they choose. I'm not certain that was the case in China, but I suspect it was.

AI researchers need to expect the worst-case uses if they allow their software to do unsupervised learning on the Internet. If they don't, it's likely that their projects will be a disaster and they will do damage to the AI community in general.

Jean-Luc Mélenchon, a candidate right out of a Philip K. Dick novel

(Image: Mélenchon hologram)
In France, politician Jean-Luc Mélenchon plans to be in seven places at once using something similar to a hologram. According to Le Parisien:

Strictly speaking, these are not holograms. Jean-Luc Mélenchon will be present in seven different places thanks to … an optical illusion discovered for the first time half a century ago by an Italian physicist

Virtual Mélenchon reminds me of the politician Yancy in Philip K. Dick’s novel, The Penultimate Truth. We may not be far off from virtual candidates that look like people but are actually, behind the scenes, AI or some combination of AI and people.

For more on the technology, see the article in Le Parisien. For more on Dick’s novel, see Wikipedia. Read up now: I think we can expect to see more of this technology in use soon.

It’s not because most developers are white that AI has a hard time with non-white faces. It’s this…

(Image: an example of a neural net topology)
This piece, Most engineers are white — and so are the faces they use to train software – Recode, implies that AI software doesn’t do a good job recognizing non-white faces because most engineers (i.e. software developers) are white. I’d argue that the AI does a poor job because of this: the developers aren’t very good.

Good software developers, in particular lead developers, take an active role in ensuring they have good test data. The success of their software when it goes live depends on it. Anyone whose training data (i.e. test data) in an AI project does not include a broad set of faces is doing a poor job. Period. Regardless of whether or not they are white.

If the AI is supposed to do something (i.e. recognize all faces) and it does not, then the AI sucks. Don’t blame it on anything but the technical abilities of the people who built it.