Tag Archives: IT

How to simply merge PDF files on a Mac for free with no additional software

If you want to merge PDF files on a Mac, you might be tempted to use a tool like www.ilovepdf.com. Worse still, you might try and do it from Adobe’s Acrobat site and end up signing up to pay $200 or more per year for the privilege!

The good news is if you are on a Mac, you don’t need to do any of that.

Instead, open your PDF files using Preview. Make sure your view shows Thumbnails of the pages in each document. Then drag the thumbnail pages of one document into another. Then save the document you added the thumbnails to and you are done.

For example, let’s say you have two PDF files: abc.pdf and xyz.pdf. You want all the pages in abc.pdf to be in xyz.pdf. You open them both using Preview, you drag the thumbnails of abc.pdf over to the thumbnail section of xyz.pdf. Then you save xyz.pdf. (You can save abc.pdf as an empty document or quit and have it revert back to how it was.)

If you want to leave abc.pdf and xyz.pdf untouched but merge them into a third document, first copy xyz.pdf and give it a name like abcxyz.pdf. Then open abc.pdf and abcxyz.pdf using Preview, drag the thumbnails of abc.pdf into abcxyz.pdf, save abcxyz.pdf, and quit without saving abc.pdf. Now you have three files: abc.pdf and xyz.pdf are unchanged, and abcxyz.pdf is the merged copy of the two.
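
If you prefer the Terminal, older versions of macOS also shipped a small Automator helper script that can merge PDFs from the command line with no extra software. Treat this as a sketch rather than a sure thing: the script relied on the old system Python and may be gone on recent macOS releases, so check that the path exists before relying on it:

# A sketch: merge abc.pdf and xyz.pdf into abcxyz.pdf using the old Automator helper
# (this path may no longer exist on newer macOS versions)
"/System/Library/Automator/Combine PDF Pages.action/Contents/Resources/join.py" -o abcxyz.pdf abc.pdf xyz.pdf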

Happy Birthday to Gmail, from this old Yahoo email user!

Happy birthday, Gmail! According to The Verge, you are 20 years old! The big two-oh! Sure, you had some growing pains at first. And then there was the whole period when you and your users were a bit snobbish about your Gmail accounts and looked down on people with Yahoo accounts. But that’s all water under the bridge. We’re all old now.

Google is notorious for killing off services, but it is inconceivable they’ll ever kill off you, Gmail. I expect you and your users will be around for a long, long time. Heck, even an old Yahoo email user like me uses Gmail from time to time. There are no guarantees, of course, but I expect to be revisiting this post in 2034, God willing, and writing about your 30th. Until then…

The way to make your Apple Watch more useful is to change your App View

If you want to make your Apple Watch more useful, you want to change your App View. Here’s how.

On your iPhone, find the Watch app icon and tap it. Look for App View and tap it. From here you can change the view to Grid View. (Grid View looks like the watch in the photo above.) Now tap Arrangement.

Once in Arrangement, hold your finger on an icon of something you use often. Drag your fingertip and the icon to the top left. Keep doing that so all the Watch apps you use the most are on the top rows. Once you have it the way you like it, exit the Watch app.

If you are stuck as to what to put on top, my top apps are:

  1. Stopwatch
  2. Workout
  3. IFTTT
  4. Weather
  5. Text
  6. Phone
  7. Calendar
  8. Heart rate monitor
  9. Activity
  10. Maps

I have a few dozen more Watch apps, but those are the ones I use often.

If you want to see what you can have on your Watch, go back to the Watch app on your phone and scroll down to see what apps are installed on your watch and what ones you can install.

Once you rearrange the Watch apps, press the Digital Crown on your Watch. You will now see the Watch apps organized the way you want. I bet you start pressing the crown more often to access and use the apps you have installed.

The Apple Watch is great. Squeeze more greatness from it by taking advantage of the Watch apps you have.

It’s winter. Time to curl up with a good…list of tech links :) (What I find interesting in tech January 2024)

Wow. I have not posted any tech links since last September. Needless to say, I’ve been doing a lot of reading on the usual topics, from architecture and cloud to hardware and software. I’ve included many of them in the lists below. There’s a special shout out to COBOL of all things. Is there something on DOOM! in here? Of course there is. Let’s take a look….

Architecture: A mixed bag here, with some focus on enterprise architecture.

Cloud: a number of links on cloud object storage, plus more….

COBOL: COBOL is hot these days. Trust me.

Hardware: mostly but not exclusively on the Raspberry Pi….

Mainframe/middleware: still doing mainframe stuff, but I added on some middleware links….

Linux/Windows: mostly Linux but some of the other OS….

Software: another mixed bag of links…

Misc.:  For all the things that don’t fit anywhere else….also the most fun links….

Thanks for reading this!

Who let the (robot) dogs out? And other animated machines on the loose you should know about

A year ago I wrote: Sorry robots: no one is afraid of YOU any more. Now everyone is freaking out about AI instead. A year later and it’s still true. Despite that, robots are still advancing and moving into our lives, albeit slowly.

Drones are a form of robot in my opinion. The New York Times shows how they are shaping warfare, here. More on that, here.

Most of us know about the dog robots of Boston Dynamics. Looks like others are making them too. Still not anywhere as good as a real dog, but interesting nonetheless.

What do you get when you combine warfare and robot dogs? These here dogs being used by the US Marines.

Somewhat related, the NYPD has its own robot and you can get the details here.

Not all robots are hardcore. Take the robot Turing for example (shown below). Or the Ecovacs, which can mop your floors and more.

What does it all mean? Perhaps this piece on the impact of robots in our lives can shed some light.

Robots are coming: it’s just a matter of time before there are many of them everywhere.

Advent of Code: a great way for coders to celebrate this season

You’ve likely heard of Advent, but have you heard of Advent of Code? Well let the maker of the site, Advent of Code 2023, explain what it is:

Hi! I’m Eric Wastl. I make Advent of Code. I hope you like it! I also made Vanilla JS, PHP Sadness, and lots of other things. You can find me on Twitter, Mastodon, and GitHub. Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as interview prep, company training, university coursework, practice problems, a speed contest, or to challenge each other. You don’t need a computer science background to participate – just a little programming knowledge and some problem solving skills will get you pretty far. Nor do you need a fancy computer; every problem has a solution that completes in at most 15 seconds on ten-year-old hardware.

It seems like just the thing for coders of all kinds, from amateurs to professional devs. Check it out. And if you want to get involved from day 1 in 2024, make a note on your calendar (assuming Eric still does it).

The benefits you get running Ubuntu/Linux on an old computer and why you should get one

I am a big fan of usable old computers. After you read this, you will be too.

Currently I have an old Lenovo ThinkCentre M series (an M57p) made around 2007 that still works fine and runs Ubuntu 20.04 (the latest LTS at the time of writing is 22.04, so this is still quite current). Not only that, but it runs well. It never crashes, and I can install new software on it and it runs without a problem.

Here are some of the benefits of having such a computer:

  • it can act as my backup computer if I have a problem with my main work one. I can read my email at Yahoo and Google. If I need to, I can use things like Google sheets to be productive. I can download software to do word processing on it too. I can attend online meetings. Most of my day to day work functions can be done if need be.
  • it can act as a test computer. I was writing a document on how to use a feature in IBM cloud, but I needed to test it out with a computer other than my work machine (which has special privileges). This old machine was perfect for that.
  • it can also act as a hobby computer. I like to do things with arduinos and Raspberry Pi computers and the Lenovo computer is great for that.
  • it can help me keep up my Unix skills. While I can get some of that by using my Mac, if I had a Windows machine for work I would especially want to have this machine for staying skilled up.
  • it can do batch processing for me. I wrote a Python program to run for days to scrape information from the Internet and I could just have this machine do that while I worked away. I didn’t need to do any fancy cloud programming to do this: I just ran the Python program and checked on it from time to time (see the sketch just after this list).
  • It has lots of old ports, including VGA and serial ports. Will I ever need them? Maybe! It also has a CD-ROM drive in case I need that.
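
Here is roughly how I run that kind of long batch job so it keeps going after I log out and I can check on it later. This is just a sketch: scraper.py is a placeholder for whatever script you want to run.

# Run a long job in the background, unattended, and log its output
# (scraper.py stands in for your own script)
nohup python3 scraper.py > scraper.log 2>&1 &
# Check on it from time to time
tail -f scraper.log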

As for the version of Linux, I tend to stay with Ubuntu. There’s lots of great Linux distros out there, but I like this one. Plus most times when I come across online Linux documentation, I will find it has explicit references to Ubuntu.

Now you can buy an old machine like this online from Amazon or eBay, but if I can do this on a 15-year-old computer, you likely can ask around and get one for free. A free computer that can do all this? The only thing that should be stopping you is how to get started. For that, you will need these Ubuntu install instructions and a USB drive.
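
If you make the installer USB yourself from another Linux box, one common way is dd. This is only a sketch: the ISO filename below is an example, and you must replace /dev/sdX with your actual USB device (check with lsblk first), because dd will overwrite whatever you point it at.

# Write the Ubuntu installer image to a USB stick (example filename; verify the device with lsblk first!)
sudo dd if=ubuntu-22.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress
sync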

Good luck!

P.S. The software neofetch gave the output above. To install it, read this: How do I check my PC specs on Ubuntu 20.04.3 LTS?
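
For the impatient, installing and running it on Ubuntu is typically just this (assuming the standard Ubuntu repositories):

# Install and run neofetch on Ubuntu
sudo apt update
sudo apt install neofetch
neofetch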

If you can’t write to your USB drive on your Mac and you want to fix that, read this

If you are reading this, chances are you cannot write to your USB drive on your Mac.

To force a USB drive to be both readable and writable, I did the following (note: I had a Kingston drive, so my Mac identified it as KINGSTON and I went with that; if your USB drive is not from Kingston, you may see something different):

  1. In Finder, go under Applications > Utilities and start Disk Utility
  2. Click on your USB disk on the left (e.g., KINGSTON) and then click on Erase (top right)
  3. You can change the name if you want (I left it at KINGSTON) and make Format: ExFAT
  4. Once you do that, click the Erase button to format the disk
  5. Click on Unmount (top right) to unmount the disk
  6. Open a terminal window (open Finder, then go to Applications > Utilities > Terminal). Enter the following diskutil list command in the Terminal window and note the results:
    diskutil list
    /dev/disk2 (external, physical):
    #: TYPE NAME SIZE IDENTIFIER
    0: FDisk_partition_scheme *62.0 GB disk2
    1: Windows_NTFS KINGSTON 62.0 GB disk2s1

    Note that in my case the KINGSTON drive is associated with disk2s1. (You see that on the line “1: Windows_NTFS KINGSTON 62.0 GB disk2s1”. It may be different for you. Regardless, you want the identifier that comes after the size, disk2s1 in my case.)
  7. While in the terminal window, make a corresponding directory in the /Volumes area of your machine that has the name of your drive (in my case, KINGSTON)
    sudo mkdir /Volumes/KINGSTON
  8. Also in the terminal window, mount your disk as writable and attach it to the mount point:
    sudo mount -w -t ExFAT /dev/disk2s1 /Volumes/KINGSTON

You should now be able to write to your drive as well as read it.
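
To quickly sanity-check it, you can confirm the volume is mounted and try writing a test file. A small sketch; swap KINGSTON for whatever your drive is called:

# Confirm the volume shows up as mounted
mount | grep KINGSTON
# Write (and then remove) a test file
touch /Volumes/KINGSTON/test.txt && echo "write OK"
rm /Volumes/KINGSTON/test.txt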

How to cast your Chrome tab to your TV in October 2023

If you are a fan of using Chrome to cast one of your tabs to a TV, you may be surprised to find that the Cast option is missing. Worse, if you look in places like Chromecast Help on how to Cast a Chrome tab on your TV, you may not find it all that helpful.

Fear not. The Cast option is still there, just hidden. As before, go to the top right of your browser where the three dots are and click on them. Then click on Save and Share… and look for Cast…

Now you can Cast as you did before.

How to work with Java on your Mac, including having multiple versions of Java on your Mac

The easiest way to install Java on your Mac is by using Homebrew. Honestly, if you don’t have Homebrew on your Mac, I highly recommend you install it. Plus it’s easy to do. All you need to do is enter the following:


$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Now that you have homebrew installed, you can install Java by entering:
$ brew install java

That should install the latest version of it. If you want to install an older version, you can do something like this:
$ brew install openjdk@11

If you’ve done this a few times, you may have a few different versions of Java installed like me, and if you enter the following command, the output may look something like this:

% ls /usr/local/opt | grep openjdk
openjdk
openjdk@11
openjdk@18
openjdk@19
openjdk@20
openjdk@21
%

As you can see, I have a few different versions installed. However, if I do this:

% java --version
openjdk 11.0.20.1 2023-08-24
OpenJDK Runtime Environment Homebrew (build 11.0.20.1+0)
OpenJDK 64-Bit Server VM Homebrew (build 11.0.20.1+0, mixed mode)
%

It shows the OS thinks I have JDK version 11 running.

Why is that? Well, it turns out if I enter this:

% ls /Library/Java/JavaVirtualMachines/
jdk1.8.0_261.jdk openjdk-11.jdk
%

I can see I have two JDKs installed there. macOS will go with the newest JDK in that directory when you ask what version is installed, in this case openjdk-11.

If I want the OS to use a different version like openjdk 21, I can create this symbolic link (all one line):

sudo ln -sfn /usr/local/opt/openjdk@21/libexec/openjdk.jdk /Library/Java/JavaVirtualMachines/openjdk-21.jdk

Then when I check on things, I see the following:


% java --version
openjdk 21 2023-09-19
OpenJDK Runtime Environment Homebrew (build 21)
OpenJDK 64-Bit Server VM Homebrew (build 21, mixed mode, sharing)
% ls /Library/Java/JavaVirtualMachines/
jdk1.8.0_261.jdk openjdk-11.jdk openjdk-21.jdk
%

Now the system thinks openJDK 21 is running.

If I want to reverse this and go back to openjdk 11, I can use this unlink command and see this:

% sudo unlink /Library/Java/JavaVirtualMachines/openjdk-21.jdk
% java --version
openjdk 11.0.20.1 2023-08-24
OpenJDK Runtime Environment Homebrew (build 11.0.20.1+0)
OpenJDK 64-Bit Server VM Homebrew (build 11.0.20.1+0, mixed mode)
% ls /library/Java/JavaVirtualMachines
jdk1.8.0_261.jdk openjdk-11.jdk
berniemichalik@Bernies-MacBook-Air-4 ~ %

Normally I would recommend going with the latest and greatest version of Java on your Mac. However, you may have a situation where you have some Java code that only runs on older versions of Java. This is one way to deal with that.
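
Another way to switch versions, without editing symlinks every time, is Apple’s java_home helper. A couple of caveats, since this is just a sketch: it only sees JDKs that appear under /Library/Java/JavaVirtualMachines (so the symlink step above still matters), and the export only affects the shell you run it in:

# List the JDKs macOS knows about
/usr/libexec/java_home -V
# Point just the current shell at a specific major version (e.g., 11)
export JAVA_HOME=$(/usr/libexec/java_home -v 11)
"$JAVA_HOME/bin/java" --version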

For more on this, here are some good links I found:

AI scales up and out. Here are some pieces that show that.


While there are still prophets and pundits arguing doom and gloom regarding AI, most people and organizations have moved past them and have been adopting the technology widely. Sometimes that has been good, sometimes not. To get a sample of how it’s going, here are a few dozen pieces on AI worth a look:

  1. The WSJ argues that you soon won’t be able to avoid AI at work or at home. It’s true, but so what?
  2. AI is being used to deal with the threat of wildfires. Good. Also good: AI allows farmers to monitor crops in real time. More good AI:  AI used to find antibodies. By the way, here’s a piece on how to turn chatgpt into a chemistry assistant.
  3. A worthwhile piece on AI lawsuits that are coming due to intellectual property rights.
  4. The New York Times has stopped OpenAI from crawling its site. More on that, here.
  5. Here’s the associated press AI guidelines for journalists.
  6. Students and writers, bookmark this in case you need it: what to do when you’re accused of writing with AI.
  7. Also, what can you do when AI lies about you?
  8. This is dumb: AI builds software under 7 minutes for less than a dollar.
  9. It’s not surprising hackers found lots of security holes in AI.
  10. Take this with a big grain of salt…one of the leaders from Palantir wonders if AI should be compared to atomic energy.
  11. This is bad: how facial recognition tech resulted in a false arrest.
  12. This is not good: a story on using AI to generate books and other junk here.
  13. This is different:  Microsoft Teams is pairing up with Maybelline to provide AI generated beauty filters / virtual makeup.
  14. It’s not news but it is interesting that NVIDIA is a hot company now due to AI. See more about that, here.
  15. Maybe chatgpt and other AI will just be a tool to do inefficient things efficiently.
  16. A thoughtful piece on powering generative AI and large language models with hybrid cloud with a surprise ending, from one of the senior leaders in my group at IBM.

(Photo: link to image in the NVIDIA story. By Philip Cheung for The New York Times)

Forty things that have changed in IT and IBM in the last forty years (from 1983 to 2023)

If you were to ask me, on this day, what has changed with regards to computers and IT and IBM in the last 40 years, I would say it’s this:

  1. Access: Very few people had access to computers 40 years ago. Those folks used mainframes, minicomputers and the occasional personal computer from Commodore or Radio Shack or this new start up called Apple. Now everyone has access to a computer they carry around in their pocket. (We call it a smart phone, but it’s really a powerful computer that makes calls.)
  2. Ubiquity: Back in the early 80s the vision of everyone having a computer or terminal on their desk was just that: a vision. The few that did had big monster 3277 or 3278 metal terminals or, if you were lucky, a 3279 color terminal. People worked on paper.
  3. email: One of the drivers of having a terminal on your desktop was to access email. Back then IBM’s email system was called PROFS (Professional Office System) and it meant you no longer had to send out three-part memos (yes, people did that, with carbon paper between the sheets so you could give the cc (carbon copy) to someone else). You sent electronic mail instead. Everyone thought it was great. Who knew?
  4. Viruses: Viruses were new. My first was called the CHRISTMA exec. In those days every Christmas people would send around runnable scripts (i.e., execs) that were the equivalent of digital Christmas cards. This particular digital Christmas card came from outside IBM. It read your address book and sent itself to all the people you knew. Sounds like fun. In fact it overwhelmed the IBM networks, and IBMers around the world had to shut most things down to try to purge the network of this thing. It took days. Not fun.
  5. Networks: Companies sometimes had their own networks: IBM had one called VNET. VNET connected all of IBM’s computers worldwide, and it had connection points with outside networks like BITNET too, which is where the CHRISTMA exec came from. There was no Internet per se.
  6. Network size: IBM’s VNET had over 1,000 VM computers all connected to each other. All of them had an id called OP which system operators sometimes used to control the VM mainframe. Once, on second shift, another system operator and I wrote a program to message all 1,000+ OPs in the world the equivalent of “hi, how’s it going”. To our surprise, many of them wrote back! We manually started messaging them back and even became friends with some of them over time. It was like Twitter before Twitter, or gchat before gchat, etc.
  7. Documentation: Computer documentation was hard to come by, and if you had any, you might hide it in your desk so no one else could take it. The operators had a special rack of documentation next to where they worked. I was thrilled in the 90s when you could walk into a bookstore and actually buy books that explained how things worked rather than having to get permission from your manager to order a Redbook from IBM publishing in the US.
  8. Education: In the 80s you could get a job in IBM operations with a high school diploma. Universities in Canada were just ramping up degree programs in computer science. By the start of the 90s most new hires I knew had at least a university degree and more likely a comp sci or engineering degree.
  9. Software: We take Microsoft’s dominance in software for granted, but back then Lotus 1-2-3 was the spreadsheet program we used, and others used WordStar or WordPerfect for word processing. Microsoft worked very hard to dominate in that space, but in 1984 when the ads for Macintosh came out, Gates was just one of three people in the ad touting that their software ran on a Mac.
  10. Minicomputers: In between the time of the mainframe and PC, there was the rise of minicomputers. DEC in particular had minicomputers like the VAX systems that gave IBM a run for the money. IBM countered with machines like the 4300 series and the AS/400. All that would be pushed to the side by….
  11. IBM’s PC: The first truly personal computer that had mass adoption was the IBM PC. A rather massive metal box with a small TV on top, it could run the killer apps like Lotus 123. Just as importantly, it could run a terminal emulator, which meant you could get rid of old terminals like the 3270 series and just give everyone a PC instead. Soon everyone I worked with had a PC on their desk.
  12. Modems: modems in the 1980s were as big as a suitcase. If a client wanted one, an IBM specialist would go to their location and install it for them. In the 90s people got personal modems from companies that sent data at 9600 bps or 14,400 bps or even 56 kbps! Today people have devices the size of a book sitting at home, providing them with speeds unthinkable back then.
  13. Answering machines: The other thing people used to have on their desks besides a PC was an answering machine. Before that every office had a secretary. If you weren’t at your desk the call would go to them and they would take the message. If you had been away for a time you would stop by their desk and get any slips of paper with the name and numbers of people to call back. Answering machines did away with all that.
  14. Paper planners: Once you did call someone back, you would get out your day runner / planner and try to arrange a meeting time with them. Once a year you would buy new paper for it so you could keep track of things for the new year. In its heyday your planner was a key bit of information technology: it was just in paper form.
  15. Ashtrays and offices: it may seem hard to believe but back then smoking in the office was common, and many people smoked at their desk. It was a long and hard process to eliminate this. First there were smokeless ashtrays, then smoking areas, then finally smokers had to smoke outside, and then in areas well away from the main door. Likewise people worked in cubicles. It was miles away from working at places like Google or WeWork, never mind working from home.
  16. The rise of Microsoft and the decline of IBM: The success of the IBM PC led to the success of Microsoft. The adoption of MS-DOS as the operating system for the IBM PC was a stroke of luck for Microsoft and Bill Gates. It could have easily been CP/M or some other OS. With the rise of Microsoft and the personal computer, IBM started to lose its dominance. IBM’s proprietary technologies like OS/2 and Token Ring were no match for DOS / Windows or Ethernet. IBM did better than some computer companies like Wang, but its days of being number one were over.
  17. The role of the PC: for a time in the 80s you could be a company and not have computers. Paper and phones were all you needed. We used to say that companies that used computers would beat any competitors not using computers. And that became the case by the end of the decade.
  18. The rise and fall of AI: now AI is hot, but in the late 80s and early 90s it was also hot. Back then companies were building AI using languages like LISP and Prolog, or using specialized software like IBM’s Expert Systems Environment to build smart tech. It all seemed so promising until it wasn’t.
  19. LANs: all these PCs sitting on people’s desks needed a way to talk to each other. Companies like Microsoft released technology like Windows for Workgroups to interconnect PCs. Offices had servers and server rooms with shared disks where people could store files.
  20. The rise of Ethernet: there were several ways to set up local networks back then. IBM had its token ring technology. So did others. It didn’t matter. Eventually Ethernet became dominant and everywhere.
  21. Email for everyone: just as everyone got PCs and network access in the 90s, eventually everyone got email. Companies ditched physical mail and faxes for the speed and ease of electronic mail, be it from AOL or CompuServe or someone else.
  22. Network computers: one thing that made personal computers more cost effective in the 90s for people was a specialized computer: the network computer. It was a small unit that was not unlike a terminal, and it was much cheaper for business than a PC. To compete, the prices of PCs soon dropped dramatically and the demand for the network computer died off.
  23. EDI: another thing that was big for a time in the 90s was EDI. IBM had a special network that ran special software that allowed companies to share information with each other using EDI. At one point IBM charged companies $10/hour to use it. Then the Internet rose up and ISPs charged companies $30/month and suddenly EDI could not compete with a PC using a dialup modem and FTP software provided by their ISP.
  24. Electronic banking: with personal computers and modems becoming common in homes, banks wanted to offer electronic banking to them. Some banks like the Bank of Montreal even established a specialized bank, mbanx, that was only online. Part of my job in the 90s was to help banks create the software they would give out to allow their customers to do banking via a private network. While most banks kept their branches, most day to day banking now happened online.
  25. The Internet and the web: if the PC changed everything in the 80s, the Internet changed everything in the 90s. Suddenly ISPs were springing up everywhere. Even IBM was an ISP for a time. People were scrambling to get software to allow them to connect their PC and US Robotics 14.4 kbps modems to access FTP sites and Usenet and more. No sooner did this happen than the World Wide Web and browsers burst on the scene. For many people, the Web was the Internet. So long Gopher; goodbye WAIS.
  26. Google: finding things on the Internet was no easy thing. It only got worse as web sites shot up everywhere. Google changed the Web and made it usable. They changed email too. Sites like Yahoo! wanted to make you pay for more storage; Google gave people more storage than they could ever need.
  27. From desktops to laptops: with home networks in place, people wanted to be able to bring home their computers to work remotely. I used to have a luggable computer that weighed 40 pounds that I would bring back and forth daily. As more people did this, computer companies got smart and made the portable computers smaller and better. Apple was especially good at this, but so was IBM with their Thinkpad models. As time went by, the computer you used at work became a laptop you use to work everywhere.
  28. The Palm Pilot: the Palm Pilot succeeded where Apple and others had failed. They had come up with a device you could use to track your calendar, take notes, and more. All you had to do was put it in a cradle and press the sync button and everything would be loaded onto your PC. Bye bye paper planners. Hello Personal Digital Assistant.
  29. IBM Services: At one time IBM gave away its services. By the 90s it had a full line of business devoted to providing its people to clients to help them with their business. People like me moved from helping run IBM’s data centers to going around to our clients helping them run their data centers and more.
  30. Y2K: if Y2K was a non-event, it was only because of the countless hours put in by techies to make it one. Even me. I was shocked to discover that EDI software I wrote for a Quebec bank in 1992 was still running on PC/DOS computers in 1999. It was quickly rewritten before the deadline to keep running on January 1, 2000. Just like countless software worldwide.
  31. E-business: if PCs changed business in a big way in the 80s, e-business changed them in a big way in the 90s. Even with the dot com era crash, there was no going back. With e-banking your retail branch was open 24/7; with e-business, the same was true of your favorite local (or non-local) business.
  32. The resurrection of Apple and Steve Jobs: two things transformed IT and made it cool: one was the Web and two was the return of Jobs to Apple. Boring beige boxes were out: cool colored Macs made for the Internet were in. People were designing beautiful web sites with red and yellow and blue iMacs. And the success of those iMacs led the way to the success of the iPod, and the success of the iPod led to so much more.
  33. Blackberry and dominance of smartphones: if the Palm Pilot got mobile computing started, the Blackberry accelerated that. Email, texting, and more meant that just like online banking and e-business, you were reachable 24/7. And not just reachable the way you were with a pager/beeper. Now you could reply instantly. All the computer you needed fit in your hand.
  34. The decline of analog: with the rise of all this computing came the decline of anything analog. I used to buy a newspaper every day I would commute to work. People would bring magazines or books to read. If you wanted to watch a film or listen to a song, it depended on something physical. No longer.
  35. The rise of Unix/Linux: you use Unix/Linux every day, you just don’t know it. The web servers you use, the Android device you make calls on, the Mac you write emails on: they all depend on Unix/Linux. Once something only highly technical people would use on devices like Sun computers or IBM pSeries machines is now on every device and everywhere.
  36. Open Source: in the 90s if you wanted software to run a web server, you might pay Netscape $10,000 for the software licence you needed. Quickly most people switched to the free and open source Apache web server software to do the same job. This happened over and over in the software world. Want to make a document or a spreadsheet? You could get a free version of that somewhere. For any type of software, there is an open source version of it somewhere.
  37. Outsourcing/offshore: if people could work from anywhere now, then the work that was done locally could now be done anywhere. And it increasingly was. No one locally does the job I did when I first started in the computer industry: it’s all done offshore.
  38. The Cloud: if work could be done anywhere by anyone, then the computers needed to do it could be the same. Why run your own data center when Amazon or Microsoft or IBM or Google could do it better than most? Why buy a computer when you only need it for an hour or a day? Why indeed?
  39. The return of AI: finally, AI has returned after a long time being dormant, and this time it’s not going to be something used by a few. Now everyone can use it and be more productive, smarter. Like the PC or the Internet before it, AI could be the next big thing.
  40. Web 2.0/Social Media: One thing to insert in between the Internet and AI in terms of groundbreaking changes in IT is Social Media. Both public social media like this and private social media like Slack and Microsoft Teams. Without social media I couldn’t share this with you.

In 40 years the devices have gotten smaller, the networks have gotten bigger, and the software has gotten smarter. Plus it’s all so much cheaper. If I had to sum up the changes of the last 40 years, that would be it. And we are just getting started.

You should set up two-factor authentication (2FA) on Instagram. And you should use an authenticator app

You might think: no one is going to hack my Instagram account. And you might be right. But here’s the thing: if someone does hack your account, you have next to no chance of getting someone at Instagram to restore it. Rather than make it easy for hackers to take over your account, spam your friends and delete years of photos, you should use 2FA. To do so, read this article: How to Turn on Two-Factor Authentication on Instagram.

While you can use SMS, I recommend using an authenticator app. That article explains how you can do it either way. Authenticator apps are more secure than SMS and are the way to go these days. For more on that, see PCMag.

IBM Cloud tip: use Multifactor authentication (MFA) also called 2-Factor Authentication (2FA) with your account

If you are using IBM Cloud technology, I recommend you consider setting up MFA for your login account. MFA makes your access more secure, and it’s easy to do. To see how easy it is, go here: IBMid – Verifying your identity and configuring MFA. It’s a well laid out description of how to do it.

You can use either a verification app or email to get a verification code. I recommend an app. While email works, it can take several minutes to get the code, while with an app you get a code instantly. As for apps, I use IBM’s Verify app, but you can use Google’s and likely Microsoft’s. They all work fine. Just go to your favorite app store and download one. (Make sure it comes from IBM or Google or Microsoft, not from some developer with a lookalike app.)

If you use two/multifactor authentication, make sure you have a backup

Multi-Factor authentication is great. There is only one downside: losing your phone. The way to deal with that is to have a backup. To set that up, either read this if you use Microsoft’s authenticator: Back up and recover account credentials in the Authenticator app from Microsoft Support, or this if you use something else for authentication: Make Sure You Have a Backup for Two-Factor Authentication.

My IT Beach Reads this summer :) (What I find interesting in tech September 2023)


Yes, this is the stuff I read for fun. Not on the beach, but at least in a comfy chair out in the hot sunny weather. 🙂

Architecture links: mostly my IT architecture reading was AWS related this summer, but not all of it.

Cloud links: a mixed bag of things, all good.

Ops links: I’ve been consulting with clients on operations work, among other things, so here are pieces on AIOps, DevOps and more that I thought were good:

Software links: mostly dashboard related, since I was working on…dashboards.

Finally: here’s a mixed bag of things, quantum and otherwise, that I enjoyed.

Will there be Doom? (What I find interesting in hardware/software in tech Jul 2023)

While my last few posts on IT have been work related, most of these are on hardware and software and tend to be more hobby and fun related.

Hardware links:

Software links:

Hope something there was useful! As always, thanks for reading!

P.S. Before I forget… here’s a piece on how a hacker brought Doom to a payment terminal. Love it!

Silicon Valley is full of not serious people and it’s time to treat them accordingly

I really like this piece by Dave Karpf on how not enough people are making fun of Balaji Srinivasan right now. While he goes on to skewer Srinivasan for a stupid bet/stunt he did recently, he touches on a broader topic:

2023 is shaping up to be a big year for recognizing that the titans of Silicon Valley actually have very little clue how the financial system works. That’s essentially what capsized Silicon Valley Bank: the venture capitalist crowd was long on self-confidence and short on basic-understanding-of-how-things-work.

At some point with characters like Balaji, you have to ask yourself whether he’s putting on a show or whether he really is a fool. There are a lot of guys at the heights of Silicon Valley who put on a similar performance. (*cough* David Sacks *coughcough* Jason Calacanis.) They have money, and they speak with such confidence. For years, they’ve been taken seriously. This ought to be the year when that presumption of omnicompetence withers away.

I think that quote  of how 2023 is going to be “a big year for recognizing that the titans of Silicon Valley actually have very little clue how the financial system works” really can apply to anything, not just the financial system. As Karpf notes, all these leaders in Silicon Valley “have money, and they speak with such confidence” and people take them seriously.

So when Marc Andreessen bloviates on how AI will save the world and how it’s the best technology EVAH, no one says he’s full of crap. They don’t look at how he went long on crypto when others were getting out, for example, and say “yeah maybe he’s not the best guy to listen to on this stuff”.

And that’s too bad. I think we should mock these people more often. We should mock the vapidity of Bill Gates’s recent commencement speech. We should cheer when companies like Hindenburg Research go after Jack Dorsey and Block for what a crappy company it is. We should recognize how fraudulent people like Tony Hsieh or Elizabeth Holmes are. We should recognize that these people do not deserve our attention. And if they get it, they should be scrutinized and at the very least, mocked. I mean, Elon Musk and Mark Zuckerberg are talking about fighting in a cage match.

These are not serious people. We should stop acting like they are.

P.S. The fraudster Elizabeth Holmes finally went to prison after trying in vain to convince people she should not. Did Silicon Valley learn anything from this? Not much, if this story on how the company Grail recently told 400 patients incorrectly that they may have cancer is any indication.

As for Tony Hsieh, you can read here how he used companies like ResultSource to make his book Delivering Happiness into a “best seller” (not to mention giving it away). Just another form of fraud. Here’s a good takedown of Tony Hsieh and the emptiness of the tech mogul.

Finally the New York Times has a rundown of the recent high tech phonies and the trouble they are in.

Reflecting on the Apple Watch while reading how the Apple VisionPro might flop

How is the Apple Watch doing, you might wonder? Well according to this piece, pretty pretty pretty good. Check out these stats:

Pretty much on every measure it is a big success, especially on the annual sales side.

Looking at those numbers, you might find it hard to believe that when the Apple Watch first came out, it was…a dud. As the same piece shows:

(The) First Apple Watch, announced on September 9th, 2014, and released on April 24th, 2015, was initially a flop, with an 85.7% drop in sales from April 2015 to July 2015. The reason was that the Apple Watch Series 0 simply wasn’t good enough. It was neither fashionable nor performed well as a fitness watch. Apple, later on, shifted to focus on fitness features instead of simply making their watch look good. By the time Apple released Watch Series 3, people were already hooked.

Yep. I was hopeful for the Watch back then, but many people were dismissive. It was too complicated, too big, too expensive, etc.

I was reminded of all this as I was reading some “nervous nellie” reaction from Yanko Design and the New York Times about the Vision Pro. They hedge their bets (and they should), but the focus is on how it could fail.

And it could fail! Or more likely, it could be a dud. It could be like the HomePod or Apple TV. Remember tvOS? I thought people would jump on that and start developing apps for it. Well, other than Apple, I don’t see too much happening with that device. Both those devices are…fine, but not game changers.

That said, I think the Apple Vision devices will be game changers. I suspect Apple will play the long game, just like they did with the Apple Watch. Watch this blog as we track its progress. 🙂

P.S. More on the Apple Watch written by me, here. More on the history of the Apple Watch from others here and here.

On spatial computing and VisionOS

While people talked a lot about the hardware of Apple’s new Vision Pro device launched last week, I’ve thought a lot about Apple’s emphasis on spatial computing. What’s that all about, you might ask? I am going to turn to this piece at Yanko Design to explain:

“Vision Pro is a new kind of Computer,” says Tim Cook as he reveals the mixed reality headset for the very first time. “It’s the first Apple product you look through, and not at,” he adds, marking Apple’s shift to Spatial Computing. What’s Spatial Computing, you ask? Well, the desktop was touted as the world’s first Personal Computer, or PC as we so ubiquitously call it today. The laptop shrank the desktop to a portable format, and the phone shrank it further… all the way down to the watch, that put your personal computer on your wrist. Spatial Computing marks Apple’s first shift away from Personal Computing, in the sense that you’re now no longer limited by a display – big or small. “Instead, your surroundings become a canvas,” Tim summarizes, as he hands the stage to VP of Design, Alan Dye. Spatial Computing marks a new era of computing where the four corners of a traditional display don’t pose any constraints to your working environment. Instead, your real environment becomes your working environment, and just like you’ve got folders, windows, and widgets on a screen, the Vision Pro lets you create folders, windows, and widgets in your 3D space. Dye explains that in Spatial Computing, you don’t have to minimize a window to open a new one. Just simply drag one window to the side and open another one. Apple’s VisionOS turns your room and your visual periphery into an OS, letting you create multiple screens/windows wherever you want, move them around, and resize them. Think Minority Report or Tony Stark’s holographic computer… but with a better, classier interface.

Spatial computing is something bigger than the new hardware from Apple. It’s talking about changing the way we do computing.

You see, since the 1980s we’ve been stuck with the WIMP paradigm in computing: windows, icons, menus, pointer. We have had it for so long we don’t even think about it anymore. Even when we went from desktop computing to smartphones and tablets, we more or less kept this paradigm.

With spatial computing, we can think out of the box. Get away from the desktop. You are no longer looking AT a computer: you are IN a computer.

Apple is still kinda stuck with WIMP in some of the demos they have for the Vision Pro. I get that: it’s going to take some time for all of us to make the shift. Even Apple. But the shift will come.

The shift may not even come primarily from Apple the software company. I believe one of the reasons Apple launched the device the way it did — limited and at WWDC — is to get developers excited about it. Already some big name software companies have signed on. And if I read this piece correctly, then there could be a rush of developers from everywhere to come out with software for the device. Perhaps much of that could be non-WIMP software.

Much of this will depend on VisionOS and what it is capable of supporting. But from everything I read, it sounds like it provides spatial computing flawlessly on the Vision Pro.

And perhaps spatial computing is not just for the Vision Pro. Currently Apple allows you to do handoffs from one device to another. I could see that happening with the Vision Pro, your Mac, and your phone. You might be working on something on your Mac that you want to take a break from, so you put on your Vision Pro to play a game. Then you get an idea, so you work on it in the Vision Pro, rather than taking off your goggles. Likewise, you may need to take a break from the Vision Pro, so you do a handoff to your Mac or your Apple TV to watch the rest of a movie from that device.

I can also see bits of VisionOS creeping into MacOS and iOS and even WatchOS. If VisionOS breaks the WIMP paradigm virtually, perhaps it could do the same thing physically. All of Apple’s devices could be spatial computing devices.

Spatial computing promises to be a new big thing in computing. I’m excited for this. I hope Apple and others can bring it to fruition. (Pun intended.)

P.S. For more on how impressive the Vision Pro is, I recommend this: Every Single Sensor inside the Apple Vision Pro and What It’s Individually Designed To Do – Yanko Design

What I find interesting in Mainframes and CI/CD (tech update June 2023)

I have so many good pieces on IT that I’ve got to break them down into subcategories. Last month I shared things on cloud tech. This month the focus is on mainframes and CI/CD (things I’ve been working on over the last year or more).

Mainframes: I’ve been doing work on mainframe modernization, which has me focusing on tools around that, among other things.

z/D&T is one of those tools. Here’s a good overview of it. Here’s a piece on deploying an IBM mainframe z/OS application on AWS with IBM z/D&T. This is a good IBM zD&T Guide and Reference. More on z/D&T there. I like it.

DBB is another tool in use. Here’s an intro. Here’s something on using it to migrate data sets.

Not all these tools are IBM related. Endevor is another tool to study. Here’s something on how Endevor’s change manager bridges enterprise Git. Here’s something on mapping strategies using Endevor Bridge for Git. Also: how to create a package in Endevor, and how to review and approve a package in Endevor. Plus something on Endevor pricing and setup.

Linux is kinda a tool (I guess?) on mainframes. This is a good explainer on Linux on IBM Z. More on IBM z linux here.

Finally, here’s a good article on mainframe modernization patterns. More on mainframe application modernization beyond banking from IBM. Still more on mainframe modernization. Also: using collaboration, not migration, to modernize your mainframe applications with the cloud.

CI/CD: I’ve also been focusing on work around CI/CD. So there’s been lots of work using Jenkins. Here’s a piece on how to create a ci/cd pipeline with kubernetes and jenkins. Also a tutorial for installing jenkins on IBM Cloud. Check out this tutorial on setting up a ci/cd pipeline with jenkins to deploy multi arch image on ocp on linuxone and x86. That was especially good.  Here’s something on  blue/green deployment with docker github and jenkins.

Here’s a side by side of github actions vs jenkins which should you consider. This helps if you want to know if you should use a jenkinsfile or not. Check out this good jenkins ci/cd review. More on Configuring a jenkins agent on openshift.  Here’s how to add z/OS to your jenkins build farm. This was a  good jenkins groovy tutorial.

Related to the Jenkins work is work around IBM’s UrbanCode Deploy (UCD). Here’s a tutorial on IBM UrbanCode Deploy. Another tutorial on how to build a pipeline with Jenkins and Dependency Based Build on UCD. Something on how to integrate UCD with Jenkins for continuous integration is here.

Lastly, here are some things to consider regarding installing UCD. More on how to integrate UCD and Jenkins for CI/CD. Here’s what’s needed in terms of system requirements for UCD.

Finally, here’s some more on continuous testing in devops. More on ci/cd pipelines. And last, another mainframe tool for CI/CD:  Workflow.

Blackberry: a device once loved, now a film (and a great one)

I loved this film, just like I used to love my Blackberrys. If you loved yours, or the era of the Blackberry, or just want to see a great film, I recommend you see “Blackberry”.

There’s a number of ways you can watch this film. You can watch it just as a story of that weird era from the 90s until the early 2000s. Or as a story about the tech industry in general. Or a story about Canada. It’s all those stories, and more.

To see what I mean, here’s a piece in the CBC with a Canadian angle: New film BlackBerry to explore rise and fall of Canadian smartphone. While this one talks about the tech industry as well as the cultural elements of it: ‘BlackBerry’ Is a Movie That Portrays Tech Dreams Honestly—Finally | WIRED

But besides all that, it’s a great character study of the three main characters: Mike Lazaridis (Jay Baruchel ), Jim Balsillie (Glenn Howerton) and Doug Fregin (Matt Johnson). The arc of Lazaridis in the movie was especially good, as he moves from the influence of Fregin to Balsillie in his quest to make a great device. It’s perhaps appropriate that Balsillie has devil horns in the poster above, because he does tempt Lazaridis with the idea of greatness. And Lazaridis slowly succumbs and physically transforms in the film from a Geek to a Suit.

That’s not to say Balsillie is a caricature. Under all his rage and manipulation, you can see a human also struggling with ambition and who is aware of the great risks he is taking. His arc might not be as dramatic as Lazaridis’s in the movie, but it is a rise and fall of significance.

As for Fregin, his character is important but he doesn’t change the way Lazaridis and Balsillie do. But if Balsillie is the devil on the shoulder of Lazaridis, then Fregin is the angel. He provides a reminder throughout the film of what Lazaridis lost in his transformation. (And the description of his life at the end of the film is *chef’s kiss* good.)

The film is a dramatization, but it gets so much right.  Lazaridis and Balsillie were crushed in the end, just like in the film. Balsillie lost his dream of NHL ownership, and Lazaridis lost his claim of making the best smartphone in the world. There’s a part of the film when Balsillie asks: I thought you said these were the best engineers in the world?? and Lazaridis replies: I said they were the best engineers in Canada. That part is a transition in the film, but also sums up the film and the device in many ways.  Their ambition and hubris allowed them to soar, but eventually they met their own nemeses whether they came in the form of Apple or the NHL Board of Directors or the SEC.

As an aside to all that, it’s fascinating to see the depiction of Blackberry defeating Palm/US Robotics. In the early 90s Palm and US Robotics (who later merged) were dominant tech players. Blackberry surpassed them and left them in the dust. Just like Apple left RIM/Blackberry in the dust when they launched the iPhone. (Google also contributed to that with Android.)

Speaking of Apple, it was interesting to see how backdating stock options helped sink Balsillie. He was not alone in such financial maneuvering. Apple and Jobs also got into trouble for backdating options. I assume this practice might have been more common and less black and white than it comes across in the film.

In the film, there is a certain prejudice Lazaridis has about cheap devices, especially those from China.  It’s just that, though: a prejudice. That prejudice was once held against Japan and Korea too, because those countries made cheap devices for Western markets at first. But Japan and Korea went on to produce high end technology and China has too. The Blackberry Storm from China might have been substandard, but Apple has done quite fine sourcing their products from that country. Something to keep in mind.

I suspect I will watch the film many times in my lifetime. Heck, a good part of my life IS in the film as someone involved with the tech industry at the time. That business is my business. That culture is my culture. That country is my country.

None of that has to apply to you, though. If you want to watch a superb film, grab “Blackberry”.

What I find interesting in cloud tech, May 2023

It’s long past time to write about the IT stuff I’ve been working on. So much so that I have too much material to share, and rather than make an endless post, I’ll focus on cloud. I’ve mostly been doing work on IBM Cloud, but I have some good stuff on AWS and Azure. (Sorry GCP, no love for you this time.)

IBM Cloud: most of the work I’ve been doing on IBM cloud has been hands on, as you can tell from these links:

Other clouds: Not so much hands on, but interesting.

What’s cool? The interactive Open Infrastructure Map is cool

I can write what the Open Infrastructure Map is by using the words of its creator:

Open Infrastructure Map is a view of the world’s infrastructure mapped in the OpenStreetMap database. This data isn’t exposed on the default OSM map, so I built Open Infrastructure Map to visualise it.

But the best thing I can do is tell you to head over to it and zoom in on areas you know. Being from Cape Breton, I did just that, and I was wonderfully surprised by how much detail was there. I think you will feel the same.

Highly recommended.

A plethora of good links on AI

There’s still an overwhelming amount of material being written on AI. Here are a few lists of the ones I found most interesting:

ChatGPT: ChatGPT (3 and 4) still dominates much of the discussion I see around AI. For instance:

Using AI: people are trying to use AI for practical purposes, as those last few links showed. Here are some more examples:

AI and imagery: not all AI is about text. There’s quite a lot going on in the visual space too. Here’s a taste:

AI and the problems it causes: there’s lots of risks with any new technology, and AI is no exception. Cases in point:

Last but not least: 

The Gartner Hype Cycle: one good way to think about technological hype

Below is the Gartner hype cycle curve with its famous five phases:

For those not familiar with it, the chart below breaks it down further and helps you see it in action. Let’s examine that.

Chances are if you are not working with emerging IT and you start hearing about a hyped technology (e.g., categories like blockchain, AI), it is in the Peak of Inflated Expectations phase. At that stage the technology starts going from discussions in places like Silicon Valley to write-ups in the New York Times. It’s also in that phase that two other things happen: “Activity beyond early adopters” and “Negative press begins”.

That’s where AI — specifically generative AI — is: lots of write ups have occurred, people are playing around with it, and now the negative press occurs.

After that phase technologies like AI start to slide down into my favorite phase of the curve: the Trough of Disillusionment. It’s the place where technology goes to die. It’s the place where technology tries to cross the chasm and fails.

See that gap on the Technology Adoption Lifecycle curve? If technology can get past that gap (“The Chasm”) and get adopted by more and more people, then it will move on through the Gartner hype curve, up the Slope of Enlightenment and onto the Plateau of Productivity. As that happens, there is less talking and more doing when it comes to the tech.

That said, my belief is that most technology dies in the Trough. Most technology does not and cannot cross the chasm. Case in point, blockchain. Look at the hype curve for blockchain in 2019:

At the time people were imagining blockchain everywhere: in gaming, in government, in supply chain…you name it. Now some of that has moved on to the end of the hype cycle, but most of it is going to die in the Trough.

The Gartner Hype Curve is a useful way to assess technology that is being talked about, as is the Technology Adoption Curve. Another good way of thinking about hype can be found in this piece I wrote here. In that piece I show there are five levels of hype: Marketing Claims, Exaggerated Returns, Utopian Futures, Magical Thinking, and Othering. For companies like Microsoft talking about AI, the hype levels are at the level of Exaggerated Returns. For people writing think pieces on AI, the hype levels go from Utopian Futures to Othering.

In the end, however you assess it, it’s all just hype. When a technology comes out, assess it for yourself as best as you can. Take anything being said and assign it a level of hype from 1 to 5. If you are trying to figure out if something will eventually be adopted, use the curves above.

Good luck!

Two exciting new things from Apple

First up, the new iPhone 14 Plus in yellow. Love it! Apple is wise to assign unique colours to new hardware. It’s a smart way to attract people to a new product, and all those new selfies with the new yellow phone are likely to drive up more sales. (I have been known to fall for this sales approach. :))

Also new is Apple Music Classical. I confess, I didn’t understand why Apple was splitting off Classical music this way. After I read more about it, it makes sense. I hope it will lead to people listening to more classical music.

Good work, Apple!

Paul Kedrosky & Eric Norlin of SKV know nothing about software and you should ignore them

Last week Paul Kedrosky & Eric Norlin of SKV wrote this piece, Society’s Technical Debt and Software’s Gutenberg Moment, and several smart people I follow seemed to like it and think it worthwhile. It’s not.

It’s not worthwhile because Kedrosky and Norlin seem to know little if anything about software. Specifically, they don’t seem to know anything about:

  • software development
  • the nature of programming or coding
  • technical debt
  • the total cost of software

Let me wade through their grand and woolly pronouncements and focus on that.

They don’t understand software development: For Kedrosky and Norlin, what software engineers do is predictable and grammatical. (See chart, top right).

To understand why that is wrong, we need to step back. The first part of software development and software engineering should start with requirements. It is a very hard and very human thing to gather those requirements, analyze them, and then design a system around them that meets the needs of the person(s) with the requirements. See where architects are in that chart? In the Disordered and Ad hoc part in the bottom left. Good IT architects and business analysts and software engineers also reside there, at least in the first phase of software development. Getting to the predictable and grammatical section, which comes in later phases, takes a lot of work. It can be difficult and time consuming. That is why software development can be expensive. (Unless you do it poorly: then you get a bunch of crappy code that is hard to maintain or has to be dramatically refactored and rewritten because of the actual technical debt you incurred by rushing it out the door.)

Kedrosky and Norlin seem to exclude that from the role of software engineering. For them, software engineering seems to be primarily writing software. Coding in other words. Let’s ignore the costs of designing the code, testing the code, deploying the code, operating the code, and fixing the code. Let’s assume the bulk of the cost is in writing the code and the goal is to reduce that cost to zero.

That’s not just my assumption: it seems to be their assumption, too. They state: “Startups spend millions to hire engineers; large companies continue spending millions keeping them around. And, while markets have clearing prices, where supply and demand meet up, we still know that when wages stay higher than comparable positions in other sectors, less of the goods gets produced than is societally desirable. In this case, that underproduced good is…software”.

Perhaps that is how they do things in San Francisco, but the rest of the world moved on from that model ages ago. There are reasons that countries like India have become powerhouses in software development: they have good software developers, and those developers are relatively low cost. So when Kedrosky and Norlin say: “software is chugging along, producing the same thing in ways that mostly wouldn’t seem vastly different to developers doing the same things decades ago….(with) hands pounding out code on keyboards”, they are wrong, because the nature of developing software has changed. One of the ways it has changed is that the vast majority of software is written in places that have the lowest cost software developers. And when they say “that software cannot reach its fullest potential without escaping the shackles of the software industry, with its high costs, and, yes, relatively low productivity”, they seem to be locked in a model where software is written the way it is in Silicon Valley by Stanford-educated software engineers. That model does not match the real world of software development. Already the bulk of the cost of writing code in most of the world has been reduced not to zero, but to a very small number compared to the cost of writing code in Silicon Valley or North America. Those costs have been wrung out.

They don’t understand coding: Kedrosky and Norlin state: “A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment”. In their piece they use an example of AI writing some Python code that can “open a text file and get rid of all the emojis, except for one I like, and then save it again”. Even they know this is “a trivial, boring and stupid example” and say “it’s not complex code”.
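To make that concrete, here is roughly what such trivial code looks like. This is my own minimal sketch in Python, not the code from their piece; the file name, the emoji ranges and the one emoji to keep are all placeholder choices.

import re

KEEP = "🙂"  # placeholder: the one emoji to keep

def strip_emojis(path: str) -> None:
    # Read the file, drop characters in the common emoji code point ranges
    # (except the one we want to keep), and save the result in place.
    with open(path, encoding="utf-8") as f:
        text = f.read()
    emoji = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")
    cleaned = "".join(ch for ch in text if ch == KEEP or not emoji.match(ch))
    with open(path, "w", encoding="utf-8") as f:
        f.write(cleaned)

strip_emojis("notes.txt")  # hypothetical input file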

Here’s the problem with writing code, at least with the current AI. There are at least three difficulties that AI code generators suffer from: triviality, incorrectness, and prompt skill.

First, the problem of triviality. It’s true: AI is good at making trivial code. It’s hard to know how machine learning software produces this trivial code, but it’s likely because there are lots of examples of such code on the Internet for it to train on. If you need trivial code, AI can quickly produce it.

That said, you don’t need AI to produce trivial code. The Internet is full of it. (How do you think the AI learned to code?) If someone who is not a software developer wants to learn how to write trivial code they can just as easily go to a site like w3schools.com and get it. Anyone can also copy and paste that code and it too will run. And with a tutorial site like w3schools.com the explanation for the code you see will be correct, unlike some of the answers I’ve received from AI.

But what about non-trivial code? That’s where we run into the problem of  incorrectness. If someone prompts AI for code (trivial or non-trivial) they have no way of knowing it is correct, short of running it. AI can produce code quickly and easily for you, but if it is incorrect then you have to debug it. And debugging is a non-trivial skill. The more complex or more general you make your request, the more buggy the code will likely be, and the more effort and skill you have to contribute to make it work.
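To show the kind of bug you then have to catch yourself, here is a hypothetical example of my own: a plausible-looking answer to the same emoji request that appears to work but is subtly wrong, because it strips every non-ASCII character, so accented names and non-English text vanish along with the emojis.

def strip_emojis_badly(text: str) -> str:
    # Removes emojis by keeping only ASCII, which also deletes accented
    # letters, non-English text and typographic quotes. It looks fine in a
    # quick test and fails quietly on real data.
    return text.encode("ascii", errors="ignore").decode("ascii")

print(strip_emojis_badly("Café visit with Zoë 🙂"))  # prints "Caf visit with Zo " (accents and emoji silently gone)

Spotting that, and fixing it, is exactly the debugging skill the prompt writer is assumed not to need.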

You might say: incorrectness can be dealt with by better prompting skills. That’s a big assumption, but let’s say it’s true. Now you get to the third problem. To get correct and non-trivial outputs — if you can get them at all — you have to craft really good prompts. That’s not a skill just anyone will have. You will have to develop specific skills — prompt engineering skills — to be able to have the AI write Python or Go or whatever computer language you need. At that point the prompt to produce that code is a form of code itself.

You might push back and say: sure, the prompts might be complex, but it is less complicated than the actual software I produce. And that leads to the next problem: technical debt.

They don’t understand technical debt: when it comes to technical debt, Kedrosky and Norlin have two problems. First, they don’t understand the idea of technical debt! In the beginning of their piece they state: “Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.”

That’s not how those of us in the IT community define it. Technical debt is not a lack of software supply. Even Wikipedia knows better: “In software development, technical debt (also known as design debt or code debt) is the implied cost of future reworking required when choosing an easy but limited solution instead of a better approach that could take more time”. THAT is technical debt.
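Here is that definition in miniature, as a hypothetical Python sketch of my own (the function names and tax rates are invented): the quick version ships faster, and the rework it forces later is the debt.

# The "easy but limited" solution: fast to write, ships today.
def price_with_tax(price: float) -> float:
    return round(price * 1.13, 2)  # hard-coded 13% tax

# The "better approach that could take more time": later changes to rates
# or regions are cheap, but it costs more to build up front.
TAX_RATES = {"ON": 0.13, "AB": 0.05, "QC": 0.14975}  # illustrative rates

def price_with_tax_by_region(price: float, region: str) -> float:
    return round(price * (1 + TAX_RATES[region]), 2)

Every order that flows through the first version adds to the interest you pay when the rework finally happens.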

One of the things I do in my work is assess technical debt, either in legacy systems or new systems. My belief is that once AI can produce code that is non-trivial and correct based on prompts, we are going to get an explosion of technical debt. We are going to get code that appears to solve a problem, and does so in a volume of Python (or Java or Go or what have you) that the prompt engineer generated and does not understand. It will be like copy-and-paste code amplified. Years from now people will look at all this AI-generated code and wonder why it is the way it is and why it works the way it does. It will take a bunch of counter AI to translate this code into something understandable, if that is even possible. Meanwhile companies will be burdened with higher levels of technical debt, accelerated by the use of AI-developed software. If anything, AI is going to make things much worse.

They don’t understand the total cost of software:  Kedrosky and Norlin included this fantasy chart in their piece.

First off, most people or companies purchase software, not software engineers. That’s the better comparison to hardware.  And if you do replace “Software engineers” with software, then in certain areas of software this chart has already happened. The cost of software has been driven to zero.

What drove this? Not AI. Two big things that drove this are open source and app stores.

In many cases, open source drove the (licensing) cost of software to zero. For example, when the web first took off in the 90s, I recall Netscape sold their web server software for $10,000. Now? You can download and run web server software like nginx on a Raspberry Pi for free. Heck, you can write your own web server using node.js.
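To underline how far that licensing cost has fallen: a basic web server is now a few lines of a language’s standard library. Here is a minimal sketch using Python’s http.server rather than node.js, only to keep the examples in this post in one language.

# A minimal static file server from the Python standard library:
# no licence fee, and it runs happily on a Raspberry Pi.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    # Serve files from the current directory on port 8080 until interrupted.
    HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler).serve_forever()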

Likewise with app stores. If you wanted to buy software for your PC in the 80s or 90s, you had to pay significantly more than 99 cents for it. It certainly was not free. But the app stores drove the expectation people had that software should be free or practically free. And that expectation drove down the cost of software.

Yet despite developments like open source and app stores driving the cost of software close to zero, people and organizations are still paying plenty for the “free” software. And you will too with AI software, whether it’s commercial software or software for your personal use.

I believe that if you have AI generating tons of free personal software, then you will get a glut of crappy apps and other software tools. If you think it’s hard to find good personal software now, wait until that happens. There will still be good software, but developing it will cost money, and that money will be recovered somehow, just like it is today with free apps with in-app purchases or apps that steal your personal information and sell it to others. And people will still pay for software from companies like Adobe. They are paying for quality.

Likewise with commercial software. There is tons of open source software out there. Most of it is wisely avoided in commercial settings. However the good stuff is used and it is indeed free to licence and use.

However, the total cost of software is more than the licencing cost. Bad AI software will need more capacity to run and more people to support it, just like bad open source does. And good AI software will need people and services to keep it going, just like good open source does. Some form of operations, even if it is AIOps (another cost), will need expensive humans to ensure the increasing levels of quality required.

So AI can churn out tons of free software. But the total cost of such software will go elsewhere.

To summarize, producing good software is hard. It’s hard to figure out what is required, and it is hard to design and build and run it to do what is required. Likewise, understanding software is hard. It’s called code for a reason. Bad code is tough to figure out, but even good code that is out of date or used incorrectly can have problems, and solving those problems is hard. And last, free software has other costs associated with it.

P.S. It’s very hard to keep up and counter all the hot takes on what AI is going to do for the world. Most of them I just let slide or let others better than me deal with. But I wanted to address this piece in particular, since it seemed influential and un-countered.

P.P.S. Besides all of the above, they also made some statements that just had me wondering what they were thinking. For example, when they wrote: “This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation.” Pure hype.

Or this: “Software is misunderstood. It can feel like a discrete thing, something with which we interact. But, really, it is the intrusion into our world of something very alien. It is the strange interaction of electricity, semiconductors, and instructions, all of which somehow magically control objects that range from screens to robots to phones, to medical devices, laptops, and a bewildering multitude of other things.” I mean, what is that all about?

And this:  “The current generation of AI models are a missile aimed, however unintentionally, directly at software production itself”. Pure bombast.

Or this hype: “They are “toys” in that they are able to produce snippets of code for real people, especially non-coders, that one incredibly small group would have thought trivial, and another immense group would have thought impossible. That. Changes. Everything.”

And this is flat out wrong: “This is just the beginning (and it will only get better). It’s possible to write almost every sort of code with such technologies, from microservices joining together various web services (a task for which you might previously have paid a developer $10,000 on Upwork) to an entire mobile app (a task that might cost you $20,000 to $50,000 or more).”


What is AI Winter all about and why do people who’ve worked in AI tend to talk about it?

It might surprise people, but work in AI has been going on for some time. In fact it started as early as the mid-1950s. From the 50s until the 70s, “computers were solving algebra word problems, proving theorems in geometry and learning to speak English”. They were nothing like OpenAI’s ChatGPT, but they were impressive in their own way. Just like now, people were thinking the sky’s the limit.

Then three things happened: the first AI winter from 1974 until 1980, the boom years from 1980-1987, and then the next AI winter from 1987-1993. I was swept up in the second AI winter, and like the first one, there was a combination of hitting a wall in terms of what the technology could do followed by a drying up of funding.

During the boom times it seemed like there would be no stopping AI and it would eventually be able to do everything humans can do and more. It feels that way now with the current AI boom. Companies like OpenAI and others are saying the sky’s the limit and nothing is impossible. But just like in the previous boom eras, I think the current AI boom will hit a wall with the technology (we are seeing some of it already). At that point we may see a reduction in funding from companies like Microsoft, Google and others (just like the pullback we are seeing from them on voice assistant technology like Alexa and Siri).

So yes, the current AI technology is exciting. And yes, it seems like there is no end to what it can do. But I think we will get another AI winter sooner rather than later, and during this time work will continue in the AI space, but you’ll no longer be reading news about it daily. The AI effect will also occur, and the work being done by companies like OpenAI will just get incorporated into the everyday tools we use, just like autocorrect and image recognition are now things we take for granted.

P.S. If you are interested in the history of the second AI winter, this piece is good.

What is the AI effect and why should you care?

Since there is so much talk about AI now, I think it is good for people to be familiar with some key ideas concerning AI. One of these is the AI effect. The cool AI you are using now, be it ChatGPT or DALL-E or something else, will eventually get incorporated into some commonplace piece of IT and you won’t even think much of it. You certainly won’t be reading about it everywhere. If anything you and I will complain about it, much like we complain about autocorrect.

So what is the AI Effect? As Wikipedia explains:

“The AI effect” is that line of thinking, the tendency to redefine AI to mean: “AI is anything that has not been done yet.” This is the common public misperception, that as soon as AI successfully solves a problem, that solution method is no longer within the domain of AI. Geist credits John McCarthy giving this phenomenon its name, the “AI effect”.

McCorduck calls it an “odd paradox” that “practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the ‘failures’, the tough nuts that couldn’t yet be cracked.”[5]

It’s true. Many things over the years that were once thought of as AI are now considered simply software or hardware, if we even think of them at all. Whether it is winning at chess, recognizing your voice, or recognizing text in an image, these things are commonplace now, but they were once lofty goals for AI researchers.

The AI effect is a key idea to keep in mind when people are hyping any new AI as the thing that will change everything. If the new AI becomes useful, we will likely stop thinking it is AI.

For more on the topic, see: AI effect – Wikipedia

How good is the repairable phone from Nokia?

Nokia has a new phone out, the G22, which you can repair on your own. When I heard that, I thought: finally. And if you read what Nokia writes, you might even get excited, like I initially did. If you feel that way too, I recommend you read this in Ars Technica: the Nokia G22 pitches standard low-end phone design as repairable. The key thing they note: “The G22 is a cheap phone that isn’t water-resistant and has a plastic back.” The article goes on: “But if you ask, “What deliberate design decisions were made to prioritize repair?” you won’t get many satisfying answers.”

I get the sense that Nokia has made this phone for a certain niche audience, as well as for regulators demanding repairable phones. I hope I am wrong. I hope that Nokia and others strive to make repairability a key quality of their future phones. That’s what we need.

No, prompt engineering is not going to become a hot job. Let a former knowledge engineer explain

With the rise of AI, LLMs, ChatGPT and more, a new skill has arisen. The skill involves knowing how to construct prompts for the AI software in such a way that you get an optimal result. This has led a number of people to start saying things like this: prompt engineer is the next big job. I am here to say this is wrong. Let me explain.

I was heavily into AI in the late 20th century, just before the last AI winter. One of the hot jobs at that time was going to be knowledge engineer (KE). A big part of AI then was the development of expert systems, and the job of the KE was to take the expertise of someone and translate it into rules that the expert system could use to make decisions. Among other things, part of my role was to be a KE.
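To give a sense of what that translation produced, here is a toy sketch of my own of expert-system-style rules in Python; the loan domain, the thresholds and the outcomes are all invented for illustration.

# A toy rule base of the kind a knowledge engineer would capture from an
# expert: explicit, ordered if/then rules (domain and numbers invented).
RULES = [
    (lambda a: a["credit_score"] < 600, "decline"),
    (lambda a: a["debt_ratio"] > 0.45, "refer to underwriter"),
    (lambda a: a["credit_score"] >= 750, "approve"),
]

def assess(applicant: dict) -> str:
    for condition, outcome in RULES:
        if condition(applicant):
            return outcome
    return "refer to underwriter"  # default when no rule fires

print(assess({"credit_score": 780, "debt_ratio": 0.20}))  # approve

The hard part was never typing rules like these in: it was getting the expert to state them precisely in the first place.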

So what happened? Well, first off, AI winter happened. People stopped developing expert systems and went and took on other roles.  Ironically, rules engines (essentially expert systems) did come back, but all the hype surrounding them was gone, and the role of KE was gone too. It wasn’t needed. A business analyst can just as easily determine what the rules are and then have a technical specialist store that in the rules engine.

Assuming tools like ChatGPT were to last, I would expect the creation of prompts for them to be taken on by business analysts and technical specialists. Business as usual, in other words. No need for a “prompt engineer”.

Also, you should not assume things like ChatGPT will last. How these tools work is highly volatile; they are not well structured things like programming languages or SQL queries. The prompts that worked on them last week may result in nothing a week later. Furthermore, there are so many problems with the new AI that I could easily see them falling into a new AI winter in the next few years.

So, no, I don’t think Prompt Engineering is a thing that will last. If you want to update your resume to say Prompt Engineer after you’ve hacked around with one of the current AI tools out there, knock yourself out. Just don’t get too far ahead of yourself and think there is going to be a career path there.

Some thoughts on Palm and the rise of the handheld computer

This tweet yesterday got me thinking:

Two big tech things happened in the late 90s: one was the adoption of the Web, and two was the adoption of handheld computers. While Apple and its Newton may have been the first to go big in this area, it was Palm and its Pilot device that was truly successful. The Newton came out in 1993 and was killed by Jobs in 1998, while the Palm Pilot came out in 1997 and sold like gangbusters. (Interestingly, the Blackberry came out in the late 90s too.)

To appreciate why the Palm Pilot was so successful, it helps to know how things were back then. In the 90s we were still in the era of rolodexes and Dayrunners. Every year I would head down to the local paper shop (in Toronto I went to the Papery on Cumberland) and get my latest paper refills for the year and manually update my calendar and pencil things in. (And god forbid you ever lost it.) The Palm Pilot promised to get rid of all that. You could enter it all in the hand held device and then sync it up with your computer. It solved so many problems.

It also avoided the problems the Newton had. Unlike the Newton, its handwriting recognition was simpler, which made it better. It was relatively cheap, and much cheaper than the Newton. And it worked with the PC. All those things also helped with its success.

What did not help Palm was a deluge of competition in this space, with everyone from Sony to Microsoft to RIM to deal with. They continued to make good devices like the Tungsten, but by then I had already moved over to the Blackberry. I wasn’t alone in this regard.

I still have a Palm Pilot. It’s a well designed device, even if the functionality it possesses seems quaint now. But back then, it was a force of change. It led the revolution in computing whereby instead of sitting in front of a computer, we carried one around in our hands. I would not have guessed it at the time, as I looked up my calendar or made my notes. I thought it was just a personal digital assistant: it turned out to be a world changer.


On Fake quitting, real layoffs, and worker unhappiness

It’s been a tumultuous time when it comes to the current workplace, or at least business writers think so. From quiet quitting to the Great Resignation, writers can’t stop coining terms about pseudo quitting. So we have pieces on quiet quitting, on rage applying and my new favorite, calibrated contributing. Even places like the WSJ join in with this piece on High-Earning Men Who Are Cutting Back on Their Working Hours. It’s as if readers of business magazines and websites cannot get enough pieces on worker unhappiness.

That was the before times, though. Now workers, at least IT workers, have something to be truly unhappy about: being laid off. You can read about it everywhere, from the Verge to the New York Times. It seemed like every IT company was suddenly shedding workers, from Facebook/Meta, to Microsoft, to Salesforce, to Google… even IBM, which had a decent year compared to the rest of the list. The reasons for the layoffs were varied. Facebook/Meta continues to have a bad business model. Others like Microsoft went on a hiring bender and the layoffs are almost a hangover. There’s also been talk that some of the companies were just following the others and trying to look tough or something. One tech company that did not lay anyone off: Apple.

Layoffs suck. If you get caught up in a layoff program, you can find many guides as to what to do. Here is one layoff guide: What to do before during and after getting laid off.

If you only pay attention to the tech job market, you may assume it reflects the job market in general. But if you read this, Mass Layoffs or Hiring Boom? What’s Actually Happening in the Jobs Market, you get a different picture. The job market is a jumble now due to the fallout of the pandemic. I suspect it is going to take another year to settle down.

In the meantime, good luck with your work. Things aren’t as bad as they may appear. Despite all the think pieces and the tech layoffs. Stay positive.

Fake beaches! Fake lawyers! ChatGPT! and more (what I find interesting in AI, Feb 2023)


There is so much being written about AI that I decided to blog about it separately from other tech. Plus AI is so much more than just tech. It touches on education, art, the law, medicine…pretty much anything you can think of. Let me show you.

Education: there’s been lots said about how students can (and do?) use ChatGPT to cheat on tests. This piece argues that this is a good time to reassess education as a result. Meanwhile, this Princeton student built GPTZero to detect AI-written essays, so I suspect some people will also just want to crack down on the use of AI. Will that stop the use of AI? I doubt it. Already companies like Microsoft are looking to add AI technology to software like Word. Expect AI to flood and overwhelm education, just like calculators once did.

Art: artists have been adversely affected by AI for a while. Some artists decided to rise up against it by creating anti-AI protest work. You can read about that here. It’s tough for artists to push back on AI abuses: they don’t have enough clout. One org that will not have a problem with clout is Getty Images. They’ve already started to fight back against AI with a lawsuit. Good.

Is AI doing art a bad thing? I’ve read many people saying it will cause illustrators and other professional artists to lose their jobs. Austin Kleon has an interesting take on that. I think he is missing the point for some artists, but it’s worth reading.

Work: besides artists losing their jobs, others could lose theirs as well. The NYPost did a piece on how ChatGPT could make this list of jobs obsolete. That may be shocking to some, but for people like me who have been in IT for some time, it’s just a fact that technology takes away work. Many of us embrace that, so when AI tools come along and do coding, we say “Yay!”. In my experience, humans just move on to provide business value in different ways.

The law: one place I wish people would be more cautious with using AI is in the law. For instance, we had this happen: an AI robot lawyer was set to argue in court. Real lawyers shut it down. I get it: lawyers are expensive and AI can help some people, but that’s not the way to do it. Another example is this, where you have AI generating wills. Needless to say, it has a way to go. An even worse example: Developers Created AI to Generate Police Sketches. Experts Are Horrified. Police are often the worst abusers of AI and other technology, sadly.

Medicine: AI can help with medicine, as this shows. Again, like the law, doctors need to be careful. But that seems more promising.

The future and the present: if you want an idea of where AI is going, I recommend this piece in technologyreview and this piece in WaPo.

Meanwhile, in the present, Microsoft and Google will be battling it out this year. Microsoft is in the lead so far, but reading this, I am reminded of the many pitfalls ahead: Microsoft’s new AI Prometheus didn’t want to talk about the Holocaust. Yikes. As for Google, reading this blog post of theirs on their new AI tool Bard had me thinking it would be a contender. Instead, it was such a debacle that even Googlers were complaining about it! I am sure they will get it right, but holy smokes.

Finally: this is what AI thinks about Toronto. Ha! As for that beach I mentioned, you will want to read here: This beach does not exist.

(Image above: ChatGPT logo from Wikipedia)


The rise and fall of the iPod

Last week I wrote about the Lisa and the rise of the Macintosh. While I was doing that, I came across this list of iPod models, which included these fun facts:

iPods …were once the largest generator of revenue for Apple Computer. After the introduction of the iPhone, the iOS-based iPod touch was the last remaining model of the product line until it was discontinued on May 10, 2022.

It’s remarkable that something that was once the leading generator of revenue is now dead. Blame the iPhone. More accurately, blame streaming. Whatever the real reason, a once great set of products is now gone.

I loved all the iPods I had, from the smallest Shuffle to an iPod Touch that was all but an iPhone. Of all the technologies that I’ve owned, they were among my favorites. Thanks for the songs and the memories, iPod.

(Image of 1st gen iPod Shuffle in its packaging. Via Wikipedia.)


Whatever happened to Pascal (the programming language)

In reading and writing about The Lisa computer yesterday, I was reminded of the Pascal programming language. As part of the development of the Lisa, one of the engineers (Larry Tesler), who was working on the user interface…

 …created an object-oriented variant of Pascal, called “Clascal,” that would be used for the Lisa Toolkit application programming interfaces. Later, by working with Pascal creator Niklaus Wirth, Clascal would evolve into the official Object Pascal.

Likely very few if any devs think about Pascal these days. Even I don’t think about it much. But back in the 70s and 80s it was a big deal. As Wikipedia explains:

Pascal became very successful in the 1970s, notably on the burgeoning minicomputer market. Compilers were also available for many microcomputers as the field emerged in the late 1970s. It was widely used as a teaching language in university-level programming courses in the 1980s, and also used in production settings for writing commercial software during the same period. It was displaced by the C programming language during the late 1980s and early 1990s as UNIX-based systems became popular, and especially with the release of C++.

When I was studying computer science in the early 80s, Pascal was an integral part of the curriculum. Once I started working at IBM, I moved on to develop software in other languages, but I had expected it to become a big deal in the field. Instead, C and then variant languages like C++ and Java went on to dominate computer programming. I’m not sure why. My belief at the time was universities had to pay big bucks for operating systems and Pascal compilers but they did not have to pay anything for Unix and C, and that’s what caused the switch. I can’t believe they switched from Pascal to C because C was a better language.

Forty years later, if you search for the top 20 programming languages, Pascal is towards the bottom of this list from IEEE, somewhere between Lisp and Fortran.  It’s very much a niche language in 2022 and it has been for some time.

For more on Pascal, I recommend the Wikipedia article: it’s extensive. If you want to play around with it, there’s a free version of it you can download.

(Image is an Apple Lisa 2 screenshot.  Photo Courtesy of David T. Craig. Computer History Museum Object ID 500004666)

It’s Lisa’s 40th birthday. Let’s celebrate!


The great Lisa has just turned 40! Apple’s Lisa, that is. To celebrate, the Computer History Museum (CHM) has done two great things. First, they have released the source code to the Lisa software. You can find it here. Second, they have published this extensive history on the groundbreaking machine, The Lisa: Apple’s Most Influential Failure.

Like the NeXT computer, the Lisa was a machine that tried to do too much too soon. And while it was not the success that Apple had hoped for, it did lead to great success later. That definitely comes across in the CHM piece.

It’s fascinating to compare the picture above with the one below (both from CHM). In the one above you can see the original Lisa (1) with the “Twiggy” floppy drive, which was unreliable and ditched in the later models, seen below. You can also see how the machine on the left (the original Macintosh) would come to take over from the machine on the right (the Lisa 2). It had many of the same features but at a much reduced price.

When you think of Apple computers, you likely think of one or more of those found in this List of Macintosh models. While not a Mac, the Lisa was the precursor of all those machines that came later, starting with the original Mac. It was the birth of a new form of personal computing.

Happy birthday, Lisa! You deserve to be celebrated.

For more on this, see this Hackaday piece on Open-Sourcing The Lisa, Mac’s Bigger Sister.


Sorry robots: no one is afraid of YOU any more. Now everyone is freaking out about AI instead


It seems weird to think there are trends when it comes to fearing technology. But thinking about it, there seem to be. For a while my sources of information kept providing me with stories about how frightening robots are. Recently that has shifted, and the focus has moved to how frightening AI is. Fearing robots is no longer trendy.

Well, trendy or not, here are some stories about robots that have had people concerned. If you have any energy left from being fearful of AI, I recommend them. 🙂

The fact that a city is even contemplating this is worrying: San Francisco Supervisors Vote To Allow Killer Robots. Relatedly, Boston Dynamics pledges not to weaponize its robots.

Not that robots need weapons to be dangerous, as this showed: chess robot breaks childs finger russia tournament. I mean who worries about a “chess robot”??

Robots can harm in other ways, as this story on training robots to be racist and sexist showed.

Ok, not all the robot stories were frightening. These three are more just of interest:

This was a good story on sewer pipe inspection that uses cable-tethered robots. I approve this use of robots, though there are some limitations.

I am not quite a fan of this development:  Your Next Airport Meal May Be Delivered By Robot. I can just see these getting in the way and making airports that much harder to get around.

Finally, here’s a  327 Square Foot Apartment With 5 Rooms Thanks to Robot Furniture. Robot furniture: what will they think of next?

(Image is of the sewer pipe inspection robot.)


My notes on failing to build a Mastodon server in AWS (they might help you)

Introduction: I have tried three times to set up a Mastodon server and failed. Despite abandoning this project, I thought I would do a write up since some people might benefit from my failure.

Background: during the recent commotion with Twitter, there was a general movement of people to Mastodon. During this movement, a number of people said they didn’t have a Mastodon server to move to. I didn’t either. When I read that Dan Sinker built his own, I thought I’d try that too. I’ve built many servers on multiple cloud environments and installed complex software in these environments. I figured it was doable.

Documentation: I had two main sources of documentation to help me do this:
Doc 1: docs.joinmastodon.org/admin/install/
Doc 2: gist.github.com/johnspurlockskymethod/da09c82e2ed8fabd5f5e164d942ce37c

Doc 1 is the official Mastodon documentation on how to build your own server. Doc 2 is a guide to installing a minimal Mastodon server on Amazon EC2.

Attempt #1: I followed Doc 2 since I was building it on an EC2 instance. I did not do the advised AWS pre-reqs, other than creating the security groups, since I was using Mailgun for SMTP and my domain was hosted elsewhere at Namecheap.

I launched a minimal Ubuntu 22.x server that was a t2.micro, I think (1 vCPU, 1 GiB of memory). It was in the free tier. I did create a swap disk.

I ran into a number of problems during this install. Some of the problems had to do with versions of the software that were backlevelled compared to Doc 1 (e.g. Ruby). Also, I found that I could not even get the server to start, likely because there just was not enough memory, even with the swap space. I should have entered “sudo -i” from the start, rather than putting sudo in front of the commands. Doing that in future attempts made things easier. Finally, I deleted the EC2 instance.

Attempt #2: I decided to do a clean install on a new instance. I launched a new EC2 instance that was not free and had 2 vCPU and 2 GiB of memory. I also used Doc 1 and referred to Doc 2 as a guide. This time I got further. Part of the Mastodon server came up, but I did not get the entire interface. When I checked the server logs (using: journalctl -xf -u mastodon-*) I could see error messages, but despite searching on them, I couldn’t find anything conclusive. I deleted this EC2 instance also.

Attempt #3: I wanted to see if my problems in the previous attempts were due to capacity limitations. I created a third EC2 instance that had 4 vCPU and 8 GiB of memory. This installation went fast and clean. However, despite that, I had the same type of errors as in the second attempt. At this point I deleted this third instance and quit.

Possible causes of the problem(s) and ways to determine that and resolve them:
– Attempt the installation process on a VM/instance on another cloud provider (Google Cloud, Azure, IBM Cloud). If the problem resolves, the cause could be something to do with AWS.
– Attempt this on a server running Ubuntu 20.04 or Debian 11, either on the cloud or a physical machine. If this resolves, it could be a problem with the version of Ubuntu I was running (22.x): that was the only image available to me on AWS.
– Attempt it using the Docker image version, either on my desktop or in the cloud.
– Attempt to run it on a much bigger instance. Perhaps even a 4 x 8 machine is not sufficient.
– See if the problem is due to my domain being hosted elsewhere in combination with an elastic IP address by trying to use a domain hosted on AWS.

Summary: There are other things I could do to resolve my problems and get the server to work, but in terms of economics: the Law of Diminishing Returns has set in, there are opportunity costs to consider, the sunk costs are what they are, and the marginal utility remaining for me is zero. I learned a lot from this, but even if I got it working, I don’t want to run a Mastodon server long term, nor do I want to pay AWS for the privilege. Furthermore, I don’t want to spend time learning more about Ruby, which I think is where the problem may originate. It’s time for me to spend my precious time on technologies that are more rewarding for me personally and professionally.

Lessons Learned: What did I learn from this?

– Mastodon is a complicated beast. Anyone installing it must have an excellent understanding of Linux/Unix. If you want to install it on AWS for free, you really must be knowledgeable. Not only that, it consists not only of its own software, but also nginx, Postgres, Redis and Ruby. Plus you need to be comfortable setting up SSL. If everything goes according to the doc, you are golden. If not, you really need an array of deep skills to solve any issues you have.

– Stick with the official documentation when it comes to installing Mastodon. Most of the many other pages I reviewed were out of date or glossed over things of note.

– Have all the information you need at hand. I did not have my Mailgun information available for the first attempt. Having it available for the second attempt helped.

– The certbot process in the official document did not work for me. I did this instead:
1) systemctl stop nginx.service
2) certbot certonly --standalone -d example.com (I used my own domain and my personal email and replied Y to other prompts.)
3) systemctl restart nginx.service

– Make sure you have port 80 open: you need it for certbot. I did not have it open initially for attempt 3 and that caused me problems. I needed to adjust my security group; there is a small sketch of scripting that fix after this list. (Hey, there are a lot of steps: you too will likely mess up on one or two. :))

– As I mentioned earlier, go from the beginning with: sudo -i

– Make sure the domain you set up points to your EC2 instance. Mine did not initially.
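If you run into the same port 80 problem and would rather script the security group change than click through the console, something like the following boto3 call should do it. This is a sketch under assumptions, not the exact steps I took: the security group ID is a placeholder, and opening port 80 to the world is only needed for certbot’s HTTP challenge.

# Open port 80 on the instance's security group so certbot's HTTP
# challenge can reach it. The group ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # replace with your security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP for certbot"}],
    }],
)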

Finally: good luck with your installation. I hope it goes well.

P.S. In the past I would have persevered, because like a lot of technical people, I think: what will people think of me if I can’t get this to work?? Maybe they’ll think I am no good??? 🙂 It seems silly, but plenty of technical people are motivated that way. I am still somewhat motivated that way. But pouring more time into this is like pouring more money into an old car you know you should just give up on rather than continuing to try and fix it.

P.P.S. Here’s a bunch of Mastodon links that you may find helpful:
http://www.nginx.com/blog/using-free-ssltls-certificates-from-lets-encrypt-with-nginx/
app.mailgun.com/app/sending/domains/sandbox069aadff8bc44202bbf74f02ff947b5f.mailgun.org
gist.github.com/AndrewKvalheim/a91c4a4624d341fe2faba28520ed2169
mstdn.ca/public/local
http://www.howtoforge.com/how-to-install-mastodon-social-network-on-ubuntu-22-04/
http://www.followchain.org/create-mastodon-server/
github.com/mastodon/mastodon/issues/10926