Tag Archives: IT

Blackberry: a device once loved, now a film (and a great one)

I loved this film, just like I used to love my Blackberrys. If you loved yours, or the era of the Blackberry, or just want to see a great film, I recommend you see “BlackBerry”.

There are a number of ways you can watch this film. You can watch it just as a story of that weird era from the 90s until the early 2000s. Or as a story about the tech industry in general. Or a story about Canada. It’s all those stories, and more.

To see what I mean, here’s a piece in the CBC with a Canadian angle: New film BlackBerry to explore rise and fall of Canadian smartphone. While this one talks about the tech industry as well as the cultural elements of it: ‘BlackBerry’ Is a Movie That Portrays Tech Dreams Honestly—Finally | WIRED

But besides all that, it’s a great character study of the three main characters: Mike Lazaridis (Jay Baruchel), Jim Balsillie (Glenn Howerton) and Doug Fregin (Matt Johnson). The arc of Lazaridis in the movie was especially good, as he moves from the influence of Fregin to Balsillie in his quest to make a great device. It’s perhaps appropriate that Balsillie has devil horns in the poster above, because he does tempt Lazaridis with the idea of greatness. And Lazaridis slowly succumbs and physically transforms in the film from a Geek to a Suit.

That’s not to say Balsillie is a caricature. Under all his rage and manipulation, you can see a human who is also struggling with ambition and who is aware of the great risks he is taking. His arc might not be as dramatic as Lazaridis’s, but it is a rise and fall of significance.

As for Fregin, his character is important but he doesn’t change the way Lazaridis and Balsillie do. But if Balsillie is the devil on the shoulder of Lazaridis, then Fregin is the angel. He provides a reminder throughout the film of what Lazaridis lost in his transformation. (And the description of his life at the end of the film is *chef’s kiss* good.)

The film is a dramatization, but it gets so much right. Lazaridis and Balsillie were crushed in the end, just like in the film. Balsillie lost his dream of NHL ownership, and Lazaridis lost his claim of making the best smartphone in the world. There’s a part of the film when Balsillie asks, “I thought you said these were the best engineers in the world?” and Lazaridis replies, “I said they were the best engineers in Canada.” That exchange is a transition in the film, but it also sums up the film and the device in many ways. Their ambition and hubris allowed them to soar, but eventually they met their own nemeses, whether those came in the form of Apple or the NHL Board of Governors or the SEC.

As an aside to all that, it’s fascinating to see the depiction of Blackberry defeating Palm/US Robotics. In the early 90s Palm and US Robotics (who later merged) were dominant tech players. Blackberry surpassed them and left them in the dust. Just like Apple left RIM/Blackberry in the dust when they launched the iPhone. (Google also contributed to that with Android.)

Speaking of Apple, it was interesting to see how backdating stock options helped sink Balsillie. He was not alone in such financial maneuvering. Apple and Jobs also got into trouble for backdating options. I assume this practice might have been more common and less black and white than it comes across in the film.

In the film, there is a certain prejudice Lazaridis has about cheap devices, especially those from China.  It’s just that, though: a prejudice. That prejudice was once held against Japan and Korea too, because those countries made cheap devices for Western markets at first. But Japan and Korea went on to produce high end technology and China has too. The Blackberry Storm from China might have been substandard, but Apple has done quite fine sourcing their products from that country. Something to keep in mind.

I suspect I will watch the film many times in my lifetime. Heck, a good part of my life IS in the film as someone involved with the tech industry at the time. That business is my business. That culture is my culture. That country is my country.

None of that has to apply to you, though. If you want to watch a superb film, grab “BlackBerry”.


What I find interesting in cloud tech, May 2023

It’s long past time to write about the IT stuff I’ve been working on. So much so that I have too much material to cover, and rather than make an endless post, I’ll focus on cloud. I’ve mostly been doing work on IBM Cloud, but I have some good stuff on AWS and Azure. (Sorry GCP, no love for you this time.)

IBM Cloud: most of the work I’ve been doing on IBM Cloud has been hands-on, as you can tell from these links:

Other clouds: not so much hands-on, but still interesting.

What’s cool? The interactive Open Infrastructure Map is cool

I can best convey what the Open Infrastructure Map is by using the words of its creator:

Open Infrastructure Map is a view of the world’s infrastructure mapped in the OpenStreetMap database. This data isn’t exposed on the default OSM map, so I built Open Infrastructure Map to visualise it.

But the best thing to do is tell you to head over to it and zoom in on areas you know. Being from Cape Breton, I did just that, and I was wonderfully surprised by how much detail was there. I think you will feel the same.

Highly recommended.

A plethora of good links on AI

There’s still an overwhelming amount of material being written on AI. Here are a few lists of the pieces I found most interesting:

ChatGPT: ChatGPT (3 and 4) still dominate much of the discussion I see around AI. For instance:

Using AI: people are trying to use AI for practical purposes, as those last few links showed. Here’s some more examples:

AI and imagery: not all AI is about text. There’s quite a lot going on in the visual space too. Here’s a taste:

AI and the problems it causes: there’s lots of risks with any new technology, and AI is no exception. Cases in point:

Last but not least: 

The Gartner Hype Cycle: one good way to think about technological hype

Below is the Gartner hype cycle curve with its famous five phases:

For those not familiar with it, the chart below breaks it down further and helps you see it in action. Let’s examine that.

Chances are, if you are not working with emerging IT and you start hearing about a hyped technology (e.g., categories like blockchain or AI), it is in the phase: Peak of Inflated Expectations. At that stage the technology starts going from discussions in places like Silicon Valley to write-ups in the New York Times. It’s also in that phase that two other things happen: “Activity beyond early adopters” and “Negative press begins”.

That’s where AI — specifically generative AI — is: lots of write ups have occurred, people are playing around with it, and now the negative press occurs.

After that phase technologies like AI start to slide down into my favorite phase of the curve: the Trough of Disillusionment. It’s the place where technology goes to die. It’s the place where technology tries to cross the chasm and fails.

See that gap on the Technology Adoption Lifecycle curve? If a technology can get past that gap (“The Chasm”) and get adopted by more and more people, then it will move on through the Gartner hype curve, up the Slope of Enlightenment and onto the Plateau of Productivity. As that happens, there is less talking and more doing when it comes to the tech.

That said, my belief is that most technology dies in the Trough. Most technology does not and cannot cross the chasm. Case in point, blockchain. Look at the hype curve for blockchain in 2019:

At the time people were imagining blockchain everywhere: in gaming, in government, in supply chain…you name it. Now some of that has moved on to the end of the hype cycle, but most of it is going to die in the Trough.

The Gartner Hype Curve is a useful way to assess technology that is being talked about, as is the Technology Adoption Curve. Another good way of thinking about hype can be found in this piece I wrote here. In that piece I show there are five levels of hype: Marketing Claims, Exaggerated Returns, Utopian Futures, Magical Thinking, and Othering. For companies like Microsoft talking about AI, the hype levels are at the level of Exaggerated Returns. For people writing think pieces on AI, the hype levels go from Utopian Futures to Othering.

In the end, however you assess it, it’s all just hype. When a technology comes out, assess it for yourself as best as you can. Take anything being said and assign it a level of hype from 1 to 5. If you are trying to figure out if something will eventually be adopted, use the curves above.

Good luck!

Two exciting new things from Apple

First up, the new iPhone 14 Plus in yellow. Love it! Apple is wise to assign unique colours to new hardware. It’s a smart way to attract people to a new product, and all those new selfies with the new yellow phone are likely to drive up more sales. (I have been known to fall for this sales approach. :))

Also new is Apple Music Classical. I confess, I didn’t understand why Apple was splitting off Classical music this way. After I read more about it, it makes sense. I hope it will lead to people listening to more classical music.

Good work, Apple!

Paul Kedrosky & Eric Norlin of SKV know nothing about software and you should ignore them

Last week Paul Kedrosky & Eric Norlin of SKV wrote this piece, Society’s Technical Debt and Software’s Gutenberg Moment, and several smart people I follow seemed to like it and think it worthwhile. It’s not.

It’s not worthwhile because Kedrosky and Norlin seem to know little if anything about software. Specifically, they don’t seem to know anything about:

  • software development
  • the nature of programming or coding
  • technical debt
  • the total cost of software

Let me wade through their grand and woolly pronouncements and focus on that.

They don’t understand software development: For Kedrosky and Norlin, what software engineers do is predictable and grammatical. (See chart, top right).

To understand why that is wrong, we need to step back. The first part of software development and software engineering should start with requirements. It is a very hard and very human thing to gather those requirements, analyze them, and then design a system around them that meets the needs of the person(s) with the requirements. See where architects are in that chart? In the Disordered and Ad hoc part in the bottom left. Good IT architects and business analysts and software engineers also reside there, at least in the first phase of software development. Getting to the predictable and grammatical section, which comes in later phases, takes a lot of work. It can be difficult and time consuming. That is why software development can be expensive. (Unless you do it poorly: then you get a bunch of crappy code that is hard to maintain or has to be dramatically refactored and rewritten because of the actual technical debt you incurred by rushing it out the door.)

Kedrosky and Norlin seem to exclude that from the role of software engineering. For them, software engineering seems to be primarily writing software. Coding in other words. Let’s ignore the costs of designing the code, testing the code, deploying the code, operating the code, and fixing the code. Let’s assume the bulk of the cost is in writing the code and the goal is to reduce that cost to zero.

That’s not just my assumption: it seems to be their assumption, too. They state: “Startups spend millions to hire engineers; large companies continue spending millions keeping them around. And, while markets have clearing prices, where supply and demand meet up, we still know that when wages stay higher than comparable positions in other sectors, less of the goods gets produced than is societally desirable. In this case, that underproduced good is…software”.

Perhaps that is how they do things in San Francisco, but the rest of the world moved on from that model ages ago. There are reasons that countries like India have become powerhouses in terms of software development: they are good software developers and they are relatively low cost. So when they say “software is chugging along, producing the same thing in ways that mostly wouldn’t seem vastly different to developers doing the same things decades ago….(with) hands pounding out code on keyboards”, they are wrong, because the nature of developing software has changed. One of the ways it has changed is that the vast majority of software is now written in places that have the lowest cost software developers. So when they say that “software cannot reach its fullest potential without escaping the shackles of the software industry, with its high costs, and, yes, relatively low productivity”, they seem to be locked in a model where software is written the way it is in Silicon Valley by Stanford educated software engineers. That model does not match the real world of software development. Already the bulk of the cost of writing code in most of the world has been reduced not to zero, but to a very small number compared to the cost of writing code in Silicon Valley or North America. Those costs have been wrung out.

They don’t understand coding: Kedrosky and Norlin state: “A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment”. In their piece they use an example of AI writing some Python code that can “open a text file and get rid of all the emojis, except for one I like, and then save it again”. Even they know this is “a trivial, boring and stupid example” and say “it’s not complex code”.
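Their example is worth making concrete. Here’s a minimal sketch of the kind of trivial script they describe; this is my own reconstruction for illustration, not the code from their piece, and the emoji ranges and file name are simplifying assumptions:

```python
# Reconstruction of the "trivial, boring and stupid example":
# strip all emojis from a text file except one you like.
import re

KEEP = "🙂"  # the one emoji to keep (hypothetical choice)

# Rough emoji ranges; real emoji handling is messier, and a
# library like `emoji` would be the sturdier choice.
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def strip_emojis(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # Keep the favorite emoji, drop the rest.
    cleaned = EMOJI.sub(lambda m: m.group() if m.group() == KEEP else "", text)
    with open(path, "w", encoding="utf-8") as f:
        f.write(cleaned)

strip_emojis("notes.txt")  # hypothetical file name
```

The point stands either way: this is exactly the kind of code that is trivial to produce, with or without AI.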

Here’s the problem with writing code, at least with the current AI. There are at least three difficulties that AI code generators suffer from: triviality, incorrectness, and prompt skill.

First, the problem of triviality. It’s true: AI is good at making trivial code. It’s hard to know how machine learning software produces this trivial code, but it’s likely because there are lots of examples of such code on the Internet for them to train on. If you need trivial code, AI can quickly produce it.

That said, you don’t need AI to produce trivial code. The Internet is full of it. (How do you think the AI learned to code?) If someone who is not a software developer wants to learn how to write trivial code they can just as easily go to a site like w3schools.com and get it. Anyone can also copy and paste that code and it too will run. And with a tutorial site like w3schools.com the explanation for the code you see will be correct, unlike some of the answers I’ve received from AI.

But what about non-trivial code? That’s where we run into the problem of  incorrectness. If someone prompts AI for code (trivial or non-trivial) they have no way of knowing it is correct, short of running it. AI can produce code quickly and easily for you, but if it is incorrect then you have to debug it. And debugging is a non-trivial skill. The more complex or more general you make your request, the more buggy the code will likely be, and the more effort and skill you have to contribute to make it work.

You might say: incorrectness can be dealt with by better prompting skills. That’s a big assumption, but let’s say it’s true. Now you get to the third problem. To get correct and non-trivial outputs (if you can get them at all), you have to craft really good prompts. That’s not a skill just anyone will have. You will have to develop specific skills (prompt engineering skills) to be able to have the AI write Python or Go or whatever computer language you need. At that point the prompt to produce that code is a form of code itself.

You might push back and say: sure, the prompts might be complex, but it is less complicated than the actual software I produce. And that leads to the next problem: technical debt.

They don’t understand technical debt: when it comes to technical debt, Kedrosky and Norlin have two problems. First, they don’t understand the idea of technical debt! In the beginning of their piece they state: “Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.”

That’s not how those of us in the IT community define it. Technical debt is not a lack of software supply. Even Wikipedia knows better: “In software development, technical debt (also known as design debt or code debt) is the implied cost of future reworking required when choosing an easy but limited solution instead of a better approach that could take more time”. THAT is technical debt.

One of the things I do in my work is assess technical debt, either in legacy systems or new systems. My belief is that once AI can produce code that is non-trivial and correct and based on prompts, we are going to get an explosion of technical debt. We are going to get code that appears to solve a problem and do so with a volume of python (or Java or Go or what have you) that the prompt engineer generated and does not understand. It will be like copy and paste code amplified. Years from now people will look at all this AI generated code and wonder why it is the way it is and why it works the way it does. It will take a bunch of counter AI to translate this code into something understandable, if that will even be possible. Meanwhile companies will be burdened with higher levels of technical debt accelerated by the use of AI developed software. AI is going to make things much worse, if anything.

They don’t understand the total cost of software:  Kedrosky and Norlin included this fantasy chart in their piece.

First off, most people or companies purchase software, not software engineers. That’s the better comparison to hardware.  And if you do replace “Software engineers” with software, then in certain areas of software this chart has already happened. The cost of software has been driven to zero.

What drove this? Not AI. Two big things that drove this are open source and app stores.

In many cases, open source drove the (licensing) cost of software to zero. For example, when the web first took off in the 90s, I recall Netscape sold their web server software for $10,000. Now? You can download and run free web server software like nginx on a Raspberry Pi. Heck, you can write your own web server using node.js.
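To underline how far that cost has collapsed: a working web server is now a few lines of code in most languages. Here’s a minimal sketch using only Python’s standard library (Python rather than node.js, but the point is the same):

```python
# A minimal web server using only Python's standard library --
# the kind of thing Netscape once charged $10,000 for.
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a free web server!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serves on http://localhost:8000 until you hit Ctrl-C.
    HTTPServer(("", 8000), Handler).serve_forever()
```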

Likewise with app stores. If you wanted to buy software for your PC in the 80s or 90s, you had to pay significantly more than 99 cents for it. It certainly was not free. But the app stores drove the expectation people had that software should be free or practically free. And that expectation drove down the cost of software.

Yet despite developments like open source and app stores driving the cost of software close to zero, people and organizations are still paying plenty for that “free” software. And you will too with AI software, whether it’s commercial software or software for your personal use.

I believe that if you have AI generating tons of free personal software, then you will get a glut of crappy apps and other software tools. If you think it’s hard to find good personal software now, wait until that happens. There will still be good software, but developing it will cost money, and that money will be recovered somehow, just like it is today with free apps with in-app purchases or apps that steal your personal information and sell it to others. And people will still pay for software from companies like Adobe. They are paying for quality.

Likewise with commercial software. There is tons of open source software out there. Most of it is wisely avoided in commercial settings. However, the good stuff is used, and it is indeed free to license and use.

However, the total cost of software is more than the licensing cost. Bad AI software will need more capacity to run and more people to support it, just like bad open source does. And good AI software will need people and services to keep it going, just like good open source does. Some form of operations, even if it is AIOps (another cost), will need expensive humans to ensure the increasing levels of quality required.

So AI can churn out tons of free software. But the total cost of such software will go elsewhere.

To summarize, producing good software is hard. It’s hard to figure out what is required, and it is hard to design and build and run it to do what is required. Likewise, understanding software is hard. It’s called code for a reason. Bad code is tough to figure out, but even good code that is out of date or used incorrectly can have problems, and solving those problems is hard. And last, free software has other costs associated with it.

P.S. It’s very hard to keep up and counter all the hot takes on what AI is going to do for the world. Most of them I just let slide or let others better than me deal with. But I wanted to address this piece in particular, since it seemed influential and un-countered.

P.P.S. Besides all that above, they also made some statements that just had me wondering what they were thinking. For example, when they wrote: “This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation.” Pure hype.

Or this : “Software is misunderstood. It can feel like a discrete thing, something with which we interact. But, really, it is the intrusion into our world of something very alien. It is the strange interaction of electricity, semiconductors, and instructions, all of which somehow magically control objects that range from screens to robots to phones, to medical devices, laptops, and a bewildering multitude of other things.” I mean, what is that all about?

And this:  “The current generation of AI models are a missile aimed, however unintentionally, directly at software production itself”. Pure bombast.

Or this hype: “They are “toys” in that they are able to produce snippets of code for real people, especially non-coders, that one incredibly small group would have thought trivial, and another immense group would have thought impossible. That. Changes. Everything.”

And this is flat up wrong: “This is just the beginning (and it will only get better). It’s possible to write almost every sort of code with such technologies, from microservices joining together various web services (a task for which you might previously have paid a developer $10,000 on Upwork) to an entire mobile app (a task that might cost you $20,000 to $50,000 or more).”


What is AI Winter all about and why do people who’ve worked in AI tend to talk about it?

It might surprise people, but work in AI has been going on for some time. In fact it started as early as the mid-1950s. From the 50s until the 70s, “computers were solving algebra word problems, proving theorems in geometry and learning to speak English”. They were nothing like OpenAI’s ChatGPT, but they were impressive in their own way. Just like now, people were thinking the sky’s the limit.

Then three things happened: the first AI winter from 1974 until 1980, the boom years from 1980-1987, and then the next AI winter from 1987-1993. I was swept up in the second AI winter, and like the first one, there was a combination of hitting a wall in terms of what the technology could do followed by a drying up of funding.

During the boom times it seemed like there would be no stopping AI: it would eventually be able to do everything humans can do and more. It feels that way now with the current AI boom. Companies like OpenAI and others are saying the sky’s the limit and nothing is impossible. But just like in the previous boom eras, I think the current AI boom will hit a wall with the technology (we are seeing some of it already). At that point we may see a reduction in funding from companies like Microsoft and Google (just like the pullback we are seeing from them on voice recognition technology like Alexa and Siri).

So yes, the current AI technology is exciting. And yes, it seems like there is no end to what it can do. But I think we will get another AI winter sooner rather than later, and during that time work will continue in the AI space but you’ll no longer be reading news about it daily. The AI effect will also occur, and the work being done by companies like OpenAI will just get incorporated into the everyday tools we use, just like autocorrect and image recognition are now things we take for granted.

P.S. If you are interested in the history of the second AI winter, this piece is good.

What is the AI effect and why should you care?

Since there is so much talk about AI now, I think it is good for people to be familiar with some key ideas concerning AI. One of these is the AI effect. The cool AI you are using now, be it ChatGPT or DALL-E or something else, will eventually get incorporated into some commonplace piece of IT and you won’t even think much of it. You certainly won’t be reading about it everywhere. If anything you and I will complain about it, much like we complain about autocorrect.

So what is the AI Effect? As Wikipedia explains:

“The AI effect” is that line of thinking, the tendency to redefine AI to mean: “AI is anything that has not been done yet.” This is the common public misperception, that as soon as AI successfully solves a problem, that solution method is no longer within the domain of AI. Geist credits John McCarthy giving this phenomenon its name, the “AI effect”.

McCorduck calls it an “odd paradox” that “practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the ‘failures’, the tough nuts that couldn’t yet be cracked.”[5]

It’s true. Many things over the years that were once thought of as AI are now considered simply software or hardware, if we even think of them at all. Whether it is winning at chess, recognizing your voice, or recognizing text in an image, these things are commonplace now, but they were once lofty goals for AI researchers.

The AI effect is a key idea to keep in mind when people are hyping any new AI as the thing that will change everything. If the new AI becomes useful, we will likely stop thinking it is AI.

For more on the topic, see: AI effect – Wikipedia

How good is the repairable phone from Nokia?

Nokia has a new phone out, the G22, which you can repair on your own. When I heard that, I thought: finally. And if you read what Nokia writes, you might even get excited, like I initially did. If you feel that way too, I recommend you read this in Ars Technica: the Nokia G22 pitches standard low-end phone design as repairable. Key thing they note: “The G22 is a cheap phone that isn’t water-resistant and has a plastic back.” It goes on: “But if you ask, ‘What deliberate design decisions were made to prioritize repair?’ you won’t get many satisfying answers.”

I get the sense that Nokia has made this phone for a certain niche audience, as well as for regulators demanding repairable phones. I hope I am wrong. I hope that Nokia and others strive to make repairability a key quality of their future phones. That’s what we need.

No, prompt engineering is not going to become a hot job. Let a former knowledge engineer explain

With the rise of AI, LLMs, ChatGPT and more, a new skill has arisen. The skill involves knowing how to construct prompts for the AI software in such a way that you get an optimal result. This has led a number of people to start saying things like: prompt engineer is the next big job. I am here to say this is wrong. Let me explain.

I was heavily into AI in the late 20th century, just before the last AI winter. One of the hot jobs at that time was going to be knowledge engineer (KE). A big part of AI then was the development of expert systems, and the job of the KE was to take the expertise of someone and translate it into rules that the expert system could use to make decisions. Among other things, part of my role was to be a KE.
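To make the KE’s job concrete: the deliverable was a set of IF/THEN rules that an inference engine could chain together. Here’s a toy sketch of the idea in Python; it’s illustrative only, nothing like the expert system shells of that era, but it shows the shape of the work:

```python
# Toy illustration of a knowledge engineer's output: expert
# knowledge captured as IF/THEN rules, plus a naive
# forward-chaining engine that applies them.
RULES = [
    # (conditions that must all be known facts, conclusion)
    ({"engine_cranks", "no_fuel_at_injectors"}, "fuel_pump_suspect"),
    ({"fuel_pump_suspect", "pump_fuse_blown"}, "replace_pump_fuse"),
]

def forward_chain(facts):
    """Fire rules until no new conclusions can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"engine_cranks", "no_fuel_at_injectors", "pump_fuse_blown"}))
# -> also derives "fuel_pump_suspect" and "replace_pump_fuse"
```

The hard part was never typing the rules in; it was extracting them from the expert in the first place.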

So what happened? Well, first off, AI winter happened. People stopped developing expert systems and went and took on other roles. Ironically, rules engines (essentially expert systems) did come back, but all the hype surrounding them was gone, and the role of KE was gone too. It wasn’t needed. A business analyst can just as easily determine what the rules are and then have a technical specialist store them in the rules engine.

Assuming tools like ChatGPT last, I would expect the creation of prompts for them to be taken on by business analysts and technical specialists. Business as usual, in other words. No need for a “prompt engineer”.

Also, you should not assume things like ChatGPT will last. How these tools work is highly volatile; they are not well structured things like programming languages or SQL queries. The prompts that worked on them last week may result in nothing a week later. Furthermore, there are so many problems with the new AI that I could easily see them falling into a new AI winter in the next few years.

So, no, I don’t think Prompt Engineering is a thing that will last. If you want to update your resume to say Prompt Engineer after you’ve hacked around with one of the current AI tools out there, knock yourself out. Just don’t get too far ahead of yourself and think there is going to be a career path there.

Some thoughts on Palm and the rise of the handheld computer

This tweet yesterday got me thinking:

Two big tech things happened in the late 90s: one was the adoption of the Web, and two was the adoption of handheld computers. While Apple and its Newton may have been the first to go big in this area, it was Palm and its Pilot device that was truly successful. The Newton came out in 1993 and was killed by Jobs in 1998, while the Palm came out in 1997 and sold like gangbusters. (Interestingly, the Blackberry came out in the late 90s too.)

To appreciate why the Palm Pilot was so successful, it helps to know how things were back then. In the 90s we were still in the era of rolodexes and Dayrunners. Every year I would head down to the local paper shop (in Toronto I went to the Papery on Cumberland) and get my latest paper refills for the year and manually update my calendar and pencil things in. (And god forbid you ever lost it.) The Palm Pilot promised to get rid of all that. You could enter it all in the hand held device and then sync it up with your computer. It solved so many problems.

It also avoided the problems the Newton had. Unlike the Newton, its handwriting recognition was simpler, which made it better. It was relatively cheap, much cheaper than the Newton. And it worked on the PC. All those things also helped with its success.

What did not help Palm was a deluge of competition in this space, with everyone from Sony to Microsoft to RIM to deal with. They continued to make good devices like the Tungsten, but by then I had already moved over to the Blackberry. I wasn’t alone in this regard.

I still have a Palm Pilot. It’s a well designed device, even if the functionality it possesses seems quaint now. But back then, it was a force of change. It led the revolution in computing whereby instead of sitting in front of a computer, we carried one around in our hands. I would not have guessed it at the time, as I looked up my calendar or made my notes. I thought it was just a personal digital assistant. It turned out to be a world changer.


On Fake quitting, real layoffs, and worker unhappiness

It’s been a tumultuous time when it comes to the current workplace, or at least business writers think so. From quiet quitting to the Great Resignation, writers can’t stop coining terms about pseudo quitting. So we have pieces on quiet quitting, on rage applying and my new favorite, calibrated contributing. Even places like the WSJ join in with this piece on High-Earning Men Who Are Cutting Back on Their Working Hours. It’s as if readers of business magazines and websites cannot get enough pieces on worker unhappiness.

That was the before times, though. Now workers, at least IT workers, have something to be truly unhappy about: being laid off. You can read about it everywhere, from the Verge to the New York Times. It seemed like every IT company was suddenly shedding workers, from Facebook/Meta, to Microsoft, to Salesforce, to Google…even IBM, which had a decent year compared to the rest of the list. The reasons for the layoffs were varied. Facebook/Meta continues to have a bad business model. Others like Microsoft went on a hiring bender, and the layoffs are almost a hangover. There’s also been talk that some of the companies were just following the others and trying to look tough or something. One tech company that did not lay anyone off: Apple.

Layoffs suck. If you get caught up in a layoff program, you can find many guides as to what to do. Here is one layoff guide: What to do before, during and after getting laid off.

If you only pay attention to the tech job market, you may guess it applies to the job market in general. But if you read this, Mass Layoffs or Hiring Boom? What’s Actually Happening in the Jobs Market, you get a different picture. The job market is a jumble now due to the fallout of the pandemic. I suspect it is going to take another year to settle down.

In the meantime, good luck with your work. Things aren’t as bad as they may appear. Despite all the think pieces and the tech layoffs. Stay positive.

Fake beaches! Fake lawyers! ChatGPT! and more (what I find interesting in AI, Feb 2023)


There is so much being written about AI that I decided to blog about it separately from other tech. Plus AI is so much more than just tech. It touches on education, art, the law, medicine…pretty much anything you can think of. Let me show you.

Education: there’s been lots said about how students can (and do?) use ChatGPT to cheat on tests. This piece argues that this is a good time to reassess education as a result. Meanwhile, this Princeton student built GPTZero to detect AI-written essays, so I suspect some people will also just want to crack down on the use of AI. Will that stop the use of AI? I doubt it. Already companies like Microsoft are looking to add AI technology to software like Word. Expect AI to flood and overwhelm education, just like calculators once did.

Art: artists have been adversely affected by AI for a while. Some artists decided to rise up against it by creating anti-AI protest work. You can read about that here. It’s tough for artists to push back on AI abuses: they don’t have enough clout. One org that will not have a problem with clout is Getty Images. They’ve already started to fight back against AI with a lawsuit. Good.

Is AI doing art a bad thing? I’ve read many people saying it will cause illustrators and other professional artists to lose their jobs. Austin Kleon has an interesting take on that. I think he is missing the point for some artists, but it’s worth reading.

Work: besides artists losing their jobs, others could as well. The NYPost did a piece on how ChatGPT could make this list of jobs obsolete. That may be shocking to some, but for people like me who have been in IT for some time, it’s just a fact that technology takes away work. Many of us embrace that, so that when AI tools come along and do coding, we say “Yay!”. In my experience, humans just move on to provide business value in different ways.

The law: one place I wish people would be more cautious with using AI is in the law. For instance, we had this happen: an AI robot lawyer was set to argue in court. Real lawyers shut it down. I get it: lawyers are expensive and AI can help some people, but that’s not the way to do it. Another example is this, where you have AI generating wills. Needless to say, it has a way to go. An even worse example: Developers Created AI to Generate Police Sketches. Experts Are Horrified. Police are often the worst abusers of AI and other technology, sadly.

Medicine: AI can help with medicine, as this shows. Again, like the law, doctors need to be careful. But that seems more promising.

The future and the present: if you want an idea of where AI is going, I recommend this piece in technologyreview and this piece in WaPo.

Meanwhile, in the present, Microsoft and Google will be battling it out this year. Microsoft is in the lead so far, but reading this, I am reminded of the many pitfalls ahead: Microsoft’s new AI Prometheus didn’t want to talk about the Holocaust. Yikes. As for Google, reading their blogpost on their new AI tool Bard had me thinking it would be a contender. Instead it was such a debacle even Googlers were complaining about it! I am sure they will get it right, but holy smokes.

Finally: this is what AI thinks about Toronto. Ha! As for that beach I mentioned, you will want to read here: This beach does not exist.

(Image above: ChatGPT logo from Wikipedia)


The rise and fall of the iPod

Last week I wrote about the Lisa and the rise of the Macintosh. While I was doing that, I came across this list of iPod models, which included these fun facts:

iPods …were once the largest generator of revenue for Apple Computer. After the introduction of the iPhone, the iOS-based iPod touch was the last remaining model of the product line until it was discontinued on May 10, 2022.

It’s remarkable that something that was once the leading generator of revenue is now dead. Blame the iPhone. More accurately, blame streaming. Whatever the real reason, a once great set of products is now gone.

I loved all the iPods I had, from the smallest Shuffle to an iPod Touch that was all but an iPhone. Of all the technologies that I’ve owned, they were among my favorites. Thanks for the songs and the memories, iPod.

(Image of 1st gen iPod Shuffle in its packaging. Via Wikipedia.)


Whatever happened to Pascal (the programming language)

In reading and writing about The Lisa computer yesterday, I was reminded of the Pascal programming language. As part of the development of the Lisa, one of the engineers (Larry Tesler), who was working on the user interface…

 …created an object-oriented variant of Pascal, called “Clascal,” that would be used for the Lisa Toolkit application programming interfaces. Later, by working with Pascal creator Niklaus Wirth, Clascal would evolve into the official Object Pascal.

Likely very few if any devs think about Pascal these days. Even I don’t think about it much. But back in the 70s and 80s it was a big deal. As Wikipedia explains:

Pascal became very successful in the 1970s, notably on the burgeoning minicomputer market. Compilers were also available for many microcomputers as the field emerged in the late 1970s. It was widely used as a teaching language in university-level programming courses in the 1980s, and also used in production settings for writing commercial software during the same period. It was displaced by the C programming language during the late 1980s and early 1990s as UNIX-based systems became popular, and especially with the release of C++.

When I was studying computer science in the early 80s, Pascal was an integral part of the curriculum. Once I started working at IBM, I moved on to developing software in other languages, but I had expected Pascal to become a big deal in the field. Instead, C and then variant languages like C++ and Java went on to dominate computer programming. I’m not sure why. My belief at the time was that universities had to pay big bucks for operating systems and Pascal compilers, but they did not have to pay anything for Unix and C, and that’s what caused the switch. I can’t believe they switched from Pascal to C because C was a better language.

Forty years later, if you search for the top 20 programming languages, Pascal is towards the bottom of this list from IEEE, somewhere between Lisp and Fortran.  It’s very much a niche language in 2022 and it has been for some time.

For more on Pascal, I recommend the Wikipedia article: it’s extensive. If you want to play around with it, there’s a free version of it you can download.

(Image is an Apple Lisa 2 screenshot.  Photo Courtesy of David T. Craig. Computer History Museum Object ID 500004666)

It’s Lisa’s 40th birthday. Let’s celebrate!


The great Lisa has just turned 40! Apple’s Lisa, that is. To celebrate, the Computer History Museum (CHM) has done two great things. First, they have released the source code of the Lisa software. You can find it here. Second, they have published this extensive history of the groundbreaking machine, The Lisa: Apple’s Most Influential Failure.

Like the NeXT computer, the Lisa was a machine that tried to do too much too soon. And while it was not the success that Apple had hoped for, it did lead to great success later. That definitely comes across in the CHM piece.

It’s fascinating to compare the picture above with the one below (both from CHM). In the one above you can see the original Lisa (1) with the “Twiggy” floppy drive that was unreliable and ditched in the later models, seen below. You can also see how the machine on the left (the original Macintosh) would come to take over from the machine on the right (the Lisa 2). It had many of the same features but at a much reduced price.

When you think of Apple computers, you likely think of one or more of those found in this List of Macintosh models. While not a Mac, the Lisa was the precursor of all those machines that came later, starting with the original Mac. It was the birth of a new form of personal computing.

Happy birthday, Lisa! You deserve to be celebrated.

For more on this, see this Hackaday piece on Open-Sourcing The Lisa, Mac’s Bigger Sister.


Sorry robots: no one is afraid of YOU any more. Now everyone is freaking out about AI instead


It seems weird to think there are trends when it comes to fearing technology. But thinking about it, there seem to be. For a while my sources of information kept providing me stories of how fearsome robots were. Recently that has shifted, and the focus has moved to how fearsome AI is. Fearing robots is no longer trendy.

Well, trendy or not, here are some stories about robots that have had people concerned. If you have any energy left from being fearful of AI, I recommend them. 🙂

The fact that a city is even contemplating this is worrying: San Francisco Supervisors Vote To Allow Killer Robots. Relatedly, Boston Dynamics pledges not to weaponize its robots.

Not that robots need weapons to be dangerous, as this showed: chess robot breaks childs finger russia tournament. I mean who worries about a “chess robot”??

Robots can harm in other ways, as this story on training robots to be racist and sexist showed.

Ok, not all the robot stories were frightening. These three are more just of interest:

This was a good story on sewer pipe inspection that uses cable-tethered robots. I approve this use of robots, though there are some limitations.

I am not quite a fan of this development:  Your Next Airport Meal May Be Delivered By Robot. I can just see these getting in the way and making airports that much harder to get around.

Finally, here’s a  327 Square Foot Apartment With 5 Rooms Thanks to Robot Furniture. Robot furniture: what will they think of next?

(Image is of the sewer pipe inspection robot.)


My notes on failing to build a Mastodon server in AWS (they might help you)

Introduction: I have tried three times to set up a Mastodon server and failed. Despite abandoning this project, I thought I would do a write-up, since some people might benefit from my failure.

Background: during the recent commotion with Twitter, there was a general movement of people to Mastodon. During this movement, a number of people said they didn’t have a Mastodon server to move to. I didn’t either. When I read that Dan Sinker built his own, I thought I’d try that too. I’ve built many servers on multiple cloud environments and installed complex software in these environments. I figured it was doable.

Documentation: I had two main sources of documentation to help me do this:
Doc 1: docs.joinmastodon.org/admin/install/
Doc 2: gist.github.com/johnspurlockskymethod/da09c82e2ed8fabd5f5e164d942ce37c

Doc 1 is the official Mastodon documentation on how to build your own server. Doc 2 is a guide to installing a minimal Mastodon server on Amazon EC2.

Attempt #1: I followed Doc 2 since I was building it on an EC2 instance. I did not do the advised AWS pre-reqs other than creating the security groups, since I was using Mailgun for SMTP and my domain was hosted elsewhere at Namecheap.

I launched a minimal Ubuntu 22.x server that was a t2.micro, I think (1 vCPU, 1 GiB of memory). It was in the free tier. I did create a swap disk.

I ran into a number of problems during this install. Some of the problems had to do with versions of the software that were backlevelled compared to Doc 1 (e.g. Ruby). Also, I found that I could not even get the server to start, likely because there just is not enough memory, even with the swap space. I should have entered “sudo -i” from the start, rather than putting sudo in front of the commands. Doing that in future attempts made things easier. Finally, I deleted the EC2 instance.

Attempt #2: I decided to do a clean install on a new instance. I launched a new EC2 instance that was not free and had 2 vCPU and 2 GiB of memory. I also used Doc 1 and referred to Doc 2 as a guide. This time I got further. Part of the Mastodon server came up, but I did not get the entire interface. When I checked the server logs (using: journalctl -xf -u mastodon-*) I could see error messages, but despite searching for them, I couldn’t find anything conclusive. I deleted this EC2 instance also.

Attempt #3: I wanted to see if my problems in the previous attempts were due to capacity limitations. I created a third EC2 instance that had 4 vCPU and 8 GiB of memory. This installation went fast and clean. Despite that, I had the same type of errors as in the second attempt. At this point I deleted this third instance and quit.

Possible causes of the problem(s) and ways to determine that and resolve them:
– Attempt the installation process on a VM/instance on another cloud provider (Google Cloud, Azure, IBM Cloud). If the problem resolves, the cause could be something to do with AWS.
– Attempt this on a server running Ubuntu 20.04 or Debian 11, either on the cloud or a physical machine. If this resolves, it could be a problem with the version of Ubuntu I was running (22.x): that was the only image available to me on AWS.
– Attempt it using the Docker image version, either on my desktop or in the cloud.
– Attempt to run it on a much bigger instance. Perhaps even a 4 x 8 machine is not sufficient.
– See if the problem is due to my domain being hosted elsewhere in combination with an elastic IP address by trying to use a domain hosted on AWS.

Summary: There are other things I could do to resolve my problems and get the server to work, but in terms of economics: the Law of Diminishing Returns has set in, there are opportunity costs to consider, the sunk costs are what they are, and the marginal utility remaining for me is 0. I learned a lot from this, but even if I got it working, I don’t want to run a Mastodon server long term, nor do I want to pay AWS for the privilege. Furthermore, I don’t want to spend time learning more about Ruby, which I think is where the problem may originate. It’s time for me to spend my precious time on technologies that are more rewarding for me personally and professionally.

Lessons Learned: What did I learn from this?

– Mastodon is a complicated beast. Anyone installing it must have an excellent understanding of Linux/Unix. If you want to install it on AWS for free, you really must be knowledgeable. Not only that: Mastodon consists not only of its own software, but also nginx, Postgres, Redis and Ruby. Plus you need to be comfortable setting up SSL. If everything goes according to the doc, you are golden. If not, you really need an array of deep skills to solve any issues you have.

– Stick with the official documentation when it comes to installing Mastodon. Most of the many other pages I reviewed were out of date or glossed over things of note.

– Have all the information you need at hand. I did not have my Mailgun information available for the first attempt. Having it available for the second attempt helped.

– The certbot process in the official document did not work for me. I did this instead:
1) systemctl stop nginx.service
2) certbot certonly --standalone -d example.com (I used my own domain and my personal email and replied Y to the other prompts.)
3) systemctl restart nginx.service

– Make sure you have port 80 open: you need it for certbot. I did not initially for attempt 3 and that caused me problems. I needed to adjust my security group. (Hey, there are a lot of steps: you too will likely mess up on one or two. :))

– As I mentioned earlier, go from the beginning with: sudo -i

– Make sure the domain you set up points to your EC2 instance. Mine did not initially.

Finally: good luck with your installation. I hope it goes well.

P.S. In the past I would have persevered, because like a lot of technical people, I think: what will people think of me if I can’t get this to work?? Maybe they’ll think I am no good??? 🙂 It seems silly, but plenty of technical people are motivated that way. I am still somewhat motivated that way. But pouring more time into this is like pouring more money into an old car that you know you should just give up on instead of continuing to try and fix it.

P.P.S. Here’s a bunch of Mastodon links that you may find helpful:
http://www.nginx.com/blog/using-free-ssltls-certificates-from-lets-encrypt-with-nginx/
app.mailgun.com/app/sending/domains/sandbox069aadff8bc44202bbf74f02ff947b5f.mailgun.org
gist.github.com/AndrewKvalheim/a91c4a4624d341fe2faba28520ed2169
mstdn.ca/public/local
http://www.howtoforge.com/how-to-install-mastodon-social-network-on-ubuntu-22-04/
http://www.followchain.org/create-mastodon-server/
github.com/mastodon/mastodon/issues/10926

The rise and fall of Alexa and the possibility of a new A.I. winter

I recall reading this piece (What Would Alexa Do?) by Tim O’Reilly in 2016 and thinking, “wow, Alexa is really something!” Six years later we know what Alexa would do: Alexa would kick the bucket (according to this: Hey Alexa, Are You There?). I confess I was surprised by its upcoming demise as much as I was surprised by its ascendance.

Since reading about the fall of Alexa, I’ve looked at the new AI in a different and harsher light. So while people like Kevin Roose can write about the brilliance and weirdness of ChatGPT in The New York Times, I cannot stop wondering about the fact that as ChatGPT hits one million users, its costs are eye-watering. (Someone mentioned a figure of $3M in cloud costs per day.) If that keeps up, ChatGPT may join Alexa.

So cost is one big problem the current AI has. Another is the ripping off of other people’s data. Yes, the new image generators by companies like OpenAI are cool, but they’re cool because they take art from human creators and use it as input. I guess it’s nice that some of these companies are now letting artists opt out, but it may already be too late for that.

Cost and theft are not the only problems. A third problem is garbage output. For example, this is an image generated by  Dall-E according to The Verge:

It’s garbage. DALL-E knows how to use visual elements of Vermeer without understanding anything about why Vermeer is great. As for ChatGPT, it easily turns into a bullshit generator, according to this good piece by Clive Thompson.

To summarize: bad input (stolen data), bad processing (expensive), bad output (bullshit and garbage). It all adds up, and not in a good way for the latest wunderkinds of AI.

But perhaps I am being too harsh. Perhaps these problems will be resolved. This piece leans in that direction. Perhaps Silicon Valley can make it work.

Or maybe we will have another AI winter. If you mix a recession in with the other three problems I mentioned, plus the overall decline in the reputation of Silicon Valley, a second wintry period is a possibility. Speaking just for myself, I would not mind.

The last AI winter swept away so much unnecessary tech (remember LISP machines?) and freed up lots of smart people to go on to work on other technologies, such as networking. The result was tremendous increases in the use of networks, leading to the common acceptance and use of the Internet and the Web. We’d be lucky to have such a repeat.

Hey Alexa, what will be the outcome?

UGC (user generated content) is a sucker’s game. We should resolve to be less suckers in 2023

I started to think of UGC when I read that tweet last night.

We don’t talk about UGC much anymore. We take it for granted since it is so ubiquitous. Any time we use social media we are creating UGC. But it’s not limited to sites like Twitter or Instagram. Web sites like Behance and GitHub are also repositories of UGC. Even Google Docs and Spotify are ways for people to generate content (a spreadsheet is UGC for Google to mine, just like a playlist is).

When platforms came along for us to post our words and images, we embraced them. Even when we knew they were being exploited for advertising, many of us shrugged and accepted it as a deal: we get free platforms in exchange for our attention and content.

Recently, though, it’s gotten more exploitive. Companies like OpenAI and others are scraping all our UGC from the web and turning it into data sets. Facial recognition software is turning our selfies into ways to track us. Never mind all the listening devices we let into our houses. (“Hey Google, are you recording all my comings and goings?”…probably.)

Given that, we should resolve to be smarter about our UGC in 2023. Always consider what you are sharing, and find ways to limit it if you can. Indeed, give yourself some boundaries, so that when the next company with vowel problems comes along (looking at you, Trackt) and asks for our data, we say no thanks.

We can’t stop companies from taking advantage of the things we share. So let’s aim to share things wisely and in a limited way.

IBM Cloud tip: be careful with security groups allow_all when setting up a server

Security groups are a great way to limit access to your server in IBM Cloud. However, if you are just setting up your server, make sure you don’t inadvertently block traffic so that you can’t do anything.

Case in point: you may set allow_all in a security group. You might think that would allow all traffic in and out of your server. However, allow_all will still block some traffic from leaving your server. I was not able to ping 8.8.8.8 or reach other destinations from my Windows VSI when I had this setting.

According to IBM support: “When setting security groups for servers you need to have an equal relationship of ingress (inbound) and egress (outbound) traffic in order to succeed in a proper connection. You would need the allow_all and the allow_outbound group to achieve this.”
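If you want to check your setup from code, the classic-infrastructure API can list your security groups. Here’s a rough sketch using the SoftLayer Python client (pip install SoftLayer); I’m assuming classic infrastructure with credentials configured in your environment, so treat this as a starting point rather than a recipe:

```python
# Sketch: confirm both IBM-provided groups exist before you rely
# on them. Assumes IBM Cloud classic infrastructure credentials
# are set up for the SoftLayer client.
import SoftLayer

client = SoftLayer.create_client_from_env()
groups = client['SoftLayer_Network_SecurityGroup'].getAllObjects()
names = {g.get('name') for g in groups}

# Per IBM support: you need BOTH groups for a working connection.
for required in ('allow_all', 'allow_outbound'):
    print(required, 'present' if required in names else 'MISSING')
```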

What I find interesting in tech and can tell you about, Dec 2022


Time once again to show what IT stuff I’ve been doing in the last few months. Some of it I can’t include here for confidentiality reasons, and there are some things I want to write about separately. The rest is below and worth checking out.

Software: I’ve been doing some Python programming lately, so I found these useful: How you can get your browser history via a Python library. Also How to Create Your Own Google Chrome Extension…I’ve been wanting to do this. I used this tutorial recently to build a simple stopwatch with Javascript. Relatedly, here’s a Free Countdown Timer for Your Website.

I’ve been looking into PyQT for a number of reasons so I found these good: How to Install PyQt for Python in MacOS?, and Python PyQt5 Tutorial – Example and Applications, and pyqt statusbar – Python Tutorial.
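Since I mention PyQt: the hello world for it is pleasantly small, which is part of why I’ve been playing with it. A minimal sketch (assuming PyQt5 is installed via pip):

```python
# Minimal PyQt5 app: one window, one label.
import sys
from PyQt5.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)
label = QLabel("Hello from PyQt5")
label.show()
sys.exit(app.exec_())
```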

Here’s some useful git stuff: Merge Strategies in Git and  best practices on rolling out code scanning at enterprise scale.

Not useful, but cool: matrix webcam. Also this is a cool shell.

A thoughtful piece on DevOps metrics. As for this piece, DevOps is Bullshit, I can’t say I agree.

Finally, I’ve been getting into Neo4J and found this helpful:  Neo4j and graph databases: Getting started.

Hardware/Pi: is this the next new thing: stretchable display? This fast charger is also fun. Game fans, take note: Steam is coming to Chromebooks. 

This is good:  iphone 14 is the most repairable since iphone 7. This is awesome: this retro punk nixie wristwatch actually uses authentic nixie tubes to tell the time. This is handy:  13 great arduino projects to try.

I love this:  Nerdy Hanukkah Card! I also love the idea of making a Raspberry Pi-powered radio. More on that here: at Instructables. Also a good project: How to use Google Assistant on the Raspberry Pi.

Cloud: Here’s some AWS help:  choosing an aws container service to run your modern application, and pointing your Namecheap Domain Name to AWS Linux, and db2 and amazon web services better together.

Some IBM Cloud help: share resources across your ibm cloud accounts, and migrating a large database into ibm cloud databases. Also: Get started with IBM Wazi as a Service.

Some other interesting essays on cloud:

Misc: RIP Kathleen Booth, inventor of Assembly Language. Same for another giant, Frederick P Brooks.

The future is weird: bereal app gets real roasted with memes, and gifs are cringe and for boomers giphy claims.

This surprised me:  amazon alexa is a colossal failure on pace to lose 10 billion this year. In other news, here’s a review of amazon halo rise.

Stratechery by Ben Thompson is always worth a read. Here they are on Microsoft Full Circle.

Here are three stories: one on Zoho (how zoho became 1b company without a dime of external investment), one on Uber (uber says compromised credentials of a contractor led to data breach), and one on Sobeys (inside turmoil sobeys ransomware attack).


The history of people asking: is technology X going to replace programmers?

Recently Nature (of all publications) asked the clickbait-y question: Are ChatGPT and AlphaCode going to replace programmers? It then quickly states: “OpenAI and DeepMind systems can now produce meaningful lines of code, but software engineers shouldn’t switch careers quite yet.” Then why even ask the question? It goes on to say that DeepMind, a part of Google, “published its results in Science, showing that AlphaCode beat about half of humans at code competitions”.

Regardless of what you think of that article in Nature, here's the thing to always keep in mind: technology X has been coming along to replace programmers forever. Machine code was replaced by assembler language. Assembler language was replaced by higher-level languages like Fortran. A wealth of more sophisticated programming languages then came onto the scene. In addition, programming tools like IDEs have come along to save the day and make programming easier for programmers. Yet we still have not lost the need for programmers to write programs.

Programming is still about taking what people want the computer to do and codifying it in a language that the computer understands. In other words: programming. As long as that code is required for computers to do what humans want, there will always be programmers.

Here's another thing to consider: code is a more efficient way to communicate with a computer. I can write in English “count all the entries in this column from row 2 to 100 if the entry equals the word ‘arts’”, or I can write in (Excel) code =COUNTIF(A2:A100,"arts"). That efficiency will likely mean that coding will be around for quite some time yet. And people doing that coding will be programming, even if they don't consider themselves programmers.
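The same point holds outside of Excel. Here's the equivalent in Python, with a short stand-in list for the column; whoever writes this is programming, whether or not they'd use the word:

    # The Excel one-liner above, in plain Python; "column" stands in for A2:A100.
    column = ["arts", "science", "arts", "math"]
    count = sum(1 for entry in column if entry == "arts")
    print(count)  # 2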

So no, AlphaCode is not going to replace programmers and programming. It might replace some of the tasks that some of them currently do. But programmers and programming will be around for some time to come.

(I like the image above because it captures how software design and development is a complex process, not just a matter of writing a bunch of code. A lot of thought goes into developing something like a smartphone application, and that thought results in the code behind a good app.)

Would you pay $200,000 for a Mac SE??

You might not, but I bet someone might. Because it's not just any old SE…it's Steve Jobs' Macintosh. Uncrate has the details:

(Jobs) didn’t stop using Apple products, though, instead working on this Macintosh SE until 1994. Amazingly, it still has files from its days on Jobs’ desk on its drive, and as an incredibly desirable artifact from his “Wilderness Years”, is expected to bring over $200,000 at auction.

For rich fans of Apple, this would be a crazy-good thing to have in your collection.

PlantUML: not just for UML. Also good for Gantt Charts, Mindmaps, etc


If you are an IT architect or specialist, you may have used PlantUML. I have, and I really like it: it makes doing technical diagrams dead easy.

What I'd like you to know about are the non-UML capabilities of the tool. PlantUML can also draw Gantt charts and mindmaps: you quickly write out your plans and ideas, and PlantUML converts them into the diagram you want. It's fantastic and I highly recommend it. If you use Visual Studio Code from Microsoft, plug PlantUML into it and you can get your diagrams made that way. But the PlantUML website can also do the job.

For more on this, go to the sections on Gantt charts or mindmaps.
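To show how lightweight the syntax is, here's a sketch that writes a small Gantt chart definition to a file you can feed to the tool or paste into the website. The task names are made up; the @startgantt keywords are PlantUML's own:

    # Build a minimal PlantUML Gantt definition and save it for rendering.
    gantt = "\n".join([
        "@startgantt",
        "[Design] lasts 5 days",
        "[Build] lasts 10 days",
        "[Build] starts at [Design]'s end",
        "@endgantt",
    ])

    with open("plan.puml", "w") as f:
        f.write(gantt)

    # Mindmaps are just as terse: @startmindmap, then * Root and ** Child lines.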


Paper Macs! Doom on Doom! Build a Voight-Kampff machine! And more (What I find interesting in tech, Sept. 2022)

Here are 70+ links to things I've found interesting in tech in the last while. It's a real mix this time, but it still contains a good chunk on cloud, hardware and software. Some good stuff on UML, Pi and Doom as well. (Love Doom.) Dig in!

Cloud: here’s a dozen good pieces I recommend on cloud computing…

  1. I think hybrid cloud is the future of cloud computing for big orgs, and IBM does too:  IBM doubles down on hybrid cloud
  2. Not to be confused with multicloud: Multicloud Explained: A cheat sheet | TechRepublic
  3. Speaking of that, here’s 3 multicloud lessons for cloud architects | InfoWorld
  4. Relatedly, Vendors keep misusing the “cloud native” label. Customers may not care. You should care, though.
  5. Cloud Foundry used to be the future, but now it’s time for this:  Migrating off of cloud foundry.
  6. I always find these RCAs good:  Details of the Cloudflare outage on July 2 2019
  7. Speaking of outages: Heat waves take out cloud data centers
  8. Google Gsuite: now with a fee. Good luck with that, Google.
  9. Is your app resilient? Consider this four step approach to verifying the resiliency of cloud native applications
  10. If you are an AWS/Oracle user:  using aws backup and oracle rman for backup restore of oracle databases on amazon ec2.
  11. Good tips:  How to add a custom domain to GitHub Pages with Namecheap – Focalise
  12. Good argument:  Rural carriers: We need more subsidies to build 5G

Software: here’s a mix of software pieces, from how to write good bash to how to run good scrums….

  1. Is Internet Explorer dead? Nope!  IE lives! In Korea.
  2. For bootstrap noobs:  Bootstrap tutorials
  3. Fun to consider:  How is computer programming different today than 20 years ago?
  4. Helpful:  Using Loops In Bash – Earthly Blog
  5. More bash goodness:  Bash – Earthly Blog
  6. Related:  Good SED advice
  7. Some python help:  Automate Internet Life With Python | Hackaday
  8. More python:  Analyze Your Amazon Data with Python.
  9. I found this useful indeed: Google APIs and python
  10. Load testing vs. stress testing: What are the main differences? Don’t confuse them.
  11. Good IFTTT guide:  Send me new jobs available every Monday – IFTTT
  12. Intriguing:   marcoarment/S3.php 
  13. Deploy any static site to GitHub Pages
  14. For fans of either: Visual studio and Terraform
  15. My friend Carl wrote this and it’s good:  The basics of scrum 

UML: I’ve been doing solution architecture lately, and as a result I have been using Visio and PlantUML. I love the latter and found some good links regarding it.

  1. I love PlantUML. Here’s some links on how to use it with Microsoft’s Visual Studio Code:  PlantUML – Visual Studio Marketplace.
  2. and here  UML Made Easy with PlantUML & VS Code – CodeProject
  3. PlantUML and YAML:  https://plantuml.com/yaml
  4. PlantUML and Sequence Diagrams
  5. More on  Sequence Diagram syntax and features

Hardware: here’s some good (and not so good) hardware stories….

  1. This is cool:  teenage engineering google pixel pocket operator
  2. Also cool:  paper thin retro macintosh comes with an e ink display and runs on a raspberry pi (Image on Top of this post!)
  3. Robots:  Roomba Amazon Astro and the future of home robots
  4. Macbook problems:  Macbook Air m2 slow ssd read write speeds testing benchmark 
  5. More Macbook problems:  Macbook repair program: FAIL
  6. Not great:  Starlink loses its shine
  7. A really dumb idea: the switchbot door lock
  8. Finally:  The 20 Most Influential PCs of the Past 40 Years


Pi: I still love the Raspberry Pi, and I want to do more with them soon.

  1. Nice to see this: Raspberry Pi Pico W: your $6 IoT platform – Raspberry Pi
  2. Related:  How to Connect Your Raspberry Pi Pico W to Twitter via IFTTT | Tom’s Hardware
  3. How cool is this?  LISP on Raspberry Pi
  4. Awesome: make your own VK Machine:  Cool Pi Project (image above)

Sensors: one thing I was going to do with a Pi is build a CO2 meter to check on air flow. However the sensor most used for this, the MQ-135, is not all that great. It's a problem with cheap sensors in general: you just don't get good results. To see what I mean, read these links (and see the wiring sketch after the list):

  1. BUILD YOUR HOME CO2 METER
  2. MQ-135 Gas Sensor with Arduino Code and Circuit Diagram
  3. Measure CO2 with MQ-135 and Arduino Uno – Rob’s blog
  4. Measuring CO2 with MQ135
  5. Air Pollution Monitoring and Alert System Using Arduino and MQ135
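For what it's worth, part of the fiddliness is that the MQ-135 is an analog sensor and the Pi has no analog inputs, so you need an ADC such as the MCP3008 in between. Here's a rough sketch with the gpiozero library (the channel number is arbitrary, and turning the raw value into a trustworthy ppm figure is exactly the hard part those articles describe):

    from time import sleep
    from gpiozero import MCP3008

    sensor = MCP3008(channel=0)  # MQ-135 analog out wired into ADC channel 0

    while True:
        # .value is a raw 0.0-1.0 reading; calibrating it to ppm is the hard part
        print(f"raw reading: {sensor.value:.3f}")
        sleep(2)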

Doom! I love stories of how people port the game DOOM onto weird devices. Stories like these….

  1. So many different ports!  Weird devices that run DOOM
  2. Cool!  Even DOOM Can Now Run DOOM! | Hackaday
  3. More on that:  Run Doom inside Doom!

Kubernetes: Still keeping up my reading on K8S. For example:

  1. You’ve written a kubernetes native application here is how openshift helps you to run develop build and deliver it securely.
  2. Benefits of Kubernetes 

Twitter: I don't know about you, but I've gotten tired of the drama around Elon Musk wanting to buy Twitter. However, I recently went on a small reading binge about it. Here's what I read:

  1. Twitter, Musk and Mudge
  2. More on Zatko
  3. Also  Zatko
  4. More on Twitter
  5. Whistleblower: Twitter misled investors FTC and underplayed spam issues. Ok, that’s enough.

Finally: 

  1. Beware Tiktok!  TikTok’s In-App Browser Includes Code That Can Monitor Your Keystrokes. These special browsers have to go.
  2. A bad use of AI in France:  taxing pool owners with hidden pools. It’s bad because the success rate is poor.
  3. Lots of good tech articles at Earthly Blog
  4. Lots of good tutorials at Earthly Blog too.
  5. How do I link my domain to GitHub Pages – Domains – Namecheap.com
  6. Mark Zuckerberg braces Meta employees for “intense period”. That's a shame, said no one.
  7. Updated: Hardware vendor differences led to Rogers outage says Rogers CTO. More on that Rogers outage.
  8. How to: Fine-Tune and Prune Your Phone's Contacts List from The New York Times. Useful.
  9. Also useful:  4 iPhone and Android Tricks You May Not Know About – The New York Times
  10. Good to know:  How Updates in iOS 16 and Android 13 Will Change Your Phone – The New York Times
  11. Charge your phone differently:  Phone charging.
  12. Canadian orgs struggle with  Ransomware still.
  13. Apple expands commitment to protect users from mercenary spyware. Good.
  14. Related: 84 scam apps still active on the App Store steal over $100 million annually

On the new Apple watches, from SE to Ultra

So Apple released its latest round of products recently, including the new Apple Watches. My two cents? They seem to be going after a bigger market with the watch, for on one hand (wrist?) you have the new high end Ultra, while on the other you have the new low cost SE. Maybe there's only so much of a market for such digital devices: Apple is looking to see just how big it is. Good on them. I can't ever see myself getting the high end version, but I've always been a fan of Apple's SE products, so maybe that watch is in my future.

For more on things Apple, here’s something on Apple Watch cases. Here’s a piece on the psychology of Apple packaging.

For fans of all things Apple, here's a story on the design tools of Jony Ive.

Finally, for those of you with old iPhones, you will want to read about these new security patches.

P.S. I’ve been writing about the Apple Watch since it came out in 2014 (?). You can read more here.

Two more signs of the ongoing crypto winter, from Minecraft and Tesla


Actions speak louder than think pieces. So these recent actions by Tesla and Mojang are just two of the many signs of the great implosion of crypto/NFTs/Web3/etc.

First off, Minecraft developer Mojang won't allow NFTs in the game, and second, Tesla just did a big crypto sell-off.

I especially liked what Mojang had to say. Essentially, NFTs are anathema to the experience they try to provide with Minecraft. They put their finger on what is wrong with all of that technology. Good for them.

As for Tesla, they were huge proponents of cryptocurrency until recently. For them to dump most of their holdings is a sign (among many signs out there) that crypto winter has set in and will likely stay for some time to come.

Thanks to The Verge for both of those pieces.

On the recent Rogers outage, some modest thoughts

With regard to the recent Rogers outage, I have to say I have great sympathy for the IT staff who had to deal with it, and unlike many, I don't claim to have a great solution to it. I have even greater sympathy for the millions of users like myself who were taken offline that day.

In the short term, the mandate given by Minister Champagne for the telcos to produce clear resiliency plans in 60 days is a good start. At a minimum, certain services like 911 should never be allowed to fail for anyone. As for other services, it is up to the telcos to make proposals. Perhaps failproof, low bandwidth services like telephony could be taken up as well. We will see. As always, there will be cost/benefit tradeoffs.

Some people were saying that the problem is the concentration of services in only a few providers. In fact there are other providers besides Bell and Rogers, as BlogTO pointed out. I use one of them: Teksavvy. Despite good price points and good service, they hold only a small fraction of the market. If people want to vote for more diversity, they can do it with their dollars. I suspect they won't.

In the end, people want low cost, ease of use, and simple telco services. Rogers and Bell offer that. That said, I would advise people at a minimum to have their phone service and their internet service with two different providers. Heck, if it is really important, get a landline. But at a minimum, split your cell phone service and your internet service. If your cell phone provider goes down, you can still contact people using the internet. You can even get a service like Fongo that will let you make phone calls. And if your internet service goes down, you can use your phone as a hotspot to access the internet. Will it cost more? Of course. Higher availability always costs more.

We are going to have these outages every few years, I suspect. Most companies, the telcos included, have a few big and complex network devices at the heart of their network. Those devices depend on specific software to run, and sometimes upgrading that software will fail. When it does, it may cause these outages. Just like it did to these companies in 2018.

Telecommunications is different from other utilities. In order to offer new services regularly (e.g. 5G, high speed internet), telcos need to continually upgrade their technology. Electricity, water, and gas are all commodities; telecommunication services are not. The need telcos have to make improvements will always put them, and us, at risk.

This is not to absolve them: I think Rogers and the other telcos need to follow up on this outage with better plans to be more reliable, and the Government needs to oversee this both from a technology and regulatory viewpoint. This in the end will benefit everyone, in my humble opinion.

(All opinions expressed here are mine, not my employer's. I have no inside knowledge of the services or technology provided by Rogers, other than what I read in the media, like everyone else. My opinions are based on working in IT networking since the early 1990s.)

The Drone-Jellyfish wars and more (today in robot news)

Man, what I would have given to have these jellyfish drones monitoring my beach when I was a kid. I hated jellyfish! Still do. Kudos to the folks who came up with this. (I don't think the drones actually fight the jellyfish, in case you were concerned about that due to my misleading title. :))

Here on land, if you are keen to have a Boston Dynamics dog robot of your own, now you can. Click on that link for more details.

Devs! Could your next online database be a spreadsheet?

If the thought of your next online database being a spreadsheet sounds ridiculous, consider this. Yes, there are times when the only thing that will do the job is a highly scalable, highly available relational database. Certainly, there are other times when a NoSQL database with millions of records is the only way to go. That aside, there are likely many times when you need to store one table with hundreds of records or fewer. In that case, consider using an online spreadsheet from someone like Google.

If you write code to store data in a spreadsheet, one of the key advantages is that you and others can then interact with that data via spreadsheet software. You don’t have to run special ETL programs to get that data there. You have all the power you need. Plus the code to interact with something like Google Sheets is much simpler than the code to interact with something like AWS’s DynamoDB. I know…I have done both.
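As a concrete sketch, here's one way it can look with the gspread library (one of several options; the sheet name and credentials file below are placeholders). A "table" becomes a worksheet you append to and read back:

    import gspread

    # Authenticate with a Google service account key file.
    gc = gspread.service_account(filename="credentials.json")
    ws = gc.open("my-little-database").sheet1

    # "INSERT": append a row. "SELECT": read everything back as dicts.
    ws.append_row(["2022-08-01", "widgets", 42])
    for row in ws.get_all_records():
        print(row)

For a table of hundreds of rows this is plenty, and anyone on the team can open the sheet and eyeball the data.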

For more on this, check this out: Google Sheets API using Python: Complete 2021 Guide. It could be just the thing you need.

How to Download Apps on Your Old iPad and iPhone in 2022

If you happen to have an old iPad and you are thinking of using it, you will find this post of interest.

Like you, I have a very old iPad. It still works fine. However, one of the problems with old iPads is that Apple limits how far you can upgrade the iPad's operating system (iOS). My device cannot upgrade past iOS 9.

The problem with having an older version of iOS is this: if you try and download apps for it from the App Store, you will get message after message saying this application needs a later iOS to download. There are a few apps that you can still download directly, but not many, and not the common ones you likely use and want.

There is a workaround for this problem. (I found out about it through the video below.) First, you download the apps you want on an iOS device that has a newer OS. I did this on my iPhone. Then you go to your old iPad and look for the apps you purchased. Voila, the app you just downloaded is there. NOW, when you try to download it, the App Store will say you don't have the right iOS, BUT it will ask if you want to download a backlevelled version. You say YES, and now you have the app running on your iPad.

This will only work for apps that have been around for a long time. So I was able to download apps like Twitter and CNN, but not Disney+. Still, you can get quite a few apps downloaded that way, and suddenly my (and soon your) iPad is much more useful.

For more on this, watch the video.

Thanks, Jishan.


Was the Long Tail a Lie? Ted Gioia’s thoughts and mine

I can't say if it was a lie. Maybe it was a fairytale: something too good to be true, but something many of us, me included, wanted to believe in. Whatever your thoughts, I recommend you read this strong critique of it: Where Did the Long Tail Go? by Ted Gioia. If you are a true believer, Gioia will get you rethinking it.

As for me, I think part of the problem is that online services nudge or even push us toward the short tail. There are advantages to them in selling us more of the short part of the curve (the part in red). We need services and aggregators that get our attention to spiral outwards, toward things we never considered before. Spotify still does that to an extent when it builds me playlists.

Another part of the problem is people's willingness to get out of their comfort zone and explore the long tail. Again, Spotify will recommend music to me, but I often find myself sticking with the tried and true. Services need to do a better job of encouraging people to try new things, or of making it easier to do so. Give people options, but in a smart way. I know it can be done. I hope it will be done.


Kubernetes and Clouds and much more (What I find interesting in tech, July 2022)


Since April, here are a ton of links I found useful while doing my work. Lots of good stuff on Kubernetes and cloud (both IBM's and AWS's), some cool hardware links, and some worthwhile software links. Plus other things! Check it out.

Kubernetes: plenty of good things here to explore if you are doing things with Kubernetes like I was:

Terraform: Relatedly, I was doing work with Terraform and these were useful:

IBM Cloud: one of the two clouds I have been working with. A lot of the work was Kubernetes on IBM Cloud, so you'll see some overlap:

AWS: I work with a lot of cloud providers. Mostly IBM Cloud, but also others like AWS:

Software: some of these were work related, but some are more hobby oriented.

Hardware: the pickings are few here

Finally: here's an odd assortment of worthwhile things:

Computer memory isn’t our memory and AI isn’t our intelligence


Since the beginning of the digital age, we have referred to quickly retrievable computer storage as “memory”. It has some resemblance to memory, but it has none of the complexity of our memories and how they work. If you talked to most people about this, I don’t think there would be many who would think they are the same.

Artificial Intelligence isn't our intelligence, regardless of how good it gets. AI will have some resemblance to our intelligence, but it has none of the complexity of our intelligence and how it works. Yet you can talk to many people who think that, over time, they will become the same.

I was thinking about that last week after the kerfuffle over the Google engineer who claimed their software was sentient. Many, many think pieces have been written about it; I think this one is the best I read from a lay person's perspective. If you are concerned about it or simply intrigued, read that. It's a bit harsh on Turing's test, but I think overall it's worth your time.

It is impressive what leaps information technology is making. But however much it resembles us as humans, it is not human. It will not have our memories. It will not have our intelligence. It will not have the qualities that make us human, any more than a scarecrow does.

On Apple, the Newton, the 90s and me


For people today, it may be hard to imagine Apple as anything other than a tremendously successful company. But in the 90s, it was the opposite. Under John Sculley and others, it was a company in major decline, all but at death's door before Steve Jobs came back.

In some ways the Newton you see above was emblematic of that time. It was a device Apple used to try to regain the magic it once had. It failed, but in some ways it was a glorious failure. (The PowerBook also came out at that time, and it was a fine machine, but the problems of Apple were baked into it.)

I’ve always had a fondness for the Newton, and wanted one for a long time, even though I could not justify getting one. And then Jobs returned and tossed it in the bin like so much crumpled paper. It was a smart decision, but a sad one for me.

That’s why I was really interested to read this recently: The Newton at 30. It’s a great rundown on that device. Reading it, I was happy to see that some of the original ideas found in the Newton later made their way into other mobile products from Apple. Good ideas deserve a home, even if they were never going to find that home in the Newton.

In the 90s I had a small role in developing IBM software that ran on Macs and allowed our customers to access the IBM Global Network via a Mac. I loved building Apple software, even if it was a nightmare at times. (Writing software for a rapidly declining company is no easy thing.) At the time I got to work on the PowerBook 1400 and 3400, hang out at Apple, and play around with the eMate 300. It was a good time despite the difficulty. I never got a Newton then, though I got close.

Later, in the second decade of the 21st century, I finally got to buy my own Newton! Mint condition, from Kijiji. 100+ bucks! Funny how a device that was so cutting edge when it first came out seems so limited now. It was a good reminder of how fast technology moves. I was still glad to have it. It's a wonderfully collectable device.

For more on the Newton, click that link.

Happy Anniversary, Newton. You were truly ahead by a century.

On the new Google Glass(es), 2022 edition

In 2013, Google gave the world Google Glass. While the high tech glasses seemed cool at first, they were eventually revealed to be terrible technology, and people sporting them became known as “glassholes”. Not good.

Google did not give up, though, and has unveiled a new version of their glasses at their recent annual conference on all things Google:

After announcing a whole new catalog of products, including the Pixel 6A, Pixel 7 and 7 Pro, Pixel Buds Pro, Pixel Tablet, and Pixel Watch, Google gave us a taste of an AR Glasses prototype they’ve been working on (labeled Proto 29) that combines natural language processing and transcription to provide subtitles to the real world. Wear the glasses and, in theory, you can understand any language. The glasses pick up audio and visual cues, translating them into text that gets displayed on your lens, right in your line of vision. These virtual subtitles overlay on your vision of the world, providing a contextual, USEFUL augmented reality experience that’s leaps and bounds ahead of what the Google Glass was designed to do in 2013.

I know, they're still a prototype. But it's exciting to think about! I could see them even showing you a potential response, just like Gmail does when it suggests replies. Quite an amazing tool for those who travel to places with different languages.

Among other things, this shows that tech can still be innovative and useful in ways we haven't even thought of. Good job, Google, for not giving up on this technology. Looking forward to the day when these go from prototype to the real thing.

Today in good robots: reforesting drones


I’m often critical of robots and their relatives here, but these particular drones seem very good indeed. As that linked article explains:

swarms of (these) seed-firing drones … are planting 40,000 trees a day to fight deforestation…(their) novel technology combines artificial intelligence with specially designed proprietary seed pods that can be fired into the ground from high in the sky. The firm claims that it performs 25 times faster and 80 percent cheaper compared to traditional seed-planting methodologies.

I am sure there is still a role for humans in reforestation, but the faster and cheaper it can be done, the better. A good use of technology.

A great article / repo to learn about Kubernetes, Terraform, IBM Cloud, Scripting, and more

If you are looking for a way to gain knowledge in a lot of different areas (Kubernetes, including ingress, services, and COS as a way of holding information, plus Terraform and more), then I recommend this article.

It has a link to a repo you can use. The repo had two issues at the time, so I forked a copy and fixed them. You can get my fork here.

What's nice about this is that it comes with some shell scripts that use Terraform to build and configure the cluster. It's a good way to learn many things at the same time. Recommended.
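Those scripts boil down to running the standard Terraform workflow in order. Here's a hypothetical Python rendering of the same steps, just to show how little there is to it:

    import subprocess

    # The standard Terraform workflow: init, plan, then apply the saved plan.
    for cmd in (["terraform", "init"],
                ["terraform", "plan", "-out=tfplan"],
                ["terraform", "apply", "tfplan"]):
        subprocess.run(cmd, check=True)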