Monthly Archives: March 2023

The biOrb Air 60 Terrarium is a very cool way to add plants to your place

As someone who developed a love of indoor plants over the pandemic, I have to say I was blown away by the biOrb Air 60 Terrarium. Sure, it’s pricey, and even a little big, but for anyone who has a few bucks and some space, it could be a cool addition to your home.

Check out that link to Design Milk and get more details on it. It’s cool.


Paul Kedrosky & Eric Norlin of SKV know nothing about software and you should ignore them

Last week Paul Kedrosky & Eric Norlin of SKV wrote this piece, Society’s Technical Debt and Software’s Gutenberg Moment, and several smart people I follow seemed to like it and think it worthwhile. It’s not.

It’s not worthwhile because Kedrosky and Norlin seem to know little if anything about software. Specifically, they don’t seem to know anything about:

  • software development
  • the nature of programming or coding
  • technical debt
  • the total cost of software

Let me wade through their grand and woolly pronouncements and focus on that.

They don’t understand software development: For Kedrosky and Norlin, what software engineers do is predictable and grammatical. (See chart, top right).

To understand why that is wrong, we need to step back. The first part of software development and software engineering should start with requirements. It is a very hard and very human thing to gather those requirements, analyze them, and then design a system around them that meets the needs of the person(s) with the requirements. See where architects are in that chart? In the Disordered and Ad hoc part in the bottom left. Good IT architects and business analysts and software engineers also reside there, at least in the first phase of software development. Getting to the predictable and grammatical section, which comes in later phases, takes a lot of work. It can be difficult and time consuming. That is why software development can be expensive. (Unless you do it poorly: then you get a bunch of crappy code that is hard to maintain or has to be dramatically refactored and rewritten because of the actual technical debt you incurred by rushing it out the door.)

Kedrosky and Norlin seem to exclude that from the role of software engineering. For them, software engineering seems to be primarily writing software. Coding in other words. Let’s ignore the costs of designing the code, testing the code, deploying the code, operating the code, and fixing the code. Let’s assume the bulk of the cost is in writing the code and the goal is to reduce that cost to zero.

That’s not just my assumption: it seems to be their assumption, too. They state: “Startups spend millions to hire engineers; large companies continue spending millions keeping them around. And, while markets have clearing prices, where supply and demand meet up, we still know that when wages stay higher than comparable positions in other sectors, less of the goods gets produced than is societally desirable. In this case, that underproduced good is…software”.

Perhaps that is how they do things in San Francisco, but the rest of the world moved on from that model ages ago. There are reasons that countries like India have become powerhouses in terms of software development: they are good software developers and they are relatively low cost. So when they say: “software is chugging along, producing the same thing in ways that mostly wouldn’t seem vastly different to developers doing the same things decades ago….(with) hands pounding out code on keyboards”, they are wrong because the nature of developing software has changed. And one of the ways it has changed is that the vast majority of software is written in places that have the lowest cost software developers. So when they say “that software cannot reach its fullest potential without escaping the shackles of the software industry, with its high costs, and, yes, relatively low productivity”, they seem to be locked in a model where software is written the way it is in Silicon Valley by Stanford-educated software engineers. The model does not match the real world of software development. Already the bulk of the cost of writing code in most of the world has been reduced not to zero, but to a very small number compared to the cost of writing code in Silicon Valley or North America. Those costs have been wrung out.

They don’t understand coding: Kedrosky and Norlin state: “A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment”. In their piece they use an example of AI writing some Python code that can “open a text file and get rid of all the emojis, except for one I like, and then save it again”. Even they know this is “a trivial, boring and stupid example” and say “it’s not complex code”.

Here’s the problem with writing code, at least with the current AI. There are at least three difficulties that AI code generators suffer from: triviality, incorrectness, and prompt skill.

First, the problem of triviality. It’s true: AI is good at making trivial code. It’s hard to know exactly how machine learning software produces this trivial code, but it’s likely because there are lots of examples of such code on the Internet for it to train on. If you need trivial code, AI can quickly produce it.
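
To make the triviality concrete, here is roughly what such a script might look like. This is my own sketch of the emoji-stripping example they describe, not their code; the file name, the kept emoji, and the character ranges in the regex are all placeholders.

```python
import re

# A rough sketch of the "trivial, boring and stupid" example: strip every emoji
# from a text file except one you want to keep, then save the file again.
# The file name and the kept emoji are placeholders.
KEEP = "🙂"
EMOJI_PATTERN = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def strip_emojis(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # Drop every matched emoji unless it is the one we want to keep.
    cleaned = EMOJI_PATTERN.sub(lambda m: m.group(0) if m.group(0) == KEEP else "", text)
    with open(path, "w", encoding="utf-8") as f:
        f.write(cleaned)

strip_emojis("notes.txt")
```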

That said, you don’t need AI to produce trivial code. The Internet is full of it. (How do you think the AI learned to code?) If someone who is not a software developer wants to learn how to write trivial code they can just as easily go to a site like w3schools.com and get it. Anyone can also copy and paste that code and it too will run. And with a tutorial site like w3schools.com the explanation for the code you see will be correct, unlike some of the answers I’ve received from AI.

But what about non-trivial code? That’s where we run into the problem of incorrectness. If someone prompts AI for code (trivial or non-trivial) they have no way of knowing whether it is correct, short of running it. AI can produce code quickly and easily for you, but if it is incorrect then you have to debug it. And debugging is a non-trivial skill. The more complex or more general you make your request, the more buggy the code will likely be, and the more effort and skill you have to contribute to make it work.

You might say: incorrectness can be dealt with by better prompting skills. That’s a big assumption, but let’s say it’s true. Now you get to the third problem. To get correct and non-trivial output (if you can get it at all), you have to craft really good prompts. That’s not a skill just anyone has. You will have to develop specific skills (prompt engineering skills) to get the AI to write Python or Go or whatever computer language you need. At that point the prompt used to produce that code is a form of code itself.

You might push back and say: sure, the prompts might be complex, but it is less complicated than the actual software I produce. And that leads to the next problem: technical debt.

They don’t understand technical debt: when it comes to technical debt, Kedrosky and Norlin have two problems. First, they don’t understand the idea of technical debt! In the beginning of their piece they state: “Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.”

That’s not how those of us in the IT community define it. Technical debt is not a lack of software supply. Even Wikipedia knows better: “In software development, technical debt (also known as design debt or code debt) is the implied cost of future reworking required when choosing an easy but limited solution instead of a better approach that could take more time”. THAT is technical debt.
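
To make that definition concrete, here is a tiny, made-up sketch of what the “easy but limited solution” looks like in practice: the quick version works today, but every new province or tax change means someone later pays to rework it. That future rework is the debt. The tax rates and provinces here are purely illustrative.

```python
# Easy but limited: assumes every sale is in Ontario with 13% tax.
# It ships today, but the hidden assumption is the technical debt.
def total_with_tax_quick(price: float) -> float:
    return price * 1.13

# The better approach takes more time up front but avoids the future rework.
TAX_RATES = {"ON": 0.13, "AB": 0.05, "BC": 0.12}  # illustrative rates only

def total_with_tax(price: float, province: str) -> float:
    return price * (1 + TAX_RATES[province])
```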

One of the things I do in my work is assess technical debt, either in legacy systems or new systems. My belief is that once AI can produce code that is non-trivial, correct, and based on prompts, we are going to get an explosion of technical debt. We are going to get code that appears to solve a problem and does so with a volume of Python (or Java or Go or what have you) that the prompt engineer generated and does not understand. It will be like copy-and-paste code, amplified. Years from now people will look at all this AI-generated code and wonder why it is the way it is and why it works the way it does. It will take a bunch of counter AI to translate this code into something understandable, if that will even be possible. Meanwhile companies will be burdened with higher levels of technical debt accelerated by the use of AI-developed software. AI is going to make things much worse, if anything.

They don’t understand the total cost of software:  Kedrosky and Norlin included this fantasy chart in their piece.

First off, most people or companies purchase software, not software engineers. That’s the better comparison to hardware.  And if you do replace “Software engineers” with software, then in certain areas of software this chart has already happened. The cost of software has been driven to zero.

What drove this? Not AI. Two big things that drove this are open source and app stores.

In many cases, open source drove the (licensing) cost of software to zero. For example, when the web first took off in the 90s, I recall Netscape sold their web server software for $10,000. Now? You can download and run web server software like nginx on a Raspberry Pi for free. Heck, you can write your own web server using node.js.

Likewise with app stores. If you wanted to buy software for your PC in the 80s or 90s, you had to pay significantly more than 99 cents for it. It certainly was not free. But the app stores drove the expectation people had that software should be free or practically free. And that expectation drove down the cost of software.

Yet despite developments like open source and app stores driving the cost of software close to zero, people and organizations are still paying plenty for the “free” software. And you will too with AI software, whether it’s commercial software or software for your personal use.

I believe that if you have AI generating tons of free personal software, then you will get a glut of crappy apps and other software tools. If you think it’s hard to find good personal software now, wait until that happens. There will still be good software, but developing it will cost money, and that money will be recovered somehow, just like it is today with free apps with in-app purchases or apps that steal your personal information and sell it to others. And people will still pay for software from companies like Adobe. They are paying for quality.

Likewise with commercial software. There is tons of open source software out there. Most of it is wisely avoided in commercial settings. However the good stuff is used, and it is indeed free to license and use.

However the total cost of software is more than the licensing cost. Bad AI software will need more capacity to run and more people to support it, just like bad open source does. And good AI software will need people and services to keep it going, just like good open source does. Some form of operations, even if it is AIOps (another cost), will need expensive humans to ensure the increasing levels of quality required.

So AI can churn out tons of free software. But the total cost of such software will go elsewhere.

To summarize, producing good software is hard. It’s hard to figure out what is required, and it is hard to design and build and run it to do what is required. Likewise, understanding software is hard. It’s called code for a reason. Bad code is tough to figure out, but even good code that is out of date or used incorrectly can have problems, and solving those problems is hard. And last, free software has other costs associated with it.

P.S. It’s very hard to keep up and counter all the hot takes on what AI is going to do for the world. Most of them I just let slide or let others better than me deal with. But I wanted to address this piece in particular, since it seemed influential and un-countered.

P.P.S. Besides all of the above, they also made some statements that just had me wondering what they were thinking. For example, when they wrote: “This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation.” Pure hype.

Or this: “Software is misunderstood. It can feel like a discrete thing, something with which we interact. But, really, it is the intrusion into our world of something very alien. It is the strange interaction of electricity, semiconductors, and instructions, all of which somehow magically control objects that range from screens to robots to phones, to medical devices, laptops, and a bewildering multitude of other things.” I mean, what is that all about?

And this:  “The current generation of AI models are a missile aimed, however unintentionally, directly at software production itself”. Pure bombast.

Or this hype: “They are “toys” in that they are able to produce snippets of code for real people, especially non-coders, that one incredibly small group would have thought trivial, and another immense group would have thought impossible. That. Changes. Everything.”

And this is flat out wrong: “This is just the beginning (and it will only get better). It’s possible to write almost every sort of code with such technologies, from microservices joining together various web services (a task for which you might previously have paid a developer $10,000 on Upwork) to an entire mobile app (a task that might cost you $20,000 to $50,000 or more).”


I eye, You eye, We all scream for AI (What I find interesting in AI, Mar. 2023)

Have you heard of…A.I.? Of course you have! You can’t go anywhere lately without reading or seeing something about AI. Not even the Kardashians can generate this much interest or hype about something. It’s incredible.

I’ve been collecting a number of links on the topics, which I’ve grouped below. As well, I have been blogging all week on the topic, trying to give my perspective on it all.

Things are changing rapidly when it comes to this subject. I hope these things help you gain a better understanding of where things stand at the moment, even if it is a brief moment.

Ideas/thoughts on AI:

Tools and technologies:

Science:

Finally:

  • Roger Schank passed away. He was a leader in the field of AI.
  • Cool stuff:  OpenWorm project is an example of just how complex organisms are.
  • I am normally a fan of Our World in Data, but their brief history of AI is far too rosy for me.

Will AI tools based on large language models (LLMs) become as smart or smarter than us?

With the success and growth of tools like ChatGPT, some are speculating that the current AI could lead us to a point where AI is as smart if not smarter than us. Sounds ominous.

When considering such ominous thoughts, it’s important to step back and remember that Large Language Models (LLMs) are tools based in whole or in part on machine learning technology. Despite their sophistication, they still suffer from the same limitations that other machine learning technologies suffer from, namely:

    • bias
    • explainability
    • overfitting
    • learning the wrong lessons
    • brittleness

There are more problems than those for specific tools like ChatGPT, as Gary Marcus outlines here:

  • the need for retraining to get up to date
  • lack of truthfulness
  • lack of reliability
  • it may be getting worse due to data contamination (Garbage in, garbage out)

It’s hard to know if current AI technology will overcome these limitations. It’s especially hard to know when orgs like OpenAI do this.

My belief is these tools will hit a peak soon and level off or start to decline. They won’t get as smart or smarter than us. Not in their current form. But that’s based on a general set of experiences I’ve acquired from being in IT for so long. I can’t say for certain.

Remain calm. That’s the best bit of advice I have so far. Don’t let the chattering class get you fearful. In the meanwhile, check out the links provided here. Education is the antidote to fear.

Are AI and ChatGPT the same thing?

Reading about all the amazing things done by the current AI might lead you to think that AI = ChatGPT (or DALL-E, or whatever people like OpenAI are working on). It’s true, it is currently considered AI, but there is more to AI than that.

As this piece explains, How ChatGPT Works: The Model Behind The Bot:

ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Model (LLMs).

Like ChatGPT, many of the current and successful AI tools are examples of machine learning. And while machine learning is powerful, it is just part of AI, as this diagram nicely shows:

To get an idea of just how varied and complex the field of artificial intelligence is, just take a glance at this outline of AI. As you can see, AI incorporates a wide range of topics and includes many different forms of technology. Machine learning is just part of it. So ChatGPT is AI, but AI is more than ChatGPT.

Something to keep in mind when fans and hypesters of the latest AI technology make it seem like there’s nothing more to the field of AI than that.

What is AI Winter all about and why do people who’ve worked in AI tend to talk about it?

It might surprise people, but work in AI has been going on for some time. In fact it started as early as the mid-1950s. From the 50s until the 70s, “computers were solving algebra word problems, proving theorems in geometry and learning to speak English”. They were nothing like OpenAI’s ChatGPT, but they were impressive in their own way. Just like now, people were thinking the sky’s the limit.

Then three things happened: the first AI winter from 1974 until 1980, the boom years from 1980-1987, and then the next AI winter from 1987-1993. I was swept up in the second AI winter, and like the first one, there was a combination of hitting a wall in terms of what the technology could do followed by a drying up of funding.

During the boom times it seemed like there would be no stopping AI and it would eventually be able to do everything humans can do and more. It feels that way now with the current AI boom. People like OpenAI and others are saying the sky’s the limit and nothing is impossible. But just like in the previous boom eras, I think the current AI boom will hit a wall with the technology (we are seeing some of it already). At that point we may see a reduction in funding from companies like Microsoft and Google and more (just like how we are seeing a pullback from them on voice recognition technology like Alexa and Siri).

So yes, the current AI technology is exciting. And yes, it seems like there is no end to what it can do. But I think we will get another AI winter sooner than later, and during this time work will continue in the AI space but you’ll no longer be reading news about it daily. The AI effect will also occur, and the work being done by people like OpenAI will just get incorporated into the everyday tools we use, just like autocorrect and image recognition are now just things we take for granted.

P.S. If you are interested in the history of the second AI winter, this piece is good.

What is the AI effect and why should you care?

Since there is so much talk about AI now, I think it is good for people to be familiar with some key ideas concerning AI. One of these is the AI effect. The cool AI you are using now, be it ChatGPT or DALL-E or something else, will eventually get incorporated into some commonplace piece of IT and you won’t even think much of it. You certainly won’t be reading about it everywhere. If anything you and I will complain about it, much like we complain about autocorrect.

So what is the AI Effect? As Wikipedia explains:

“The AI effect” is that line of thinking, the tendency to redefine AI to mean: “AI is anything that has not been done yet.” This is the common public misperception, that as soon as AI successfully solves a problem, that solution method is no longer within the domain of AI. Geist credits John McCarthy giving this phenomenon its name, the “AI effect”.

McCorduck calls it an “odd paradox” that “practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the ‘failures’, the tough nuts that couldn’t yet be cracked.”

It’s true. Many things over the years that were once thought of as AI are now considered simply software or hardware, if we even think of them at all. Whether it is winning at chess, recognizing your voice, or recognizing text in an image, these things are commonplace now, but were lofty goals for AI researchers once.

The AI effect is a key idea to keep in mind when people are hyping any new AI as the thing that will change everything. If the new AI becomes useful, we will likely stop thinking it is AI.

For more on the topic, see: AI effect – Wikipedia

On Barney Frank and Isaac Chotiner too

There is a serial killer quality about Isaac Chotiner and his interviews. He finds someone who likes to talk and who is in the wrong and he proceeds to eviscerate them through a series of questions in the New Yorker. He’s done it so often that people like Dan Drezner wrote this: Why Do People Talk to Isaac Chotiner?

Barney Frank was the one person who I saw stand up to him in an old interview and avoid being sliced up. I was impressed then. I was less impressed recently when Chotiner interviewed him about working for Signature Bank. Frank comes across as pugnacious still, but clearly he is wounded and on the defensive. Here are some excerpts.

On Frank’s own actions to weaken Dodd-Frank:

Chotiner: Do you see any connection between the weakening of Dodd-Frank a few years ago and the collapse?

Frank: I came to the conclusion shortly after we passed the bill that fifty billion dollars was too low. I decided that by 2012, and, in fact, said it publicly. The reason I say that is that I didn’t go on the board of Signature until later. In fact, I had never heard of Signature Bank at the time when I began to advocate raising the limit. This is relevant, obviously, because Signature was a beneficiary of that.

On why it was on the regulators to choose to go after banks like Signature Bank, here is what Frank had to say:

The power to look at liquidity, to increase liquidity and to say, You have too little—they had every power they needed to do that. [The bill allowed regulators to keep liquidity and capital requirements on banks with total assets between a hundred billion and two hundred and fifty billion, but no longer mandated they do so.] I will tell you, as a member of the board of Signature, we underwent some discussions about liquidity, and the need to increase liquidity or maintain it.

On the limits of stress tests (Chotiner’s questions are labeled with his name; the rest is Frank):

Chotiner: But isn’t the point of stress tests to see how a bank will do under different scenarios, like the one we saw?

Frank: Yeah, that is what a stress test does. It’s an artificial but valid test. I do not think that a stress test would have helped in this situation.

Chotiner: Because?

Frank: Well, this all came up very suddenly. I don’t know what a stress test would have shown. A stress test might have been helpful, but part of it was that stress tests were for institutions large enough that it wouldn’t just be about them failing—it would be that their failing could cause great waves. I think that the impact of this failure has been contained, which it wouldn’t have been if it were JPMorgan.

On why he went to work for Signature Bank:

No, that’s the answer to, “Why are you doing this? It’s inconsistent.” No, I went on it, frankly, for two reasons. One: it paid well. I don’t have a pension and, having quit, I wanted to make some money. [Frank declined to participate in the congressional pension system.]

In short: weakening Dodd-Frank was a good thing, and removing mandatory liquidity requirements from banks like Signature Bank was a good thing, and also the stress tests are not that good. Also it’s fine for political leaders to go work for the people they used to regulate and make lots of money.

The whole interview is worth reading. Unless you were a fan of Barney Frank, the way I once was. Now he just sounds like a character from The Big Short.

An odd piece on SNL from GQ


This is an odd piece in GQ: The New Cast Reshaping SNL’s Next Decade. It states: “After a slew of exits, Saturday Night Live is reloading—with a squad of young comics that could form the nucleus of the show for years to come.”

It’s odd because yes, there have been a slew of exits, and yes there are new comics, but if you have been watching it recently, the comics dominating it now seem to be people like Heidi Gardner and Bowen Yang, and of course, the great Kenan Thompson. To see what I mean, check out this recap of a recent episode with Travis Kelce hosting. Or watch tonight. They aren’t the new people, and they aren’t the older comics leaving.

The new comics are no doubt good, and they likely appealed more to the GQ readership than the people I named. Plus everyone wants to talk about what’s new. But I can see the current veterans being around and at the forefront when SNL celebrates half a century two years from now.

A little perspective, please, GQ. 🙂

P.S. As an aside, I’ve been a fan of both GQ and SNL since the 70s. Good to see them both still around and being current.

The best pinot grigio is pinot gris

I have been a long-time non-lover of pinot grigio. (See here). I’ve tried a lot, even Alto Adiges, and I am still not keen. I’d rather drink something made from another grape.

That said, I was somewhat reconsidering my opinion after going over this list from Food and Wine: world’s best pinot gris and pinot grigio. Most of the wines on the list are pinot gris and not pinot grigio. It made me think that the problem may not be the grape but what Italians do with it. I have often enjoyed what the Alsatians do when they make their pinot gris wines. Those wines are flavorful and great either with food or by themselves.

So if you are a fan of pinot grigio, I recommend you consider trying some pinot gris and expand your taste buds with that. And if you are not a fan of pinot grigio, do the same! You may find you like what the Alsatians (and Americans) can do with it.

On Shrinking, and some thoughts on my limited return to TV

For the last 30 years or so I have not watched TV shows. I’ve watched movies at home and other things like news and sports, but nothing like the Sopranos or Breaking Bad or Family Guy or…well, you name it. I wrote about it here.

Lately I have been watching television again. A lot of that has to do with having someone great to watch it with, as well as someone who knows what I might like. Having more time at home during the pandemic also helped.

I started off by watching Ted Lasso, which I thought was superb. Then The Crown (loved the first two seasons mainly). Followed up by Slow Horses (also great). I began to think: hey, I might love TV again.

But then I watched Loot, and while I think Maya Rudolph is a genius, I could not watch much of that. Same with Hacks, even though, again, Jean Smart is amazing. Which brings me to Shrinking.

Like Loot and Hacks, at first I really liked it. But then I just started to feel fatigue from the strained writing. (Hey writers, writing nonstop about sex makes me think you’re a bunch of frat boys.) I also remembered the problem with situation comedy (situational dramedy?) and the need to create situations just to keep the story going. I see that often in Shrinking. (Frat boys: I know, let’s get the main character to sleep with his coworker! Hijinx will ensue!)

Like Loot and Hacks, having someone great (in this case, Harrison Ford) is a good draw and he makes me want to watch it. But like those two, there’s not much more that makes me want to watch it. (I mainly don’t care what happens to the other characters, which is different than Ted Lasso or Slow Horses, where I am invested in many of the characters). It’s pleasant enough, and occasionally funny enough. And kudos to them for getting a season two: clearly people like it.

It’s been fun watching TV, mainly because I have someone great to watch it with. (Thanks, Lisa!) But if I didn’t have that, I’d go back to my old ways. TV is different in some ways (e.g. no Apple TV in the 90s) but in a lot of ways, it’s hardly changed at all.

A reminder of how great WiReD magazine was, and how often wrong it was….

A reminder of how great WiReD magazine was — and how often it was wrong — can be found in this great piece by Dave Karpf: A WIRED compendium.

I bought the first copy of WiReD when it came out, and was a buyer and collector for some time. It was everything I loved in a magazine: smart and stylish and full of ideas.

WiReD was a perfect title for it too. The publication was about how the world was becoming interconnected through the rapid build out of the Internet, but it was also about how our brains were changing as a result of it. It covered both of those areas well.

Dave’s piece also covers some of my favorite things it got wrong, from the promise of PUSH technology (companies HATED Pointcast for flooding their networks and soon worked to shut it down) to digitally encoded smell (right?? yeah, no) to a cover on how Second Life was the future (hey, I thought that too).

WiReD got plenty right, too. But more than right or wrong, it captured the zeitgeist of the 90s and early 2000s and generally provided insights into how information technology was affecting us.

If you have only read the magazine recently, you might not get what all the fuss was about. Check out that summary from Dave Karpf: you will get a history lesson and hopefully a glimpse of WiReD and why it was so great.

(For more on it, I also recommend Wikipedia)


On Canadian art forgeries, now and then

If you think of art forgery at all, you likely think of internationally known painters like Basquiat. But did you know that here in Canada we also have a history of art forgery? You can read about it here: how a forgery scandal rocked the canadian art establishment in 1962. Of course, you don’t have to go back decades to find this occurring. Only recently arrests were made in a Norval Morrisseau forgery investigation kicked off by a member of the band, Barenaked Ladies. And current forgeries are not limited to Morrisseau. Fake works of the artist Maud Lewis are also coming onto the scene. (And are likely here already.)

Art forgeries are everywhere, including Canada. With that type of money involved, it’s not too surprising.


How good is the repairable phone from Nokia?

Nokia has a new phone out, the G22, which you can repair on your own. When I heard that, I thought: finally. And if you read what Nokia writes, you might even get excited, like I initially did. If you feel that way too, I recommend you read this in Ars Technica: the Nokia G22 pitches standard low-end phone design as repairable. Key thing they note: “The G22 is a cheap phone that isn’t water-resistant and has a plastic back.” It goes on: “But if you ask, “What deliberate design decisions were made to prioritize repair?” you won’t get many satisfying answers.”

I get the sense that Nokia has made this phone for a certain niche audience, as well as for regulators demanding repairable phones. I hope I am wrong. I hope that Nokia and others strive to make repairability a key quality of their future phones. That’s what we need.

Happy Monday! Would you like to be more productive, more focused, and communicate well?

Hey there! Would you like to be more productive, more focused, and communicate well? Of course you would! So for you (and me) I have three good links that can help you with that.

This link will help you give the best presentation of your life. And this one  will help you be more productive. Finally, this will help you stay focused by helping you stay off certain apps on your phone.

How can the best white paint colours for 2023 not include cloud white?? :)

Chatelaine has a recent piece out, The Best White Paint Colours, According To Decor Experts, and the best white paint colours are mostly Benjamin Moore paints. Despite that, Cloud White, 967, OC-130, is not on the list! How can this be?

I mean really?

At first I thought that maybe the paint company no longer makes it. But nope, it still exists. Still looks great too. (See image above.)

Sure, White Dove (OC-17) is fine (see below):

But to my mind Cloud White is still the best white.

Who knows, though?  Maybe cloud white has become passe. It was a big thing with designers a decade ago.  Maybe the new ones want new whites.

That all said, I do know that if you want white, you can’t go wrong with white from Benjamin Moore. So check out that Chatelaine piece or the other pieces on my blog and see what I mean.

How to improve yourself this weekend

For some, the weekend is either a time of relaxing or a time of catching up. I think that it can always be a time to improve yourself in some way. Here are some ideas for you:

Get a hobby: Here’s a good piece on how to start a hobby. Perhaps drawing could be that hobby. Here’s how to get over yourself and start drawing. And even if you don’t think you are very good, remember: drawing can also be good for your mental health.

Improve your plant game: Plants make me and others happy. If you feel the same, maybe take some time this weekend and upgrade your plants: here’s when it’s time to repot indoor plants.

Get fitter: start with this piece, two simple ways to get fitter faster. If you need exercise routines, try these fitness routines from Darebee. Or use this: strength training. Some people do better with devices to help them. If that’s you, then use this device to improve your fitness.

Like drawing, fitness can help you in many ways. For example, read this: How To Reframe Your Relationship With Exercise. And don’t forget, fitness is more than exercise. It is also about eating well. Here’s how you can eat better: 4 easy strategies for adding more vegetables to your plate.

Get fashionable: sometimes new clothes can help you get out there. If that’s you, I recommend these new balance 574h hiking sneakers, the new balance 997h ice blue sneakers, or this intro to ponto footwear. Maybe even jordan system23 clogs.

Finally, here’s a guide to  stop ruminating, if that’s something you do.

Regardless of what you decide to do, I hope your weekend is a good one.

If you want to keep track of the COVID-19 Wastewater Signal in Ontario, bookmark this

You can either bookmark this post or the actual URL that makes up the image above. The URL (or more accurately, URI) of the image stays the same, I think, but the data changes.

I’m glad it exists. I check the hospitalization and ICU numbers that come out every Thursday and they seem to align with the wastewater signal. That’s an indication for me at any given week how we are doing in terms of COVID-19, despite the dearth of other metrics like case loads or deaths.

While things in the first quarter of 2023 are better than the first quarter of 2022, there are still relatively high levels of COVID-19 in the wastewater. Manage your risk accordingly.

For more on wastewater data, go here.

What is cool? How about this 20-Story Hotel in Sweden Made Almost Entirely from Wood?

It exists:

Standing 20 stories tall, The Wood Hotel is the world’s tallest hotel mainly made from wood. Located at the birthplace of cross-country skiing, Skellefteå in Swedish Lapland, the 205-room property was built from locally harvested spruce and pine which smells awesome and absorbs more CO2 than it uses.

Now that’s cool. Would love to stay there. Would love to see more tall buildings built this way.

For more on it, see this: This 20-Story Hotel in Sweden Is Made Almost Entirely from Wood

On restaurants now, and then


I like that Josh Barro stepped in (on?) the pseudo-controversy that arose when Joe and Jill Biden ordered the same meal by saying, yes, it’s fine to order the same dish as your spouse. I mean, of course it is, but that didn’t stop people from arguing otherwise.

Once he was on the topic, he had a number of other recommendations such as “Consider the restaurant’s specialty”  and  “Try to be ready to order by the time your server asks if you know what you want.” So much of it is common sense, but as we all know, so much common sense like this is ignored by people. Maybe even you. I recommend you go read that and adopt those recommendations.

Speaking of restaurants, this is a very interesting walk down memory lane or history, depending how old you are: The 40 Most (American) Important Restaurants of the Past 40 Years. Some of them are well known: Chez Panisse, Spago, The French Laundry. Others are more obscure. Regardless, it’s a great article. (I slipped in American, because it is only American restaurants.)

What’s better than a well made chair? How about one that’s made sustainably?

Yeah, the Sova Lounge chair is ergonomic and comfortable, but it’s also made from sustainably sourced wood. Oh it’s also gorgeous.

At some point most if not all the things we buy will be made from sustainably sourced material. As it should be. Here’s to more things like the Sova.

For more on this beautiful and smart chair, head over to Yanko Design.


No, prompt engineering is not going to become a hot job. Let a former knowledge engineer explain

With the rise of AI, LLMs, ChatGPT and more, a new skill has arisen. The skill involves knowing how to construct prompts for the AI software in such a way that you get an optimal result. This has led a number of people to start saying things like this: prompt engineer is the next big job. I am here to say this is wrong. Let me explain.

I was heavily into AI in the late 20th century, just before the last AI winter. One of the hot jobs at that time was going to be knowledge engineer (KE). A big part of AI then was the development of expert systems, and the job of the KE was to take the expertise of someone and translate it into rules that the expert system could use to make decisions. Among other things, part of my role was to be a KE.
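
If you have never seen an expert system, the rules a KE captured looked, very roughly, like this. This is a toy sketch of my own, not from any real system I worked on; the loan-approval facts and thresholds are invented for illustration.

```python
# A toy, made-up example of expert-system-style rules a knowledge engineer
# might have elicited from a loan officer and encoded for a rules engine.
RULES = [
    (lambda f: f["credit_score"] < 600,               "decline"),
    (lambda f: f["income"] < 2 * f["annual_payment"], "refer to underwriter"),
    (lambda f: True,                                  "approve"),  # default rule
]

def decide(facts: dict) -> str:
    # Fire the first rule whose condition matches the facts.
    for condition, action in RULES:
        if condition(facts):
            return action

print(decide({"credit_score": 710, "income": 90000, "annual_payment": 12000}))
# -> approve
```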

So what happened? Well, first off, AI winter happened. People stopped developing expert systems and went and took on other roles.  Ironically, rules engines (essentially expert systems) did come back, but all the hype surrounding them was gone, and the role of KE was gone too. It wasn’t needed. A business analyst can just as easily determine what the rules are and then have a technical specialist store that in the rules engine.

Assuming tools like ChatGPT were to last, I would expect the creation of prompts for them to be taken on by business analysts and technical specialists. Business as usual, in other words. No need for a “prompt engineer”.

Also, you should not assume things like ChatGPT will last. How these tools work is highly volatile; they are not well structured things like programming languages or SQL queries. The prompts that worked on them last week may result in nothing a week later. Furthermore, there are so many problems with the new AI that I could easily see them falling into a new AI winter in the next few years.

So, no, I don’t think Prompt Engineering is a thing that will last. If you want to update your resume to say Prompt Engineer after you’ve hacked around with one of the current AI tools out there, knock yourself out. Just don’t get too far ahead of yourself and think there is going to be a career path there.

More on the indigenous people in Canada (winter 2023)

Here are a number of good pieces I’ve come across concerning indigenous groups in Canada exerting their rights both politically and economically.

First up, on the West Coast there’s this story of the Squamish who are “transforming the land (seen above) into one of the largest Indigenous-led development in Canada’s history, on its own terms — free from the rules that bind most urban developers. But not everyone is happy about the nation’s power and autonomy over its project”. Second, in Central Canada, there’s this story of an Indigenous cannabis shop in London that could be major test for Ontario. I also came across this story on the Innu out East fighting for what’s theirs. It states that although “they’re getting financial compensation, the Innu have yet to receive the rest of what was agreed upon: self-governance.”

I strongly believe that indigenous people of Canada need to have more than political power to succeed: they also need economic power. So I am glad whenever I see stories like this of a Group of First Nations and Metis communities acquiring minority stake in 7 Enbridge pipelines. There’s still much to be done, of course, as this story shows: 25 years after the Delgamuukw case the fight for land is contentious.

Despite setbacks and roadblocks, there’s progress, as this story illustrates: the federal government and 325 First Nations agreed to settle, for $2.8 billion, a class-action lawsuit that sought reparations for the loss of language and culture brought on by Indian residential schools. Not all progress is financial, but it still matters: Residential schools described as genocide by House of Commons.

Some other stories of note:

Flaco the Owl! A free bird in New York….

Flaco, in case you haven’t heard, is a Eurasian eagle-owl that escaped the Central Park Zoo when vandals opened his enclosure. The zookeepers tried to lure him back to captivity, but he wasn’t having it. They weren’t being mean: there was a good chance he could die in the wild, even if the wild currently consists of Central Park. Instead, he seems to be thriving, flying around the park and dining on the many rats available to him.

I think one of the reasons we love him is that Flaco and his new freedom are a great metaphor that gives us hope. I also think we love him because he is a handsome bird! Regardless, we all want him to be independent and well. It makes me happy every day to read about him.

If by chance you don’t know about him, the Times has a story on him, here. You can find lots of people talking about him on twitter. He even has an account dedicated to him.

Some thoughts on Palm and the rise of the handheld computer

This tweet yesterday got me thinking:

Two big tech things happened in the late 90s: one was the adoption of the Web, and two was the adoption of handheld computers. While Apple and its Newton may have been the first to go big in this area, it was Palm and its Pilot device that was truly successful. The Newton came out in 1993 and was killed by Jobs in 1998, while the Palm came out in 1997 and sold like gangbusters. (Interestingly, the Blackberry came out in the late 90s too.)

To appreciate why the Palm Pilot was so successful, it helps to know how things were back then. In the 90s we were still in the era of rolodexes and Dayrunners. Every year I would head down to the local paper shop (in Toronto I went to the Papery on Cumberland) and get my latest paper refills for the year and manually update my calendar and pencil things in. (And god forbid you ever lost it.) The Palm Pilot promised to get rid of all that. You could enter it all in the handheld device and then sync it up with your computer. It solved so many problems.

It also avoided the problems the Newton had. Unlike the Newton, its handwriting recognition was simpler, which made it better. It was relatively cheap and much cheaper than the Newton. And it worked on the PC. All those things also helped with its success.

What did not help Palm was a deluge of competition in this space, with everyone from Sony to Microsoft to RIM to deal with. They continued to make good devices like the Tungsten, but by then I had already moved over to the Blackberry. I wasn’t alone in this regard.

I still have a Palm Pilot. It’s a well designed device, even if the functionality it possesses seems quaint now. But back then, it was a force of change. It led the revolution in computing whereby instead of sitting in front of a computer, we carried one around in our hands. I would not have guessed it at the time, as I looked up my calendar or made my notes. I thought it was just a personal digital assistant. It turned out to be a world changer.


On Lent, Sacrifice, and Giving Things Up


Atheists and agnostics like myself sometimes find themselves longing for or at least missing elements of the religious life. (Alain de Botton explored this in his book, Religion for Atheists.) Among these are periods of reflection and sacrifice, like Lent. Some people support something like a secular Lent, while others argue that “secular Lent” misses the point, and that:

Lent, fundamentally, is about facing the hardest elements of human existence — suffering, mortality, death. That the season has turned into giving up Twitter shows that we haven’t gotten good at talking about them yet.

Agreed. But that doesn’t mean you can’t benefit from making personal sacrifices for a period of time in order to see yourself and your place in the world in a new and different way. A period of chosen sacrifice can be a spiritual practice no matter what you believe. And choosing to do it at this time of year may be the best time to do it.

If you agree and you want help with quitting something, this can help. If you want to know more about Lent, this can help. If you are not religious but this appeals to you, consider reading de Botton’s book.

Good luck with whatever you decide to do.

On gaining an appreciation for London

Over at The New York Times, Joshua Bell speaks of London:

“The first time I came to London, I was 17,” the violinist Joshua Bell, now 54, told me. We were at dinner together following a recent performance of his at Wigmore Hall, a small but renowned concert hall. “I came with my parents to make my first album,” he continued. “This was in the ’80s, and I remember thinking there wasn’t a lot of variety in food. Now, of course, it’s great.”

Like Bell, I first went to London in the 80s and did not appreciate it. I thought the food was limited, the hotels were terrible, and concluded it was historically wonderful but not a place I’d visit again.

But I’ve returned in the last year thanks to people I love and I’ve gained a whole new appreciation for the city. Like Bell, I agree: the food now is great. The hotels are great, too. And the things that always made London worthwhile are still there. Plus, it’s relatively cheap compared to what it used to be. (Thanks, Brexit, I guess).

I highly recommend reading his appreciation of London. Better still, go there and appreciate it for yourself.

As for me, if I was going again, I’d make sure I revisited St. JOHN, Brutto’s, and Noble Rot. In between going to the British Museum, The Tate, the National Gallery and of course Flying Tiger Copenhagen. 🙂

(Photo by Gabriel Isserlis of Fidelio Cafe: a link to the image in the story)