Category Archives: nytimes.com

How to forge a painting in the Louvre 

Easy! Just follow these three simple steps:

  1. Apply for one of the 250 permits the museum gives out each year.
  2. Bring your supplies and stand in front of the painting you want to copy. You can do this most days from September through June, 9:30 a.m. to 1:30 p.m.
  3. Start painting.

Ok, it’s not quite that easy. Even if you can perfectly reproduce the work you stand before, the staff of the Louvre take steps to ensure no one mistakes your work for the original, as this NYTimes article points out. For example, the staff made sure that the copyists used

canvases that were one-fifth smaller or larger than the original, and that the original artists’ signatures were not reproduced on the copies. Then [the staff] stamped the backs of the canvases with a Louvre seal, added [the staff’s] own signature and escorted [the copyists] from the museum.

It’s a fine article highlighting a great tradition at the Louvre: well worth reading.

(Photo by Ivan Guilbert / COSMOS, linked to in the article)


An introduction to Richard Thaler, winner of this year’s Nobel Prize in Economics

It is often hard to appreciate the work of Nobel Prize winners, including those in economics. Thaler is not one of those winners: his work is very approachable for laypeople, and its benefits are obvious.

Here’s one example of how his work led to better pension outcomes for people.

YouTube is a great source of videos on Thaler; if you want to understand the thinking behind his work, that’s a good place to start.
In addition, the New York Times covers his win here, and it is another good introduction. Finally, here is a piece in the Times that Thaler wrote himself on the power of nudges. If you do anything, read that.

Good to see him win.

Two portraits of a great writer: Robert Caro


I find Caro a fascinating person, and this portrait of him in the Paris Review is well worth reading: Paris Review – Robert Caro, The Art of Biography No. 5.

It’s worth comparing it to this piece on him in the New York Times about his routine, including how he keeps a separate office in Manhattan just to write, and how he wears formal business attire to do so. A rare life writing about another rare life.

Another cautionary tale: this time regarding Bleecker Street in NYC

The story of Bleecker Street’s Swerve From Luxe Shops to Vacant Stores in the NYTimes is one playing out in many cities throughout the world, though perhaps nowhere as extreme as this. It’s a big problem when money comes flooding into neighborhoods and cities, disrupting the people who live there and in some cases making those areas unlivable. Most people need somewhat stable places to live, but unstable social systems (capitalist or otherwise) can make that difficult unless other social systems (like local governments) press back against the instability. As more of the world moves from rural to urban areas, the tools to make streets and cities livable need to be developed and put to use.

Anyone living in a growing city needs to read this piece. Recommended.

(Photographs by Chris Mottalini for The New York Times)

The Real Bias Built In at Facebook <- another bad I.T. story in the New York Times (and my criticism of it)

There is so much wrong in this article, The Real Bias Built In at Facebook – The New York Times, that I decided to take it apart in this blog post. (I’ve read so many bad IT stories in the Times that I stopped critiquing them after a while, but this one in particular bugged me enough to write something.)

To illustrate what I mean, here are some excerpts in italics followed by my thoughts in non-italics.

  • First off, there is the use of the word “algorithm” everywhere. That alone is a problem. For an example of why that is bad, see section 2.4 of Paul Ford’s great piece on software, What is Code? As Ford explains: “‘Algorithm’ is a word writers invoke to sound smart about technology. Journalists tend to talk about ‘Facebook’s algorithm’ or a ‘Google algorithm,’ which is usually inaccurate. They mean ‘software.’” Part of the problem is that Google and Facebook talk about their algorithms, but really they are talking about their software, which incorporates many algorithms. For example, Google does it here: https://webmasters.googleblog.com/2011/05/more-guidance-on-building-high-quality.html (though at least Google says algorithms, plural). Either way, the framing is wrong: it’s software, not an algorithm, and if you can’t see the difference, that is a good indication you should not be writing think pieces about I.T.
  • Then there is this quote: “Algorithms in human affairs are generally complex computer programs that crunch data and perform computations to optimize outcomes chosen by programmers. Such an algorithm isn’t some pure sifting mechanism, spitting out objective answers in response to scientific calculations. Nor is it a mere reflection of the desires of the programmers. We use these algorithms to explore questions that have no right answer to begin with, so we don’t even have a straightforward way to calibrate or correct them.” What does that even mean? To me, it implies that any socially oriented software (as opposed to, say, banking software or airline reservation software) is imprecise or unpredictable. At best that is only slightly true, and mostly it is false. Facebook and Google both want to give you relevant answers. If you start typing “restaurants” or some other term into Google’s search box, Google will start suggesting answers, and those answers will very likely be relevant to you. It is important to Google that this happens, because this is how they make money from advertisers. They have ways of calibrating and correcting this; in fact, I am certain they spend a lot of resources making sure you get the correct answer, or close to it. Facebook is the same way. The results you get back are not random. They are designed, built and tested to be relevant to you, and the more relevant they are, the more successful these companies are. The responses are generally the right ones.
  • “If Google shows you these 11 results instead of those 11, or if a hiring algorithm puts this person’s résumé at the top of a file and not that one, who is to definitively say what is correct, and what is wrong?” Actually, Google can say; they just don’t. It’s not in their business interest to explain in detail how their software works. They do explain in general terms, to help people keep their sites relevant (see the link I provided above). But if they provided too much detail, bad sites would game the results and make Google search worse for everyone. As well, too much detail would make it easier for other search engines (yes, they still exist) to compete with them.
  • “Without laws of nature to anchor them, algorithms used in such subjective decision making can never be truly neutral, objective or scientific.” This is simply nonsense.
  • “Programmers do not, and often cannot, predict what their complex programs will do.” Also untrue. If this were true, IBM could not improve Watson to be more accurate, and Google could not have its sales reps convince ad buyers that it is worth paying Google to show their ads. Same for Facebook, Twitter, and any website that depends on advertising as a revenue stream.
  • “Google’s Internet services are billions of lines of code.” So what? How is line count a measure of complexity? I’ve seen small, poorly maintained codebases that were very hard to understand, and large, well-maintained ones that were simple to understand.
  • “Once these algorithms with an enormous number of moving parts are set loose, they then interact with the world, and learn and react. The consequences aren’t easily predictable. Our computational methods are also getting more enigmatic. Machine learning is a rapidly spreading technique that allows computers to independently learn to learn — almost as we do as humans — by churning through the copious disorganized data, including data we generate in digital environments. However, while we now know how to make machines learn, we don’t really know what exact knowledge they have gained. If we did, we wouldn’t need them to learn things themselves: We’d just program the method directly.” This is a cluster of ideas slammed together, a word sandwich: layers of phrases that say nothing. It makes it sound like AI has been unleashed upon the world and we are helpless to do anything about it, which is ridiculous. It is also vague enough to be hard to dispute without explaining in detail how A.I. and machine learning actually work, yet it sounds knowledgeable enough that many people will think it carries greater meaning.
  • “With algorithms, we don’t have an engineering breakthrough that’s making life more precise, but billions of semi-savant mini-Frankensteins, often with narrow but deep expertise that we no longer understand, spitting out answers here and there to questions we can’t judge just by numbers, all under the cloak of objectivity and science.” This is just scaremongering.
  • “If these algorithms are not scientifically computing answers to questions with objective right answers, what are they doing? Mostly, they ‘optimize’ output to parameters the company chooses, crucially, under conditions also shaped by the company. On Facebook the goal is to maximize the amount of engagement you have with the site and keep the site ad-friendly. You can easily click on ‘like,’ for example, but there is not yet a ‘this was a challenging but important story’ button. This setup, rather than the hidden personal beliefs of programmers, is where the thorny biases creep into algorithms, and that’s why it’s perfectly plausible for Facebook’s work force to be liberal, and yet for the site to be a powerful conduit for conservative ideas as well as conspiracy theories and hoaxes — along with upbeat stories and weighty debates. Indeed, on Facebook, Donald J. Trump fares better than any other candidate, and anti-vaccination theories like those peddled by Mr. Beck easily go viral. The newsfeed algorithm also values comments and sharing. All this suits content designed to generate either a sense of oversize delight or righteous outrage and go viral, hoaxes and conspiracies as well as baby pictures, happy announcements (that can be liked) and important news and discussions.” This is the one thing in the piece I agreed with, and it points to the real challenge with Facebook’s software. I think the software IS neutral, in that it is not interested in the content per se so much as in how users respond (or don’t respond) to it. What is NOT neutral is the data it works from. Facebook’s software is as susceptible to GIGO (garbage in, garbage out) as any other software: if a lot of people on Facebook are sending around cat pictures and stupid things some politician said, people are going to respond to it, and Facebook’s software is going to respond to that response. (A toy sketch of this feedback loop follows this list.)
  • “Facebook’s own research shows that the choices its algorithm makes can influence people’s mood and even affect elections by shaping turnout. For example, in August 2014, my analysis found that Facebook’s newsfeed algorithm largely buried news of protests over the killing of Michael Brown by a police officer in Ferguson, Mo., probably because the story was certainly not ‘like’-able and even hard to comment on. Without likes or comments, the algorithm showed Ferguson posts to fewer people, generating even fewer likes in a spiral of algorithmic silence. The story seemed to break through only after many people expressed outrage on the algorithmically unfiltered Twitter platform, finally forcing the news to national prominence.” Also true. Additionally, Facebook got into trouble for the research showing its software can manipulate people, which it demonstrated by… manipulating people in experiments on them! That was dumb, unethical, and possibly illegal.
  • “Software giants would like us to believe their algorithms are objective and neutral, so they can avoid responsibility for their enormous power as gatekeepers while maintaining as large an audience as possible.” Well, not exactly. It’s true that Facebook and Twitter are flirting with becoming more like news organizations, but I don’t think they have decided whether to make the leap. Mostly they are focused on channels that let them reach ever larger audiences for their ads, with few if any restrictions.
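
Since that GIGO point is the crux of where I agree with the piece, here is a minimal sketch of the feedback loop it describes. Everything in it is invented for illustration (the post names, the scoring weights, the 10% chance of a like); it is emphatically not Facebook’s actual code, just a toy engagement-driven ranker. Notice that even this toy is software built from several algorithms: a scoring rule, an impression-allocation rule, and a sort.

```python
# Toy engagement-driven feed ranker illustrating the GIGO feedback loop.
# All names and numbers are invented; this is NOT Facebook's real system.
import random

posts = [
    {"id": "cat-picture", "likes": 50, "comments": 5},
    {"id": "ferguson-protest", "likes": 2, "comments": 1},
    {"id": "baby-announcement", "likes": 30, "comments": 10},
]

def engagement_score(post):
    # The rule is "neutral": it never inspects content, only reactions.
    return post["likes"] + 3 * post["comments"]

def run_feed(rounds=5, impressions_per_round=100):
    for _ in range(rounds):
        total = sum(engagement_score(p) for p in posts)
        for post in posts:
            # Impressions are allocated in proportion to past engagement,
            # and each impression has some chance of earning a new like,
            # so popular posts compound while quiet ones fall silent.
            impressions = int(impressions_per_round * engagement_score(post) / total)
            post["likes"] += sum(random.random() < 0.1 for _ in range(impressions))

run_feed()
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
```

Run it a few times: the hard-to-like post almost always finishes last, even though the ranker never reads a word of content. That, not the hidden beliefs of programmers, is the “spiral of algorithmic silence.”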

In short, like many of the IT think pieces I have seen in the Times, it is filled with wrongheaded generalities and overstatements, plus a few concrete examples buried in the piece that were likely the seed of the idea to write it in the first place. Terrible.

Finally! The cappuccino scandal revealed by The New York Times. (I am not joking)

For some time, I have been complaining that cappuccinos have evolved into something I call “latte-ccinos”: a drink somewhere between a latte and a cappuccino. Good to see that the New York Times has a piece on this, highlighting the sad state of North American coffee and in particular the sham cappuccinos now commonly served.

But what is a true cappuccino? As the Times points out, there is a debate about what it is:

There was a time when cappuccino was easy to identify. It was a shot of espresso with steamed milk and a meringue-like milk foam on top. … “In the U.S., cappuccino are small, medium and large, and that actually doesn’t exist,” the food and coffee writer Oliver Strand said. “Cappuccino is basically a four-ounce drink.” … Others cling to old-school notions of what makes a cappuccino, with the layering of ingredients as the main thing. “The goal is to serve three distinct layers: caffè, hot milk and frothy (not dense) foam,” the chef and writer Mario Batali wrote in an email. “But to drink it Italian style, it will be stirred so that the three stratum come together as one.”

I agree with Strand: a cappuccino should be a small drink, with the espresso, milk and foam in proportion. If you want a bigger drink, get a latte. And if you want a true cappuccino, find a good Italian establishment (in Toronto, Grano’s makes a superb one) and get your fix there.

For more on this, see: Is That Cappuccino You’re Drinking Really a Cappuccino? – The New York Times. The photo above is a link to that article.

Against gratitude and being grateful. Some thoughts from Barbara Ehrenreich and me

This piece, The Selfish Side of Gratitude – The New York Times, is a scathing attack on gratitude by Ehrenreich. She makes some good points, but the writing is so dismissive, from the references to yoga mats to the quotation marks hung on so many words, that I didn’t find it persuasive. No doubt some people abuse the notion of being grateful, but I think there is more to it than a form of evasion. Read it and see if you agree.

My criticism of gratitude is smaller. My problem with the notion is that it isn’t very useful for me: there are better words for what I feel, like glad or appreciative. Gratitude toward other people implies subservience. I do not look down on the people who provide me a service, nor should they think themselves somehow superior. Likewise, if I do something for you, I don’t expect you to be grateful: if you are appreciative, that’s enough. And gratitude for aspects of nature or the universe makes no sense if you are not religious.

There are people I am grateful to, but most of the time I can use other words to describe my feelings toward them and what they do. Grateful and gratitude are two words that should be used less often.