Monthly Archives: May 2016

More thoughts on Waze

I have thought a lot about Waze since I started using it. Without a doubt, it has improved my life substantially. Here are some other thoughts I had as I used it.

  1. Waze is an example of how software will eat the world. In this case, the world of GPS devices. Waze is a GPS on steroids. Not only will Waze do all the things that a GPS will do, but it does so much more, as you can see from this other Waze post I wrote. If you have a GPS, after you use Waze for a bit, you’ll likely stop using the GPS.
  2. Waze will change the way cities work. Cities are inefficient when it comes to transportation. Our work habits contribute to that: so many people commute at the same time, in the same direction, on the same routes, each work day. Waze and other new forms of adding intelligence to commuting will shape our work habits over time. Drivers taking advantage of quieter streets to reduce congestion on major thoroughfares is just the start. City planners could work with Waze to better understand travel patterns and travel behaviour and incorporate changes into the city so that traffic flows better (see the sketch after this list). It’s not that city planners don’t have such data; it’s that Waze likely has more and better data than they currently do.
  3. Waze is a great example of how A.I. could work. I have no idea how much A.I. is built into Waze. It could be none; it could be a lot. It makes intelligent recommendations to me, and that is all I care about. How it makes those recommendations is a black box. Developers of A.I. technologies should look at Waze as an example of how best to deploy A.I.: focus on how it can solve a problem for the user, and spend less time trying to make it seem human or overly intelligent. People don’t care about that. They care about practical applications of A.I. that make their lives better. Waze does that.
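On point 2, here is a minimal sketch of the kind of analysis a city planner could run on crowdsourced driving data: average the observed speeds per road segment per hour to find where and when congestion occurs. The data format, segment names, and numbers are all invented for illustration; Waze’s actual data feeds are not public in this form.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical crowdsourced pings: (road_segment, hour_of_day, speed_kmh).
pings = [
    ("Main St: 1st-2nd", 8, 12.0),
    ("Main St: 1st-2nd", 8, 15.0),
    ("Main St: 1st-2nd", 14, 42.0),
    ("Side Rd: Elm-Oak", 8, 38.0),
]

# Group observed speeds by (segment, hour).
speeds = defaultdict(list)
for segment, hour, speed in pings:
    speeds[(segment, hour)].append(speed)

# Average speed per segment per hour; low averages flag congestion,
# exactly the kind of pattern a planner could act on.
for (segment, hour), samples in sorted(speeds.items()):
    print(f"{segment} @ {hour:02d}:00 -> {mean(samples):.1f} km/h")
```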

Why I love Waze: 13 good reasons why it is my favourite app


Inspired, in a way, by this article, Why I hate Waze in the LA Times, I’d like to share some of the ways that Waze has made my life a lot better. 13 ways, in fact. There are more, but if this doesn’t convince drivers to use Waze, I don’t know what will.

  1. It saves me a lot of time: I used to take my son to hockey practice every week on a trip that took me 50 minutes. After talking to some other parents, I downloaded Waze. The result: my hockey commute went from 50 minutes to 25 minutes! Before Waze, I was stuck taking severely congested major roads because I didn’t know what else to do. Waze recommended roads close to the ones I was on that had no traffic problems. Over one year, I have saved hours of unnecessary commuting and saved on gas as well.
  2. It gets me to places on time: not only will Waze give you a fast route to travel, but it will tell you to the minute when you will get there. At first I didn’t think this was possible, but I was and continue to be amazed at how accurate it is (see the sketch after this list for why this is plausible). Now, before I go somewhere, I will put the destination in Waze and know when I will arrive. No more being too early or too late.
  3. It gives me options on how to get to a place: what I love about Waze is that it gives me three different routes to get to a place. It always recommends the fastest, but I like having the options. Sometimes it will recommend a road or a highway and I will think: I prefer to spend a bit more time and take a more scenic route.
  4. It gives me better times to travel if that is an option: Waze will also show me how long the route I want will take over the course of a day. A 40-minute route at 4 p.m. might be a 24-minute route at 7:30 p.m. If you can shift your travel time, you can save yourself some time on the commute, according to Waze. This is a great feature.
  5. It has made me a calmer driver: I used to get anxious when I would get stuck in traffic. I’d think: God! I am never getting out of this jam! With Waze, not only do I know how long it will take to get to a place, traffic jam or not, but Waze will tell me things like: you will be stuck in traffic for 6 minutes. Now I feel much more in control of my commutes. Plus, I always know the route I am taking is the best way to get to a place.
  6. It’s made me a more confident driver: one thing I didn’t like about Waze at first, but have come to appreciate, is that it often tells you to make left turns, sometimes on busy streets. I used to avoid these on my own and would instead turn at an intersection with lights. However, left-hand turns save time, and the more I make them, even on busy streets, the more I realize they are no big deal. You just have to be patient and wait for an opening in traffic. It will come, often in a few seconds.
  7. It has helped me know the city better: when traffic is busy, Waze will have you going down streets you might normally skip. As I have done this, I have been amazed at how many new streets in the city I have discovered. Now, even when I don’t use Waze, I know about these streets and that knowledge helps me get around my city better.
  8. It helps you in cities you don’t know: if you are driving into a city that you don’t know well, Waze is essential. I was driving into Montreal, which has busy streets that go in all different directions. With Waze I could just type in my hotel’s name and it gave me the route to get there. I had done this trip before Waze and it was a nightmare for me. With Waze it was easy.
  9. You don’t need to know addresses: that’s the other great thing about Waze. You can type in the name of a place and it will do a search and give you a list to choose from. It’s perfect for when you are out with people and they say: let’s meet up at restaurant XYZ. You can enter that in Waze and off you go.
  10. It is the perfect navigator for solo drivers: I used to write out directions to help me get to places. It was OK, but not easy, especially in new cities or on highways. Waze constantly tells you how long you have to travel on a road, when you can expect to turn, and then exactly where to turn. And if you miss a turn, it will recalculate on the fly and tell you how to get back to where you need to go.
  11. It is great at night: I travel to a lot of rinks at night in the winter. Many of them are down small, poorly marked roads. I would have a heck of a time without Waze. With Waze, it is dead simple to get to the rinks. Can’t see a road sign? Can’t see the rink set far back from the street? No problem: just follow the directions Waze gives you and you’ll get there.
  12. It gives you lots of time to turn: with Waze, you get lots of warning about when you have to turn. It will say: “In 1.2 kilometers, turn left… in 300 meters, turn left… turn left at Street X.” You never have to worry about being told to turn left at the last minute.
  13. You can be flexible: Waze will suggest the fastest route. However, sometimes I will be tired or not in a rush and will stick to a road I prefer driving down. Waze will quickly recalculate and make new recommendations, right up to the point I arrive at my destination.
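On point 2 above, a to-the-minute arrival time is less magical than it first seems: if you know the length of every segment on the route and the speed traffic is currently moving on each one, the estimate is just a sum. Here is a minimal sketch; the segments and speeds are invented, and Waze’s real model is certainly far more sophisticated, drawing on live reports and historical patterns.

```python
# Hypothetical route: (length_km, current_speed_kmh) per segment.
segments = [
    (2.5, 50.0),   # arterial road, flowing normally
    (1.2, 15.0),   # congested stretch
    (6.0, 90.0),   # highway
]

# ETA is the sum of each segment's travel time at its current speed.
eta_hours = sum(length / speed for length, speed in segments)
print(f"ETA: {eta_hours * 60:.0f} minutes")  # prints "ETA: 12 minutes"
```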

Nate Silver and Paul Krugman on the importance of good models to understand and predict

This piece by Nate Silver, How I Acted Like A Pundit And Screwed Up On Donald Trump in FiveThirtyEight, is ostensibly about how he messed up in his predictions of the rise of Donald Trump. What I think is worth reading is how he goes about his work and what he learned from his mistakes. Specifically, it’s a great study of how important models are, how a good model works, and what it can tell us.

Related, Paul Krugman talks about his model here: Economics and Self-Awareness in The New York Times. Like Silver, he uses models both to understand and predict. Obviously they are modelling different things, but in both cases good models are the basis of their thinking and the work they do.

It’s likely too much to ask now, but eventually anyone doing analysis and making predictions should have to disclose the models they are basing their conclusions upon. The opinions of anyone without such models are likely not worth much.

The beauty of when science and poetry intersect

According to a post by Clive Thompson,

Recently, two scientists got interested in the poem, because they realized these two facts could be used to determine precisely what time of year Sappho wrote the poem.

The poem, the post, and the work the scientists did are all great. Highly recommended. (Click on the link to the post for more details.)

The rich stay richer and the poor stay poorer (now with data to back this up)

I was impressed by this study of economic mobility over many generations in Florence: What’s your (sur)name? Intergenerational mobility over six centuries | VOX, CEPR’s Policy Portal. The authors make a good case that richer families stay richer and poorer families stay poorer regardless of the many other changes that occur in an area. To add to this, Vox reviews the study and also references a study done in Sweden that finds something similar (Today’s rich families in Florence, Italy, were rich 700 years ago – Vox).

It’s depressing, but not surprising to me. I suspect that while individuals may rise and fall in terms of economic mobility, specific families work to ensure that the wealth they acquire is maintained through marriage and inheritance. Worse, conditions for poorer families are such that they can never acquire enough wealth to move from the lower percentiles to higher ones.

What drives A.I. development? Better data

This article, Datasets Over Algorithms — Space Machine, makes a good point, namely

…perhaps many major AI breakthroughs have actually been constrained by the availability of high-quality training datasets, and not by algorithmic advances.

The chart they provide illustrates the point well.

I’d argue that it isn’t solely datasets that drive A.I. breakthroughs. Better CPUs, improved storage technology, and of course new ideas can also propel A.I. forward. But if you ask me, A.I. will need better data to make its next big advances.

The Real Bias Built In at Facebook <- another bad I.T. story in the New York Times (and my criticism of it)

There is so much wrong in this article, The Real Bias Built In at Facebook – The New York Times, that I decided to take it apart in this blog post. (I’ve read so many bad IT stories in the Times that I stopped critiquing them after a while, but this one in particular bugged me enough to write something.)

To illustrate what I mean by what is wrong with this piece, here are some excerpts in quotes, each followed by my thoughts.

  • First off, there is the use of the word “algorithm” everywhere. That alone is a problem. For an example of why that is bad, see section 2.4 of Paul Ford’s great piece on software, What is Code? As Ford explains: “‘Algorithm’ is a word writers invoke to sound smart about technology. Journalists tend to talk about ‘Facebook’s algorithm’ or a ‘Google algorithm,’ which is usually inaccurate. They mean ‘software.’” Now, part of the problem is that Google and Facebook talk about their algorithms, but really they are talking about their software, which will incorporate many algorithms (see the first sketch after this list). For example, Google does it here: https://webmasters.googleblog.com/2011/05/more-guidance-on-building-high-quality.html. At least Google talks about algorithms, plural, not a single algorithm. Either way, talking about algorithms is misleading. It’s software, not algorithms, and if you can’t see the difference, that is a good indication you should not be writing think pieces about I.T.
  • Then there is this quote: “Algorithms in human affairs are generally complex computer programs that crunch data and perform computations to optimize outcomes chosen by programmers. Such an algorithm isn’t some pure sifting mechanism, spitting out objective answers in response to scientific calculations. Nor is it a mere reflection of the desires of the programmers. We use these algorithms to explore questions that have no right answer to begin with, so we don’t even have a straightforward way to calibrate or correct them.” What does that even mean? To me, it implies that any software that is socially oriented (as opposed to, say, banking software or airline travel software) is imprecise or unpredictable. But at best that is only slightly true, and mainly false. Facebook and Google both want to give you relevant answers. If you start typing “restaurants” or some other kind of place into Google’s search box, Google will start suggesting answers to you. These answers will very likely be relevant to you. It is important for Google that this happens, because this is how they make money from advertisers. They have a way of calibrating and correcting this. In fact, I am certain they spend a lot of resources making sure you get the correct answer, or close to it. Facebook is the same way. The results you get back are not random. They are designed, built, and tested to be relevant to you. The more relevant they are, the more successful these companies are. The responses are generally the right ones.
  • “If Google shows you these 11 results instead of those 11, or if a hiring algorithm puts this person’s résumé at the top of a file and not that one, who is to definitively say what is correct, and what is wrong?” Actually, Google can say; they just don’t. It’s not in their business interest to explain in detail how their software works. They do explain generally, in order to help people ensure their sites stay relevant (see the link I provided above). But if they provided too much detail, bad sites would game the rankings and make Google search results worse for everyone. As well, too much detail would make it easier for other search engines – yes, they still exist – to compete with them.
  • “Without laws of nature to anchor them, algorithms used in such subjective decision making can never be truly neutral, objective or scientific.” This is simply nonsense. Such software is calibrated against measurable goals, like relevance or engagement, and can be rigorously tested against those goals; the fact that people choose the goals does not make the software unscientific.
  • “Programmers do not, and often cannot, predict what their complex programs will do.” Also untrue. If this were true, then IBM could not improve Watson to be more accurate, and Google’s sales reps could not convince ad buyers that it is worth their money to pay Google to show their ads. The same goes for Facebook, Twitter, and any web site that depends on advertising as a revenue stream.
  • “Google’s Internet services are billions of lines of code.” So what? How is this a measure of complexity? I’ve seen small amounts of poorly maintained code that were very hard to understand, and large amounts of well-maintained code that were very simple to understand.
  • “Once these algorithms with an enormous number of moving parts are set loose, they then interact with the world, and learn and react. The consequences aren’t easily predictable. Our computational methods are also getting more enigmatic. Machine learning is a rapidly spreading technique that allows computers to independently learn to learn — almost as we do as humans — by churning through the copious disorganized data, including data we generate in digital environments. However, while we now know how to make machines learn, we don’t really know what exact knowledge they have gained. If we did, we wouldn’t need them to learn things themselves: We’d just program the method directly.” This is just a cluster of ideas slammed together, a word sandwich of layered phrases that says nothing. It makes it sound like A.I. has been unleashed upon the world and we are helpless to do anything about it. That’s ridiculous. It’s vague enough that it is hard to dispute without explaining in detail how A.I. and machine learning work, yet it sounds knowledgeable enough that many people will think it has greater meaning.
  • “With algorithms, we don’t have an engineering breakthrough that’s making life more precise, but billions of semi-savant mini-Frankensteins, often with narrow but deep expertise that we no longer understand, spitting out answers here and there to questions we can’t judge just by numbers, all under the cloak of objectivity and science.” This is just scaremongering.
  • “If these algorithms are not scientifically computing answers to questions with objective right answers, what are they doing? Mostly, they ‘optimize’ output to parameters the company chooses, crucially, under conditions also shaped by the company. On Facebook the goal is to maximize the amount of engagement you have with the site and keep the site ad-friendly. You can easily click on ‘like,’ for example, but there is not yet a ‘this was a challenging but important story’ button. This setup, rather than the hidden personal beliefs of programmers, is where the thorny biases creep into algorithms, and that’s why it’s perfectly plausible for Facebook’s work force to be liberal, and yet for the site to be a powerful conduit for conservative ideas as well as conspiracy theories and hoaxes — along with upbeat stories and weighty debates. Indeed, on Facebook, Donald J. Trump fares better than any other candidate, and anti-vaccination theories like those peddled by Mr. Beck easily go viral. The newsfeed algorithm also values comments and sharing. All this suits content designed to generate either a sense of oversize delight or righteous outrage and go viral, hoaxes and conspiracies as well as baby pictures, happy announcements (that can be liked) and important news and discussions.” This is the one thing in the piece that I agreed with, and it points to the real challenge with Facebook’s software. I think the software IS neutral, in that it is not interested in the content per se so much as in how the user is responding or not responding to it. What is NOT neutral is the data it is working from. Facebook’s software is as susceptible to GIGO (garbage in, garbage out) as any other software. So if you have a lot of people on Facebook sending around cat pictures and stupid things some politicians are saying, people are going to respond to it, and Facebook’s software is going to respond to that response (see the second sketch after this list).
  • “Facebook’s own research shows that the choices its algorithm makes can influence people’s mood and even affect elections by shaping turnout. For example, in August 2014, my analysis found that Facebook’s newsfeed algorithm largely buried news of protests over the killing of Michael Brown by a police officer in Ferguson, Mo., probably because the story was certainly not ‘like’-able and even hard to comment on. Without likes or comments, the algorithm showed Ferguson posts to fewer people, generating even fewer likes in a spiral of algorithmic silence. The story seemed to break through only after many people expressed outrage on the algorithmically unfiltered Twitter platform, finally forcing the news to national prominence.” Also true. Additionally, Facebook got into trouble for the research showing their software can manipulate people by… manipulating people in experiments on them! It was dumb, unethical, and possibly illegal.
  • “Software giants would like us to believe their algorithms are objective and neutral, so they can avoid responsibility for their enormous power as gatekeepers while maintaining as large an audience as possible.” Well, not exactly. It’s true that Facebook and Twitter are flirting with the notion of becoming more like news organizations, but I don’t think they have decided whether they should make the leap. Mostly what they are focused on are channels that allow them to gain greater audiences for their ads, with few if any restrictions.
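To make the software-versus-algorithm point concrete, here is a toy sketch. It is not anything Google or Facebook actually runs; it just shows that even a trivial “search” program already strings together several distinct algorithms, which is why “the algorithm” is misleading shorthand for a whole software system.

```python
# Even this toy search engine is "software incorporating many algorithms".

def tokenize(text):
    # Algorithm 1: normalization and tokenization.
    return text.lower().split()

def score(query_tokens, doc_tokens):
    # Algorithm 2: term-frequency matching.
    return sum(doc_tokens.count(t) for t in query_tokens)

def search(query, documents):
    # Algorithm 3: ranking by sorted score.
    q = tokenize(query)
    return sorted(documents, key=lambda d: score(q, tokenize(d)), reverse=True)

docs = ["waze saves time", "gps devices", "waze and gps and traffic"]
print(search("waze gps", docs))  # the most relevant document comes first
```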
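And to make the engagement point concrete, here is a minimal sketch of a content-neutral ranker of the kind the quoted passage describes. The posts and weights are invented, since Facebook’s actual newsfeed scoring is not public. Note that the ranker never looks at what a post says, only at how people reacted to it, which is exactly how garbage in becomes garbage out.

```python
posts = [
    {"text": "local council budget analysis", "likes": 4,    "comments": 2,   "shares": 1},
    {"text": "outrageous viral hoax",         "likes": 900,  "comments": 350, "shares": 400},
    {"text": "cat picture",                   "likes": 1200, "comments": 80,  "shares": 150},
]

def engagement(post, w_like=1.0, w_comment=4.0, w_share=8.0):
    # Hypothetical weights; comments and shares count for more than likes,
    # since the article notes the newsfeed "also values comments and sharing".
    return w_like * post["likes"] + w_comment * post["comments"] + w_share * post["shares"]

# Rank purely by reaction counts -- the hoax outranks the budget analysis.
for post in sorted(posts, key=engagement, reverse=True):
    print(f'{engagement(post):7.0f}  {post["text"]}')
```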

In short, like many of the IT think pieces I have seen in the Times, it is filled with wrongheaded generalities and overstatements, along with a few concrete examples buried somewhere in the piece that were likely the thing that prompted the piece in the first place. Terrible.