Category Archives: ideas

Palo Alto vs. Tokyo: some modest thoughts on housing 

Image of Palo Alto linked to from Wikipedia

Two pieces on housing got me thinking about housing policy and what, if anything, can be done to improve it. The two pieces are these:

  1. Letter of Resignation from the Palo Alto Planning and Transportation Commission — Medium
  2. Tokyo may have found the solution to soaring housing costs – Vox

(Note: I don’t have much expertise on housing policy. These are just some notes I jotted down after thinking about these pieces. Take the following with a (huge?) grain of salt.)

The first piece describes how housing in Palo Alto, California is becoming too expensive for all but the rich. Part of the cause is the limits placed on adding new housing in the area. The second piece describes how Tokyo gets around this, namely by removing decisions about housing from city politics and making them at the national level.

It seems pretty straightforward, then: all cities should remove decision-making about housing from the local level and assign it to a body at the national level. But is that true? And would it work in North American cities?

It depends on what you expect your housing policy to be and how effectively you can impose it. If the policy is to have affordable and available housing for a city, then the Tokyo model makes sense. However, it assumes that decisions made at a national level will be in line with the desires of the city’s residents. This is a big assumption.

There are at least two sets of desires that homeowners have for their homes and their city. One, that their homes and the neighbourhood they live in remain stable or improve. Two, that their homes appreciate in value. The first desire could be wrecked by the Tokyo model. The second would definitely be affected by it. In cities like Palo Alto, both sets of desires are met, at least in the short term. In the longer term, the second could level off as people and industry move elsewhere.

The ideal is to have a national policy that takes into account the need for neighbourhoods to grow organically, for house values to appreciate over time while still allowing for affordability, and for cities to add new housing as well as account for neighbourhoods that become depopulated. Such a policy would support vibrant cities at a national level. You would treat cities as a network of systems, and you would allocate or remove resources over time to keep all cities vibrant, regardless of whether they are growing or declining.

This is the ideal. Practically, I just can’t see it happening in North American cities. North Americans are too strongly capitalist to allow what is happening in Japan to happen here. If national organizations tried too hard to manage cities and in doing so cooled off housing markets, people would oppose it. For many people, their house is their chief asset, and any effort to restrict its appreciation would be met with defiance.

Sadly, I think there are going to have to be many failures within cities such as Palo Alto and San Francisco before there is enough political will to change the way housing is managed. I think the Tokyo/Japan model is out of reach for my continent for decades to come.

It’s unfortunate: you have cities in the U.S. rust belt suffering great decline, while cities on the coasts struggle to come to terms with growth. A national policy on housing would help all cities and bring greater benefits to people than the current approach.

I like Palo Alto. It’s a great city, in a great region. I think it would be greater still if it had more housing.

A great primer on self-driving trucks that everyone should read. (Really!)

This piece, 1.8 million American truck drivers could lose their jobs to robots. What then? (Vox), is a great primer on self-driving trucks and how they are going to have a major impact sooner rather than later.

If you are interested in IT, AI or robots, it really shows one of the places where this technology is going to have a significant impact.

If you are interested in economics, politics, or sociology, then the effect of robots replacing all these truck drivers is definitely something you want to be aware of.

If you drive on highways, you definitely want to know about it.

In any case, it’s a good piece by David Roberts. This is his beat, and I find he always does a great job of breaking down a topic like this, making it easier to understand and relevant to me. I recommend any of his pieces.

The Superbook, the decline of the Personal Computer, and the future of computing

Superbook, a $99 computer project on Kickstarter, is impressive in itself. Judging by the backing the project has received, many agree with me.

Essentially it extends your phone the way a smartwatch does, but instead of shrinking the form factor, it enlarges it. In some ways it does what Chromebooks do, but using your phone. If it works well, it is one more nail in the coffin of the personal computer. Tablets and other devices have already distributed computing away from the personal computer, and I can only see this trend accelerating as displays, memory, and CPUs get better. Sooner rather than later, the attachment of the display to the keyboard will dissolve, and people will assemble “personal computers” from a variety of tablets and other displays, keyboards, and whatever smartphones they have. The next step is better designed, detachable keyboards, along with more powerful phones. (The phone isn’t really a phone anyway: it’s a handheld computer with built-in telephony capability.)

Networks are going to become more pervasive, faster, and cheaper. Displays are going to become cheaper. Phone makers are going to need to give you more reasons to buy phones. All of these things point to computing devices like this becoming more prevalent and personal computers being further and further displaced.

You can find out more about the project here.

What drives A.I. development? Better data

This article, Datasets Over Algorithms — Space Machine, makes a good point, namely:

…perhaps many major AI breakthroughs have actually been constrained by the availability of high-quality training datasets, and not by algorithmic advances.

The chart they provide illustrates the point well.

I’d argue that it isn’t solely datasets that drive A.I. breakthroughs. Better CPUs, improved storage technology, and of course new ideas can also propel A.I. forward. But if you asked me now, I’d say that A.I. will need better data to make big advances in the future.
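The point about data is easy to demonstrate with even the crudest method. Here is a toy sketch of my own (not from the article): a 1-nearest-neighbour classifier on synthetic two-class data, where the algorithm never changes and only the amount of training data grows. Accuracy generally climbs with dataset size.

```python
import random

random.seed(0)

def make_point(label):
    # Two synthetic classes: noisy points centred at 0.0 and 4.0.
    centre = 0.0 if label == 0 else 4.0
    return (random.gauss(centre, 1.0), label)

def nearest_neighbour_accuracy(n_train, n_test=500):
    """Accuracy of a 1-nearest-neighbour classifier trained on n_train points."""
    train = [make_point(i % 2) for i in range(n_train)]
    test = [make_point(i % 2) for i in range(n_test)]
    correct = 0
    for x, label in test:
        # Predict the label of the closest training point.
        predicted = min(train, key=lambda t: abs(t[0] - x))[1]
        correct += predicted == label
    return correct / n_test

# Same algorithm throughout; only the dataset grows.
for n in (4, 40, 400):
    print(n, nearest_neighbour_accuracy(n))
```

A real breakthrough involves far richer data and models, of course, but the shape of the argument is the same: holding the algorithm fixed, better and bigger data is what moves the needle.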

The Real Bias Built in at Facebook <- another bad I.T. story in the New York Times (and my criticism of it)

There is so much wrong in this article, The Real Bias Built In at Facebook – The New York Times, that I decided to take it apart in this blog post. (I’ve read so many bad IT stories in the Times that I stopped critiquing them after a while, but this one in particular bugged me enough to write something.)

To illustrate what I mean by what is wrong with this piece, here are some excerpts in italics followed by my thoughts in non-italics.

  • First off, there is the use of the word “algorithm” everywhere. That alone is a problem. For an example of why that is bad, see section 2.4 of Paul Ford’s great piece on software, What is Code? As Ford explains: “‘Algorithm’ is a word writers invoke to sound smart about technology. Journalists tend to talk about ‘Facebook’s algorithm’ or a ‘Google algorithm,’ which is usually inaccurate. They mean ‘software.’” Now, part of the problem is that Google and Facebook talk about their algorithms, but really they are talking about their software, which will incorporate many algorithms. For example, Google does it here: https://webmasters.googleblog.com/2011/05/more-guidance-on-building-high-quality.html. At least Google talks about algorithms, plural. Either way, talking about algorithms is bad. It’s software, not algorithms, and if you can’t see the difference, that is a good indication you should not be writing think pieces about I.T.
  • Then there is this quote: “Algorithms in human affairs are generally complex computer programs that crunch data and perform computations to optimize outcomes chosen by programmers. Such an algorithm isn’t some pure sifting mechanism, spitting out objective answers in response to scientific calculations. Nor is it a mere reflection of the desires of the programmers. We use these algorithms to explore questions that have no right answer to begin with, so we don’t even have a straightforward way to calibrate or correct them.” What does that even mean? To me, it implies that any software that is socially oriented (as opposed to, say, banking software or airline travel software) is imprecise or unpredictable. But at best that is only slightly true, and mainly false. Facebook and Google both want to give you relevant answers. If you start typing “restaurants” or some other term into the Google search box, Google will start suggesting answers, and those answers will very likely be relevant to you. It is important for Google that this happens, because this is how it makes money from advertisers. Google has a way of calibrating and correcting this; in fact, I am certain it spends a lot of resources making sure you get the correct answer, or close to it. Facebook is the same way. The results you get back are not random. They are designed, built, and tested to be relevant to you. The more relevant they are, the more successful these companies are. The responses are generally the right ones.
  • “If Google shows you these 11 results instead of those 11, or if a hiring algorithm puts this person’s résumé at the top of a file and not that one, who is to definitively say what is correct, and what is wrong?” Actually, Google can say; they just don’t. It’s not in their business interest to explain in detail how their software works. They do explain generally, in order to help people ensure their sites stay relevant (see the link I provided above). But if they provided too much detail, bad sites would game the system and make Google search results worse for everyone. It would also make it easier for other search engine sites – yes, they still exist – to compete with them.
  • “Without laws of nature to anchor them, algorithms used in such subjective decision making can never be truly neutral, objective or scientific.” This is simply nonsense.
  • “Programmers do not, and often cannot, predict what their complex programs will do.” Also untrue. If this were true, IBM could not improve Watson to be more accurate, and Google’s sales reps could not convince ad buyers that it is worth their money to pay Google to show their ads. The same goes for Facebook, Twitter, and any website dependent on advertising as a revenue stream.
  • “Google’s Internet services are billions of lines of code.” So what? How is that a measure of complexity? I’ve seen small amounts of poorly maintained code that were very hard to understand, and large amounts of well-maintained code that were very simple to understand.
  • “Once these algorithms with an enormous number of moving parts are set loose, they then interact with the world, and learn and react. The consequences aren’t easily predictable. Our computational methods are also getting more enigmatic. Machine learning is a rapidly spreading technique that allows computers to independently learn to learn — almost as we do as humans — by churning through the copious disorganized data, including data we generate in digital environments. However, while we now know how to make machines learn, we don’t really know what exact knowledge they have gained. If we did, we wouldn’t need them to learn things themselves: We’d just program the method directly.” This is just a cluster of ideas slammed together, a word sandwich: layers of phrases that say nothing. It makes it sound as if AI has been unleashed upon the world and we are helpless to do anything about it. That’s ridiculous. It’s also vague enough to be hard to dispute without going into detail about how A.I. and machine learning work, yet it sounds knowledgeable enough that many people will think it has greater meaning.
  • “With algorithms, we don’t have an engineering breakthrough that’s making life more precise, but billions of semi-savant mini-Frankensteins, often with narrow but deep expertise that we no longer understand, spitting out answers here and there to questions we can’t judge just by numbers, all under the cloak of objectivity and science.” This is just scaremongering.
  • “If these algorithms are not scientifically computing answers to questions with objective right answers, what are they doing? Mostly, they ‘optimize’ output to parameters the company chooses, crucially, under conditions also shaped by the company. On Facebook the goal is to maximize the amount of engagement you have with the site and keep the site ad-friendly. You can easily click on ‘like,’ for example, but there is not yet a ‘this was a challenging but important story’ button. This setup, rather than the hidden personal beliefs of programmers, is where the thorny biases creep into algorithms, and that’s why it’s perfectly plausible for Facebook’s work force to be liberal, and yet for the site to be a powerful conduit for conservative ideas as well as conspiracy theories and hoaxes — along with upbeat stories and weighty debates. Indeed, on Facebook, Donald J. Trump fares better than any other candidate, and anti-vaccination theories like those peddled by Mr. Beck easily go viral. The newsfeed algorithm also values comments and sharing. All this suits content designed to generate either a sense of oversize delight or righteous outrage and go viral, hoaxes and conspiracies as well as baby pictures, happy announcements (that can be liked) and important news and discussions.” This is the one thing in the piece I agreed with, and it points to the real challenge with Facebook’s software. I think the software IS neutral, in that it is not interested in the content per se so much as in how users respond (or don’t respond) to it. What is NOT neutral is the data it works from. Facebook’s software is as susceptible to GIGO (garbage in, garbage out) as any other software. So if a lot of people on Facebook are sending around cat pictures and stupid things some politician said, people are going to respond to it, and Facebook’s software is going to respond to that response.
  • “Facebook’s own research shows that the choices its algorithm makes can influence people’s mood and even affect elections by shaping turnout. For example, in August 2014, my analysis found that Facebook’s newsfeed algorithm largely buried news of protests over the killing of Michael Brown by a police officer in Ferguson, Mo., probably because the story was certainly not ‘like’-able and even hard to comment on. Without likes or comments, the algorithm showed Ferguson posts to fewer people, generating even fewer likes in a spiral of algorithmic silence. The story seemed to break through only after many people expressed outrage on the algorithmically unfiltered Twitter platform, finally forcing the news to national prominence.” Also true. Additionally, Facebook got into trouble for the research showing its software can manipulate people by… manipulating people in experiments on them! It was dumb, unethical, and possibly illegal.
  • “Software giants would like us to believe their algorithms are objective and neutral, so they can avoid responsibility for their enormous power as gatekeepers while maintaining as large an audience as possible.” Well, not exactly. It’s true that Facebook and Twitter are flirting with the notion of becoming more like news organizations, but I don’t think they have decided whether they should make the leap. Mostly they are focused on being channels that allow them to gain greater audiences for their ads with few if any restrictions.
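One way to see the GIGO point concretely is to reduce a feed ranker to its crudest possible form. This toy scorer is entirely my own invention (the weights and posts are made up, and this is not Facebook’s actual formula): it never inspects what a post says, only how people have reacted to it, yet it systematically buries a hard-to-like news story beneath viral material.

```python
# Hypothetical, content-blind feed ranker: it scores a post purely on
# reader reactions, never on what the post actually says.
def engagement_score(post):
    # Made-up weights: comments and shares count for more than likes.
    return post["likes"] + 2 * post["comments"] + 3 * post["shares"]

posts = [
    {"title": "Cat picture",           "likes": 900, "comments": 50,  "shares": 200},
    {"title": "Outrage-bait hoax",     "likes": 400, "comments": 300, "shares": 350},
    {"title": "Ferguson protest news", "likes": 30,  "comments": 5,   "shares": 10},
]

# Feed order is purely a function of engagement: the "hard to like" story
# sinks, which in turn earns it even less engagement -- the spiral.
feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["title"], engagement_score(post))
```

A real ranker is vastly more elaborate, but the bias works the same way: any scorer fed only engagement signals will amplify whatever its users already engage with, garbage included.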

In short, like many of the IT think pieces I have seen in the Times, it is filled with wrong-headed generalities and overstatements, plus a few concrete examples buried somewhere in the piece that were likely the thing that generated the idea to write it in the first place. Terrible.

Glitches as a design pattern for fabric

The good folks at Glitchaus have taken an oddity of the digital world – glitches – and used it as the basis of their designs of scarves and wraps. If you are in need of either, or you’d just like to see some innovative fashion, it’s worth visiting their site.

Houses aren’t homes: they’re capital

And in the richest cities, like London, they are greatly appreciating capital, as this chart shows:

Chart showing house prices greatly appreciating in cities like London

With some reflection, this makes sense, if you take as a given that:

  • Stocks, bonds, and even wages are fairly stagnant in terms of return on investment
  • Urbanization means homes in cities that are desirable to live in are becoming more scarce

The result is that homes have become one of the few forms of capital with the means to greatly appreciate in value.
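A bit of toy arithmetic shows why this matters. The rates below are made up purely for illustration: compound a home in a hot market at 7% a year against savings crawling along at 1%, and the gap after a decade is striking.

```python
def grow(principal, rate, years):
    # Simple compound growth: principal * (1 + rate)^years.
    return principal * (1 + rate) ** years

# Illustrative rates only: 7%/yr for a hot-market home, 1%/yr for savings.
home = grow(500_000, 0.07, 10)
savings = grow(500_000, 0.01, 10)
print(round(home), round(savings))  # the home ends up nearly 80% ahead
```

Against stagnant returns everywhere else, even a few points of annual appreciation make the house the investment that dominates the household balance sheet.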

To reverse this will require a greater supply of homes on the market, either through greater density in desirable cities or through more cities becoming desirable to live in. I can see both of these occurring. What I don’t see occurring is other forms of capital becoming more capable of great growth.

It will be interesting to see what happens in 10 years. But right now, bet on homes in key cities continuing to appreciate.