Tag Archives: Facebook

Facebook shows why we need augmented intelligence (Artificial and Human Intelligence)

Because if you don’t have augmented intelligence, and instead depend solely on AI-like software, you get problems like this, where automated software triggers an event that a trained human might have caught.

AI and ML (machine learning) can be highly probabilistic and are limited to the information they were trained on. Having a human involved makes up for those limits, just as AI makes up for the limits of a human by processing far more information, far faster, than any person can.
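To make that concrete, here is a minimal sketch of the human-in-the-loop idea. It is not any particular vendor's system; the classifier, the confidence threshold, and the reviewer are all hypothetical, just to show the shape of "automate the easy calls, escalate the hard ones":

```python
# Minimal sketch of "augmented intelligence": act on the model's answer only
# when it is confident, and route low-confidence cases to a trained human.
# The classifier, threshold, and reviewer below are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float

def augmented_decision(item: str,
                       classify: Callable[[str], Decision],
                       human_review: Callable[[str], str],
                       threshold: float = 0.9) -> str:
    """Automated path when the model is confident; human path otherwise."""
    decision = classify(item)
    if decision.confidence >= threshold:
        return decision.label          # fast, automated path
    return human_review(item)          # slower, human path for hard cases

if __name__ == "__main__":
    fake_model = lambda text: Decision("approve", 0.62)   # not confident
    fake_human = lambda text: "reject"                    # human catches it
    print(augmented_decision("suspicious ad copy", fake_model, fake_human))
```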

See the link to the New York Times story for an example of what I mean.


How to stop WhatsApp from sharing information with Facebook

Instructions on how to stop WhatsApp from sharing information with Facebook are here.

Facebook owns WhatsApp. I expect this simple opt-out may not be so simple in the months and years to come; you may have to make a harder choice then when it comes to privacy on WhatsApp. In the meantime, you can follow those instructions to maintain the separation between your WhatsApp data and your Facebook data.

A sign of the times: Adblock blocks Facebook. Facebook circumvents Adblock. Now Adblock circumvents Facebook.

No doubt this game of cat and mouse will go on for some time. For Adblock to prosper, it needs to block ads on Facebook; likewise, for Facebook there is too much money at stake to let Adblock block its ads. For details on this, see: Adblock Plus and (a little) more: FB reblock: ad-blocking community finds workaround to Facebook

One thing is for sure: developers on both sides will be pushing out changes on a regular basis as this battle heats up.
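To see why this is such a moving target, here is a toy sketch in Python. The markers and rules below are made up for illustration (they are not Facebook's actual markup or Adblock Plus's real filter syntax), but they show the basic dynamic: the blocker matches whatever currently identifies a sponsored post, and the site only has to change that marker to slip past until new rules ship.

```python
# Toy sketch of the ad-blocking cat-and-mouse game.
# The markers below are hypothetical, not Facebook's real markup.
import re

BLOCK_RULES = [
    re.compile(r'data-ad="true"'),         # hypothetical rule, generation 1
    re.compile(r'class="[^"]*sponsored'),  # hypothetical rule, generation 2
]

def is_blocked(post_html: str) -> bool:
    """Return True if any current filter rule matches the post's markup."""
    return any(rule.search(post_html) for rule in BLOCK_RULES)

old_ad = '<div class="post sponsored" data-ad="true">Buy now</div>'
new_ad = '<div class="post x7f3q">Buy now</div>'  # same ad, markers obfuscated

print(is_blocked(old_ad))  # True  - the existing rules still catch it
print(is_blocked(new_ad))  # False - until the blocker ships new rules
```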

Of course, behind such tactics, the deeper questions are left unresolved, questions around business models and the viability of services without access to advertising revenue.

The Real Bias Built In at Facebook <- another bad I.T. story in the New York Times (and my criticism of it)

There is so much wrong in this article, The Real Bias Built In at Facebook – The New York Times, that I decided to take it apart in this blog post. (I’ve read so many bad IT stories in the Times that I stopped critiquing them after a while, but this one in particular bugged me enough to write something).

To illustrate what I mean, here are some excerpts from the piece, each followed by my thoughts.

  • First off, there is the use of the word “algorithm” everywhere. That alone is a problem. For an example of why that is bad, see section 2.4 of Paul Ford’s great piece on software, What is Code? As Ford explains: “‘Algorithm’ is a word writers invoke to sound smart about technology. Journalists tend to talk about ‘Facebook’s algorithm’ or a ‘Google algorithm,’ which is usually inaccurate. They mean ‘software.’” Now part of the problem is that Google and Facebook talk about their algorithms, but really they are talking about their software, which incorporates many algorithms. For example, Google does it here: https://webmasters.googleblog.com/2011/05/more-guidance-on-building-high-quality.html At least Google talks about algorithms, plural, not a single algorithm. Either way, talking about algorithms is misleading. It’s software, not algorithms, and if you can’t see the difference, that is a good indication you should not be writing think pieces about I.T.
  • Then there is this quote: “Algorithms in human affairs are generally complex computer programs that crunch data and perform computations to optimize outcomes chosen by programmers. Such an algorithm isn’t some pure sifting mechanism, spitting out objective answers in response to scientific calculations. Nor is it a mere reflection of the desires of the programmers. We use these algorithms to explore questions that have no right answer to begin with, so we don’t even have a straightforward way to calibrate or correct them.” What does that even mean? I think it implies that any software that is socially oriented (as opposed to, say, banking software or airline travel software) is imprecise or unpredictable. But at best that is only slightly true, and mainly it is false. Facebook and Google both want to give you relevant answers. If you start typing “restaurants” or some other kind of facility into the Google search box, Google will start suggesting answers to you, and those answers will very likely be relevant to you. It is important to Google that this happens, because this is how they make money from advertisers. They have ways of calibrating and correcting this; in fact I am certain they spend a lot of resources making sure you get the correct answer, or close to it. Facebook is the same way. The results you get back are not random. They are designed, built and tested to be relevant to you. The more relevant they are, the more successful these companies are. The responses are generally the right ones.
  • “If Google shows you these 11 results instead of those 11, or if a hiring algorithm puts this person’s résumé at the top of a file and not that one, who is to definitively say what is correct, and what is wrong?” Actually, Google can say; they just don’t. It’s not in their business interest to explain in detail how their software works. They do explain generally, in order to help people ensure their sites stay relevant (see the link I provided above). But if they provided too much detail, bad actors would game their rankings and make Google search results worse for everyone. As well, if they provided too much detail, they would make it easier for other search engine sites – yes, they still exist – to compete with them.
  • “Without laws of nature to anchor them, algorithms used in such subjective decision making can never be truly neutral, objective or scientific.” This is simply nonsense.
  • “Programmers do not, and often cannot, predict what their complex programs will do.” Also untrue. If this were true, then IBM could not improve Watson to be more accurate, and Google could not have its sales reps convince ad buyers that it is worth paying Google to show their ads. The same goes for Facebook, Twitter, and any other website that depends on advertising as a revenue stream.
  • “Google’s Internet services are billions of lines of code.” So what? How is that a measure of complexity? I’ve seen small amounts of poorly maintained code that were very hard to understand, and large amounts of well-maintained code that were very simple to understand.
  • “Once these algorithms with an enormous number of moving parts are set loose, they then interact with the world, and learn and react. The consequences aren’t easily predictable. Our computational methods are also getting more enigmatic. Machine learning is a rapidly spreading technique that allows computers to independently learn to learn — almost as we do as humans — by churning through the copious disorganized data, including data we generate in digital environments. However, while we now know how to make machines learn, we don’t really know what exact knowledge they have gained. If we did, we wouldn’t need them to learn things themselves: We’d just program the method directly.” This is just a cluster of ideas slammed together, a word sandwich of layered phrases that don’t actually say anything. It makes it sound like AI has been unleashed upon the world and we are helpless to do anything about it, which is ridiculous. It is also vague enough that it is hard to dispute without going into detail about how A.I. and machine learning work, yet it sounds knowledgeable enough that many people will think it has greater meaning.
  • “With algorithms, we don’t have an engineering breakthrough that’s making life more precise, but billions of semi-savant mini-Frankensteins, often with narrow but deep expertise that we no longer understand, spitting out answers here and there to questions we can’t judge just by numbers, all under the cloak of objectivity and science.” This is just scaremongering.
  • “If these algorithms are not scientifically computing answers to questions with objective right answers, what are they doing? Mostly, they “optimize” output to parameters the company chooses, crucially, under conditions also shaped by the company. On Facebook the goal is to maximize the amount of engagement you have with the site and keep the site ad-friendly. You can easily click on “like,” for example, but there is not yet a “this was a challenging but important story” button. This setup, rather than the hidden personal beliefs of programmers, is where the thorny biases creep into algorithms, and that’s why it’s perfectly plausible for Facebook’s work force to be liberal, and yet for the site to be a powerful conduit for conservative ideas as well as conspiracy theories and hoaxes — along with upbeat stories and weighty debates. Indeed, on Facebook, Donald J. Trump fares better than any other candidate, and anti-vaccination theories like those peddled by Mr. Beck easily go viral. The newsfeed algorithm also values comments and sharing. All this suits content designed to generate either a sense of oversize delight or righteous outrage and go viral, hoaxes and conspiracies as well as baby pictures, happy announcements (that can be liked) and important news and discussions.” This is the one thing in the piece I agreed with, and it points to the real challenge with Facebook’s software. I think the software IS neutral, in that it is not interested in the content per se so much as in how users respond (or don’t respond) to it. What is NOT neutral is the data it is working from. Facebook’s software is as susceptible to GIGO (garbage in, garbage out) as any other software. So if a lot of people on Facebook are sending around cat pictures and the stupid things some politicians say, people are going to respond to that, and Facebook’s software is going to respond to their response. (See the sketch after this list for a toy illustration of this kind of engagement-driven ranking.)
  • “Facebook’s own research shows that the choices its algorithm makes can influence people’s mood and even affect elections by shaping turnout. For example, in August 2014, my analysis found that Facebook’s newsfeed algorithm largely buried news of protests over the killing of Michael Brown by a police officer in Ferguson, Mo., probably because the story was certainly not “like”-able and even hard to comment on. Without likes or comments, the algorithm showed Ferguson posts to fewer people, generating even fewer likes in a spiral of algorithmic silence. The story seemed to break through only after many people expressed outrage on the algorithmically unfiltered Twitter platform, finally forcing the news to national prominence.” Also true. Additionally, Facebook got into trouble for the research it did showing that its software can manipulate people by… manipulating people in experiments on them! It was dumb, unethical, and possibly illegal.
  • “Software giants would like us to believe their algorithms are objective and neutral, so they can avoid responsibility for their enormous power as gatekeepers while maintaining as large an audience as possible.” Well, not exactly. It’s true that Facebook and Twitter are flirting with the notion of becoming more like news organizations, but I don’t think they have decided whether they should make that leap. Mostly they are focused on channels that allow them to gain greater audiences for their ads, with few if any restrictions.
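To make the GIGO point above concrete, here is a toy sketch of an engagement-driven ranker, as mentioned in the list. This is not Facebook's newsfeed code; the post fields and weights are assumptions purely for illustration. The ranking logic is neutral about content, but because nothing in the data rewards "important but hard to like", such a story sinks no matter what the programmers believe:

```python
# Toy engagement-based feed ranker. Not Facebook's algorithm; the fields and
# weights are made up to illustrate GIGO: the code is neutral, but it
# amplifies whatever the engagement data rewards.

def engagement_score(post: dict) -> float:
    """Weight easy reactions heavily; nothing rewards 'important but hard'."""
    return (1.0 * post["likes"]
            + 2.0 * post["comments"]
            + 3.0 * post["shares"])

def rank_feed(posts: list[dict]) -> list[dict]:
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"title": "Cat picture",          "likes": 900, "comments": 40,  "shares": 120},
    {"title": "Outrage-bait hoax",    "likes": 500, "comments": 300, "shares": 400},
    {"title": "Hard, important news", "likes": 30,  "comments": 5,   "shares": 10},
]

for p in rank_feed(posts):
    print(p["title"], engagement_score(p))
# The important story ranks last, is shown to fewer people, collects even
# fewer likes, and sinks further: the "spiral of algorithmic silence".
```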

In short, like many of the IT think pieces I have seen in the Times, it is filled with wrongheaded generalities and overstatements, plus a few concrete examples buried somewhere in the piece that were likely the thing that generated the idea to write it in the first place. Terrible.

Is Mark Zuckerberg’s $45 Billion Facebook donation good or not? Best to consider the alternative

In analysing the donation, Forbes (in The Surprising Math In Mark Zuckerberg’s $45 Billion Facebook Donation) sums it up like this:

Mr. Zuckerberg’s pledge is incredibly generous. But it is also likely to involve some very savvy tax planning.

It’s true, the donation is incredibly generous. You can use all the superlatives you want, and it still amounts to something out of the ordinary. Is the donation financially savvy? Of course, but why shouldn’t it be? Either way, will it be spent well? Possibly, possibly not. For that, we will have to wait and see.

As I read people arguing against the donation, I thought about what the alternatives could be. One alternative is no donation at all: plenty of very rich people donate only a small fraction of their money to good causes. Young billionaires aspiring to be major benefactors should be encouraged, not discouraged. Another alternative is donations to political causes I disagree with, which quite a few billionaires make. I prefer to see the billions directed otherwise.

As for people arguing for the donation, I wondered if they considered the alternative of the money going to taxes or to existing charities. Perhaps Zuckerberg will be very good at directing the money, better than the state or NGOs, but I’d like to see a good portion go to them regardless. Too little of the wealth of the 1% (or the .01%) goes to paying the taxes that fund things like social services, health care, the military and infrastructure; more money to pay for those things would be better. Likewise, well-run charities are already up and running and can spend the money in efficient ways that a new organization cannot.

This donation is a positive thing, but you should still be able to think critically about it. Mark Zuckerberg is a smart guy and he’s maturing. Let’s hope he uses his good fortune to do good in the world.

 

On Facebook, the company

Facebook is a company. It’s not Mark Zuckerberg. It’s not an app you use on your phone. It’s a collection of services that is growing rapidly, and it may be poised to grow at even crazier rates than it does now, if you believe what is in this piece, Inside Mark Zuckerberg’s Bold Plan For The Future Of Facebook. A key point it raises:

The Facebook of today—and tomorrow—is far more expansive than it was just a few years ago. It’s easy to forget that when the company filed to go public on February 1, 2012, it was just a single website and an app that the experts weren’t sure could ever be profitable. Now, “a billion and a half people use the main, core Facebook service, and that’s growing. But 900 million people use WhatsApp, and that’s an important part of the whole ecosystem now,” Zuckerberg says. “Four hundred million people use Instagram, 700 million people use Messenger, and 700 million people use Groups. Increasingly, we’re just going to go more and more in this direction.”

Reading this, you get the sense of a company that is going to be bigger in a few years than it is now, which seems incredible to me. Note this article: it will be worth revisiting in a few years.

That said, there are a few points I’d like to add:

  1. I actually think that Facebook the app/website is declining in active usage. It is very clever at showing you things people like, even when the people you know aren’t posting anything. You get a sense of activity on Facebook whenever you log in; you never get the sense that it is not being used, even if many of the people you follow aren’t actively contributing at all. I suspect that if you dropped your Facebook friends down to next to none, it would still show you the same amount of information. If Facebook the company is going to remain successful, it needs to diversify from its main service.
  2. It is interesting that people continue to compare Twitter to Facebook. To me, there is little to compare: Facebook has a better growth plan and even a better app. If Facebook the service declines, its diversification into WhatsApp and Instagram is strong in a way that is unlikely to be matched by services like Vine or Periscope. While there is some commonality between the two companies, I think the story of their divergence will become a bigger one over time. Contributing to that divergence is the fact that Facebook remains a stable company with stable leadership, while Twitter’s leadership remains chaotic and unstable.
  3. The narrative in that story is very optimistic. If the numbers for any of those organizations start to slip, I could see the narrative changing, just like it has for so many IT companies. Right now the narrative is: Facebook is a very successful company and it is going to become more successful with all these promising ideas. The narrative can easily become: Facebook is a very troubled company and it is going to become more troubled with all these ideas doomed to fail. (See Yahoo! for an example of such a narrative.)

Facebook Publishes New York Times, BuzzFeed with “Instant Articles”. Let’s note this.

According to Re/code, the New York Times, BuzzFeed and others have received really good terms from Facebook regarding the publishing of “Instant Articles”. For instance:

Facebook’s “Instant Articles” are designed to load, um, instantly on Facebook’s iOS app — which is the heart of Facebook’s pitch.

Facebook lets publishers use their own publishing tools, and then converts stories automatically into a format that works on Facebook’s app. There are also some cool bells and whistles, like a photo and video-panning feature Facebook imported from its all-but-forgotten Paper app. Here’s a demo video:

Facebook will let publishers keep 100 percent of the revenue they sell for “Instant Articles”; if they have unsold inventory Facebook will sell it for them via its own ad network and give publishers 70 percent of that revenue.

Facebook will give “Instant Article” publishers access to performance data on their stuff, provided by Google Analytics and Adobe’s Omniture.

ComScore, the Web’s most important measurement company, will give “Instant Article” publishers full credit for any traffic those stories generate on Facebook’s app.

Publishers can control much of the look and feel of how Facebook presents their stories; the item BuzzFeed publishes tomorrow won’t be mistaken for National Geographic’s.

Facebook says it won’t alter its algorithm to favor “Instant Articles” over any other kind of content. But given their novelty, and the fact they’re designed to be eye-catching, it seems very likely that these things will get lots of attention at the start.

Very generous. Enticing, even.
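Just to spell out the arithmetic of the revenue terms described above (publishers keep 100 percent of the ads they sell themselves, and 70 percent when Facebook fills their unsold inventory), here is a quick sketch; the dollar amounts are hypothetical, purely for illustration:

```python
# Quick arithmetic on the Instant Articles terms described above.
# Publishers keep 100% of directly sold ads and 70% of revenue when
# Facebook sells their unsold inventory. The figures below are made up.

def publisher_revenue(direct_sales: float, facebook_sold: float,
                      facebook_share: float = 0.30) -> float:
    """Publisher take under the reported split."""
    return direct_sales + facebook_sold * (1.0 - facebook_share)

# Hypothetical month: $80k sold directly, $20k of unsold inventory
# filled by Facebook's ad network.
print(publisher_revenue(80_000, 20_000))  # 80,000 + 14,000 = 94,000
```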

I am keen to revisit this a year from now, to see if Facebook has revised these terms. If Facebook treats these terms the way it treats your privacy, in a year or so I expect the revised terms will not be as generous. And if some companies are not careful, they will find they have let their own IT teams dwindle and will have no choice but to stick with Facebook.