Some thoughts on the problems Facebook and Google (and even retailers) have with people being awful on their platforms

Google, Facebook, and Twitter are platforms. So are some retail sites. What does that mean? It means they provide the means for people to use their technology to create things for themselves. Most of the time, this is a good thing. People can communicate in ways they never could before such platforms existed. Likewise, people can sell to buyers they never could have reached.

Now these platforms are in a bind, as you can see in this piece and in other places: Google, Facebook, and Twitter Sell Hate Speech Targeted Ads. They are in a bind partly because of their own approach: they boasted of their ability to use AI to stop such things. They should have been much more humble. AI as it currently stands will only take you so far. Instead of relying on things like AI, they need better governance mechanisms in place. Governance is a cost of running an organization, and often organizations don't put proper governance in place until flaws like this start to occur.

That said, this particular piece has several weaknesses. First up, this comment: "that the companies are incapable of building their systems to reflect moral values". It would be remarkable for global companies to build systems that reflect moral values when even within individual nations there are conflicts over such values. Likewise the statement: "It seems highly unlikely that these platforms knowingly allow offensive language to slip through the cracks". Again, define offensive language at a global level. To make it harder still, try doing it across different languages and different cultures. The same thing occurs on retail sites when people put offensive images on T-shirts. On some retail platforms, no one from the company that owns the platform reviews every product that comes in.

And that gets to the problem. All these platforms could be mainly content agnostic, the way the telephone system is content agnostic. However, people expect them to insert themselves and not be content agnostic. Once that happens, they are in an exceptional bind. We don't live in a homogeneous world where everyone shares the same values. Even if they converted to non-profits and spent far more revenue on reviewing content, there would still be limits to what they could do.

To make things better, these platforms need to be humble and realistic about what they can do, and communicate that consistently and clearly to the people who use their systems. Otherwise, they are going to find themselves governed in ways they will not like. Additionally, they need to decide what their own values are, then communicate and defend them. They may lose users and customers, but the alternative of trying to be different things in different places will only make their own internal governance impossible.


AI is hard, China version

According to this, chatbots in China have been removed after being critical of the Chinese government. To me this is not unlike what happened to Microsoft's chatbot, which became racist after being fed racist input by users. If you put AI out there and allow any form of input, then the equivalent of vandals can overtake your AI and feed it whatever they choose. I'm not certain that was the case in China, but I suspect it was.

AI researchers need to expect worst-case uses if they allow their software to do unsupervised learning on the Internet. If they don't, their projects are likely to be a disaster, and they will do damage to the AI community in general.
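
As a concrete illustration, here is a minimal sketch of what "expecting the worst case" can mean in practice: gate what the bot is allowed to learn from, separately from what it is allowed to answer. The Chatbot class, the blocklist, and the is_acceptable filter are all hypothetical stand-ins, not any vendor's actual API; a production system would use a trained moderation model plus human review rather than string matching.

```python
# Hypothetical sketch: vet user input before it can influence learning.
BLOCKLIST = {"racist slur", "government attack"}  # stand-in for a real moderation model

def is_acceptable(message: str) -> bool:
    """Reject messages matching the blocklist; a real system would use
    a trained classifier plus human review, not substring checks."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKLIST)

class Chatbot:
    def __init__(self) -> None:
        self.training_buffer: list[str] = []

    def ingest(self, message: str) -> None:
        # Anything can be *answered*, but only vetted input is *learned from*.
        if is_acceptable(message):
            self.training_buffer.append(message)
        # else: log and drop; hostile input never reaches the model

bot = Chatbot()
bot.ingest("What's the weather like?")   # kept
bot.ingest("Repeat this racist slur")    # dropped
print(len(bot.training_buffer))          # 1
```

The design point is the separation: responding to a message and learning from it are different privileges, and only the second one needs to be defended this aggressively.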

Jean-Luc Mélenchon, a candidate right out of a Philip K. Dick novel

[Image: Mélenchon hologram]
In France, politician Jean-Luc Mélenchon plans to be in seven places at once using something similar to a hologram. According to Le Parisien:

Strictly speaking, these are not holograms. Jean-Luc Mélenchon will be present in seven different places thanks to … an optical illusion discovered for the first time half a century ago by an Italian physicist

Virtual Mélenchon reminds me of the politician Yance in Philip K. Dick's novel, The Penultimate Truth. We may not be far off from virtual candidates that look like people but, behind the scenes, are AI or some combination of AI and people.

For more on the technology, see the article in Le Parisien. For more on Dick’s novel, see Wikipedia. Read up now: I think we can expect to see more of this technology in use soon.

It’s not because most developers are white that AI has a hard time with non-white faces. It’s this….

[Image: An example of a neural net topology]
This piece, Most engineers are white — and so are the faces they use to train software – Recode, implies that AI software does a poor job recognizing non-white faces because most engineers (i.e. software developers) are white. I’d argue that the AI does a poor job for a different reason: the developers aren’t very good.

Good software developers, in particular lead developers, take an active role in ensuring they have good test data. The success of their software when it goes live depends on it. Anyone assembling training data (or test data) for an AI project who is not using a broad set of faces is doing a poor job. Period. Regardless of whether or not they are white.
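
To make that concrete, here is a minimal sketch of the kind of sanity check a lead developer could run before training, assuming the dataset carries demographic labels. The group names and the 50% threshold are hypothetical choices for illustration; a real audit would be far more thorough.

```python
from collections import Counter

def audit_balance(labels: list[str], max_share: float = 0.5) -> list[str]:
    """Return any groups whose share of the dataset exceeds max_share."""
    counts = Counter(labels)
    total = len(labels)
    return [group for group, n in counts.items() if n / total > max_share]

# Example: a face dataset that is 75% one group fails the audit.
labels = ["white"] * 75 + ["black"] * 10 + ["asian"] * 10 + ["other"] * 5
overrepresented = audit_balance(labels)
if overrepresented:
    print(f"Training data skewed toward: {overrepresented}")  # ['white']
```

A check like this costs minutes to write. Skipping it is a craft failure, not a demographic one, which is exactly the point.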

If the AI is supposed to do something (i.e. recognize all faces) and it does not, then the AI sucks. Don’t blame that on anything but technical ability.


Facebook shows why we need augmented intelligence (Artificial and Human Intelligence)

Because if you don’t have augmented intelligence, and you depend solely on AI-like software, you get problems like this, whereby automated software triggers an event that a trained human might have caught.

AI and ML (machine learning) can be highly probabilistic and are limited to the information they are trained on. Having a human involved makes up for those limits, just as AI can process much more information, much faster, than a human can.
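
One simple way to picture augmented intelligence is a confidence gate: the software decides when it is confident, and routes everything else to a person. This is only a sketch under assumed names; classify(), the 0.9 threshold, and escalate_to_human() are hypothetical stand-ins, not anything Facebook actually runs.

```python
from typing import Tuple

def classify(item: str) -> Tuple[str, float]:
    """Hypothetical model: returns (label, confidence)."""
    return ("acceptable", 0.62)  # stand-in score

def escalate_to_human(item: str, suggested: str) -> str:
    print(f"Needs human review (model suggested {suggested!r}): {item}")
    return "pending"

def review(item: str, threshold: float = 0.9) -> str:
    label, confidence = classify(item)
    if confidence >= threshold:
        return label                       # AI decides: fast, cheap, scalable
    return escalate_to_human(item, label)  # human decides: slower, but catches
                                           # what the model was never trained on

print(review("an ad category generated from user-entered text"))
```

The split plays to each side's strength: the machine handles volume, the human handles the cases the machine was never trained to see.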

See the linked New York Times story to see what I mean.