According to this, chatbots in China have been removed after being critical of the Chinese government. This to me is not unlike what happened to Microsoft's chatbot, which became racist after being fed racist input from users. If you put AI out there and allow any form of input, then the equivalent of vandals can overtake your AI and feed it whatever they choose. I'm not certain that was the case in China, but I suspect it was.
AI researchers need to plan for the worst-case uses if they allow their software to do unsupervised learning on the Internet. If they don't, their projects will likely end in disaster and do damage to the AI community in general.
Posted in AI
Tagged AI, chatbots, China
In France, politician Jean-Luc Mélenchon plans to be in seven places at once using something similar to a hologram. According to Le Parisien:
Strictly speaking, these are not holograms. Jean-Luc Mélenchon will be present in seven different places thanks to … an optical illusion discovered for the first time half a century ago by an Italian physicist
Virtual Mélenchon reminds me of the politician Yance in Philip K. Dick’s novel, The Penultimate Truth. We may not be far from virtual candidates that look like people but are, behind the scenes, AI or some combination of AI and people.
For more on the technology, see the article in Le Parisien. For more on Dick’s novel, see Wikipedia. Read up now: I think we can expect to see more of this technology in use soon.
Posted in AI, ideas, IT, politics
Tagged AI, France, French, IT, philipkdick, politics, sci-fi, sciencefiction, SF
This piece, Most engineers are white — and so are the faces they use to train software – Recode, implies that AI software doesn’t do a good job recognizing non-white faces because most engineers (i.e. software developers) are white. I’d argue that the AI does a poor job because of this: the developers aren’t very good.
Good software developers, in particular the lead developers, take an active role in ensuring they have good test data. The success of their software when it goes live depends on it. Anyone working on an AI project whose training data (i.e. test data) does not include a broad set of faces is doing a poor job. Period. Regardless of whether or not they are white.
If the AI is supposed to do something (i.e. recognize all faces) and it does not, then the AI sucks. Don’t blame it on anything but technical ability.
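To make the point concrete, here is a minimal sketch of the kind of check a good lead developer could run before training: audit how balanced the training data is across groups. All the names, labels, and the threshold below are hypothetical, not from any real project.

```python
from collections import Counter

# Hypothetical training samples: (image_path, demographic_label) pairs.
# In a real project these labels would come from dataset metadata.
samples = [
    ("img_001.jpg", "white"),
    ("img_002.jpg", "white"),
    ("img_003.jpg", "white"),
    ("img_004.jpg", "black"),
    ("img_005.jpg", "asian"),
]

def audit_balance(samples, threshold=0.5):
    """Warn if any single group makes up more than `threshold` of the data."""
    counts = Counter(label for _, label in samples)
    total = sum(counts.values())
    warnings = []
    for label, n in counts.items():
        share = n / total
        if share > threshold:
            warnings.append(f"{label} is {share:.0%} of the data -- add more variety")
    return counts, warnings

counts, warnings = audit_balance(samples)
print(counts)    # white dominates this toy dataset
print(warnings)  # one warning: white is 60% of the data
```

A check like this takes minutes to write, which is the point: skipping it is a technical failure, not an inevitability.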
Because if you don’t have augmented intelligence, and you depend solely on AI-like software, you get problems like this, where automated software triggers an event that a trained human might have caught.
AI and ML (machine learning) can be highly probabilistic and limited to the information they are trained on. Having a human involved makes up for those limits, just as AI makes up for human limits by processing far more information, far faster, than a person can.
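One common way to keep a human involved, sketched below, is to route low-confidence model outputs to a person rather than acting on them automatically. The threshold, labels, and confidence values are made up for illustration; this is not taken from any system described in the story.

```python
# A minimal human-in-the-loop sketch: predictions below a confidence
# threshold go to a person for review instead of triggering an action.

def route(prediction, confidence, threshold=0.9):
    """Return who should handle this result: the machine or a human."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

results = [
    route("send_alert", 0.55),  # ambiguous -- a human should check first
    route("no_alert", 0.97),    # clear-cut -- safe to automate
]
print(results)  # first goes to human review, second is automated
```

The design choice is simple: automation handles the easy, high-volume cases, and the rare ambiguous ones, which are exactly where automated software does damage, get human judgment.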
See the link to the New York Times story to see what I mean.