Sure, you can make yourself busy on this warm summer weekend. Or you can chill for a bit and read one of these thoughtful pieces. I know which one I am going to do. 🙂
- Here’s a piece on the joy of Latin. Really.
- 100% this: The Case for Killing the Trolley Problem
- Worthwhile: Piketty on equality.
- This is a weak piece that tries to link AI to colonialism but fails to make the case: AI colonialism.
- Do you have siblings? Read this: How Your Siblings Can Make You Happier.
- Worth chewing on: The limits of forgiveness.
- On one of our oldest technologies: the importance of wood.
- Dive into this list of common misconceptions.
- Finally, this piece on Alexa with the voice of dead people will get you thinking.
Since the beginning of the digital age, we have referred to quickly retrievable computer storage as “memory”. It has some resemblance to memory, but it has none of the complexity of our memories and how they work. If you talked to most people about this, I don’t think there would be many who would think they are the same.
Artificial Intelligence isn’t our Intelligence, regardless of how good it gets. AI is going to have some resemblance to our intelligence, but it has none of the complexity of our intelligence and how it works. Yet you can talk to many who think that over time they will become the same.
I was thinking about that last week after the kerfuffle over the Google engineer who claimed their software was sentient. Many, many think pieces have been written about it; I think this one is the best I read from a layperson’s perspective. If you are concerned about it or simply intrigued, read that. It’s a bit harsh on the Turing test, but I think overall it’s worth your time.
It is impressive what leaps information technology is making. But however much it resembles us as humans, it is not human. It will not have our memories. It will not have our intelligence. It will not have the qualities that make us human, any more than a scarecrow does.
I’m often critical of robots and their relatives here, but these particular drones seem very good indeed. As that linked article explains:
swarms of (these) seed-firing drones … are planting 40,000 trees a day to fight deforestation… (their) novel technology combines artificial intelligence with specially designed proprietary seed pods that can be fired into the ground from high in the sky. The firm claims that it performs 25 times faster and 80 percent cheaper compared to traditional seed-planting methodologies.
I am sure there is still a role for humans in reforestation, but the faster and cheaper it can be done, the better. A good use of technology.
Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.
What can I say? Well, for one thing, I am embarrassed for my profession that anyone takes that system seriously. It’s a joke. Anyone who has done any reading on ethics or morality can tell you very quickly that any moral decision of weight cannot be resolved with a formula. The Delphi system can’t make moral decisions. It’s like ELIZA: it could sound like a doctor but it couldn’t really help you with your mental health problem.
Too often people from IT blunder into a field, reduce its problems to something computational, produce a new system, and yell “Eureka!” The lack of humility is embarrassing.
What IT people should do is spend time reading and thinking about ethics and morality. If they did, they’d be better off. If you are one of those people, go to fivebooks.com and search for “ethics” or “moral”. From those books you will learn something. You cannot learn anything from the Delphi system.
P.S. For more on that Delphi system, see: Can a Machine Learn Morality? – The New York Times.
(Photo by Gabriella Clare Marino on Unsplash)
This article on cells – yes, cells! – navigating mazes is fascinating and worth a read: Seeing around corners: Cells solve mazes and respond at a distance using attractant breakdown
After reading it, I thought: I need to rethink “intelligence”. Navigating mazes is something that was long considered an intelligent act. Indeed, one of the early experiments in A.I. was in the 1950s, when Marvin Minsky developed a smart “rat” (see above) to make its way through a maze. (That’s worth reading about as well.)
Seeing the cell navigate the maze, I thought: if the qualities we associate with intelligence are found at a cellular level, then I don’t really understand intelligence at all. It’s as if intelligence has an atomic level. As if intelligence is at all levels of life, not just the more complex levels.
Maybe the concept of intelligence is next to meaningless and needs to be replaced by something better. Read those pieces and think for yourself. After all, you are intelligent. 🙂
If you have even a passing knowledge of IT, you have likely heard of Pepper and Watson. Pepper was a robot and Watson was an AI system that won at Jeopardy. Last week The Verge and the New York Times had articles on them both:
- Go read how Pepper was a very bad robot – The Verge
- What Ever Happened to IBM’s Watson? – The New York Times
I don’t have any specific insights or conclusions about either technology, other than trite summations like “cutting edge technology is hard” and “don’t believe the hype”. AI and robotics are especially hard, so the risks are high and the chances of failure are high. That comes across in these two pieces.
Companies from Tesla to Boston Dynamics and more are making grand claims about their AI and their robotics. I suspect much of it will suffer the same fate as Pepper and Watson. As with most failures, none of it is final or fatal. People learn from their mistakes and move on to make better things. AI and robotics will continue to advance… just not at the pace many would like it to.
In the meantime, go read those articles. Especially if you are finding yourself falling for the hype.
(Image: from The Verge)
I am a fan of smart speakers, despite the privacy concerns around them. If you are ok with that and you have one or are planning to get one, read these two links to see how you can get more out of them:
- How to control Sonos with Google Assistant
- Alexa Skills That Are Actually Fun and Useful | WIRED
I use Google Assistant on my Sonos and they make a great device even better. And while I do have Google Home devices in other parts of the house, I tend to be around the Sonos most, so having it there to do more than just play music is a nice thing indeed.
Posted in AI, IT
Tagged AI, alexa, google, IT, Sonos
- How to control Sonos with Google Assistant – good if you like / use Google assistant
- Sonos speakers now work with IFTTT so you can automate your music – good if you are a fan of IFTTT, like I am
The Sonos One is a smart little speaker. Using Google Assistant and IFTTT.com makes it even smarter.
Chatbots are relatively straightforward to deploy these days. AI providers like IBM and others provide all the technology you need. But do you really need them? And if you already have a bunch of them deployed, are you doing it right? If these questions have you wondering, I recommend you read this: Does Your Company Really Need a Chatbot?
You still may want to proceed with chatbots: they make a lot of business sense for certain types of work. But you will have a better idea when not to use them, too.
What are some of the flaws with facial recognition software? Too many for me just to list. Instead, read this article to get a sense of how bad this software can be.
San Francisco is in the vanguard of trying to rein in this technology. Let’s hope more jurisdictions do the same.
I am glad to see more articles highlighting the difference between ML and AI. For example, this one: How machine learning is different from artificial intelligence – IBM Developer.
There is still lots to be done in the field of machine learning, but I think technologists and scientists need to break out of that tight circle and explore AI in general.
(Image: from the article)
Nope. And this piece, Machine Learning Vs. Artificial Intelligence: How Are They Different?, does a nice job of reviewing them at a non-technical level. At the end, you should see the differences.
(The image, via g2crowd.com, also shows this nicely).
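To make the distinction concrete, here is a minimal sketch in Python, with made-up spam data. A hand-written rule counts as “AI” in the broad sense; what makes the second snippet machine learning is that its rule is derived from examples rather than written by a programmer.

```python
# A hand-coded rule: "AI" in the broadest sense. The intelligence was
# put there by a programmer, not learned from data.
def rule_based_is_spam(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning: the "rule" (here, a crude message-length threshold)
# is derived from labelled examples instead of being written by hand.
def learn_length_threshold(examples):
    """examples: list of (message, is_spam) pairs."""
    spam_lengths = [len(m) for m, spam in examples if spam]
    ham_lengths = [len(m) for m, spam in examples if not spam]
    # Split the difference between the two class averages.
    return (sum(spam_lengths) / len(spam_lengths) +
            sum(ham_lengths) / len(ham_lengths)) / 2

training_data = [
    ("hi", False),
    ("lunch?", False),
    ("CLICK HERE for free money now!!!", True),
    ("you have won a prize, act fast!!", True),
]
threshold = learn_length_threshold(training_data)

def learned_is_spam(message: str) -> bool:
    return len(message) > threshold
```

A length threshold is a terrible spam filter, of course; the point is only that one function’s behaviour was authored and the other’s was fitted to data. That fitting step is the “learning” in machine learning.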
Possibly, but as this article argues, there are at least three areas where robots still suck:
Creative endeavours: These include creative writing, entrepreneurship, and scientific discovery. These can be highly paid and rewarding jobs. There is no better time to be an entrepreneur with an insight than today, because you can use technology to leverage your invention.
Social interactions: Robots do not have the kinds of emotional intelligence that humans have. Motivated people who are sensitive to the needs of others make great managers, leaders, salespeople, negotiators, caretakers, nurses, and teachers. Consider, for example, the idea of a robot giving a half-time pep talk to a high school football team. That would not be inspiring. Recent research makes clear that social skills are increasingly in demand.
Physical dexterity and mobility: If you have ever seen a robot try to pick up a pencil you see how clumsy and slow they are, compared to a human child. Humans have millennia of experience hiking mountains, swimming lakes, and dancing—practice that gives them extraordinary agility and physical dexterity.
Read the entire article; there’s much more in it than that. But if your job has some element of those three qualities, chances are robots won’t be replacing you soon.
Here’s an assortment of 42 links covering everything from Kubernetes to GCP and other cloud platforms to IoT to Machine Learning and AI to all sorts of other things. Enjoy! (Image from the last link)
- Prometheus Kubernetes | Up and Running with CoreOS, Prometheus and Kubernetes: Deploying – Kubernetes monitoring with Prometheus in 15 minutes – some good links on using Prometheus here
- Deploying a containerized web application | Container Engine Documentation | Google Cloud Platform – a good intro to using GCP
- How to classify workloads for cloud migration and decide on a deployment model – Cloud computing news – great insights for any IT Architects
- IP Address Locator – Where is this IP Address? – a handy tool, especially if you are browsing firewall logs
- Find a Google Glass and kick it from the network – Detect and disconnect WiFi cameras in that AirBnB you’re staying in – good examples of how to catch spying devices
- The sad graph of software death – a great study on technical debt
- OpenTechSchool – Websites with Python Flask – get started building simple web sites using Python
- Build Your Own “Smart Mirror” with a Two-Way Mirror and an Android Device – this was something I wanted to do at some point
- Agile for Everybody: Why, How, Prototype, Iterate – On Human-Centric Systems – Medium – Helpful for those new or confused by Agile
- iOS App Development with Swift | Coursera – For Swift newbies
- Why A Cloud Guru Runs Serverless on AWS | ProgrammableWeb – If you are interested in serverless, this is helpful
- Moving tech forward with Gomix, Express, and Google Spreadsheets | MattStauffer.com – using spreadsheets as a database. Good for some
- A Docker Tutorial for Beginners – More Docker 101.
- What is DevOps? Think, Code, Deploy, Run, Manage, Learn – IBM Cloud Blog – DevOps 101
- Learning Machine Learning | Tutorials and resources for machine learning and data analysis enthusiasts – Lots of good ML links
- Machine learning online course: I just coded my first AI algorithm, and oh boy, it felt good — Quartz – More ML
- New Wireless Tech Will Free Us From the Tyranny of Carriers | WIRED – This is typical Wired hype, but interesting
- How a DIY Network Plans to Subvert Time Warner Cable’s NYC Internet Monopoly – Motherboard – related to the link above
- Building MirrorMirror – more on IT mirrors
- Minecraft and Bluemix, Part 1: Running Minecraft servers within Docker – fun!
- The 5 Most Infamous Software Bugs in History – OpenMind – also fun!
- The code that took America to the moon was just published to GitHub, and it’s like a 1960s time capsule — Quartz – more fun stuff. Don’t submit pull requests 🙂
- The 10 Algorithms Machine Learning Engineers Need to Know – More helpful ML articles
- User Authentication with the MEAN Stack — SitePoint – if you need authentication, read this…
- Easy Node Authentication: Setup and Local ― Scotch – .. or this
- 3 Small Tweaks to make Apache fly | Jeff Geerling – Apache users, take note
- A Small Collection of NodeMCU Lua Scripts – Limpkin’s blog – Good for ESP users
- Facebook OCP project caused Apple networking team to quit – Business Insider – Interesting, though I doubt Cisco is worried
- Hacked Cameras, DVRs Powered Today’s Massive Internet Outage — Krebs on Security – more on how IoT is bad
- Learn to Code and Help Nonprofits | freeCodeCamp – I want to do this
- A Simple and Cheap Dark-Detecting LED Circuit | Evil Mad Scientist Laboratories – a fun hack
- Hackers compromised free CCleaner software, Avast’s Piriform says | Article [AMP] | Reuters – this is sad, since CCleaner is a great tool
- Is AI Riding a One-Trick Pony? – MIT Technology Review – I believe it is and if AI proponents are not smart they will run into another AI winter.
- I built a serverless Telegram bot over the weekend. Here’s what I learned. – Bot developers might like this.
- Google’s compelling smartphone pitch – Pixel 2 first impressions | IT World Canada News – The Pixel 2 looks good. If you are interested, check this out
- Neural networks and deep learning – more ML
- These 60 dumb passwords can hijack over 500,000 IoT devices into the Mirai botnet – more bad IoT
- If AWS is serious about Kubernetes, here’s what it must do | InfoWorld – good read
- 5 Ways to Troll Your Neural Network | Math with Bad Drawings – interesting
- IBM, Docker grow partnership to drive container adoption across public cloud – TechRepublic – makes sense
Posted in IT
Tagged AI, cloud, computers, GCP, IOT, IT, Kubernetes, machinelearning, MEAN, ML, nodeJS
WIRED has a good review of the latest product from Sonos, here: Sonos One Review: Amazon’s Alexa Is Here, But It Still Has Some Growing Up to Do
What makes this development significant to me is that it signals that Sonos is concerned with Apple and others coming and taking away market share. Sonos has a great line of products already, but Apple is threatening to take a piece of that with their new home speaker with Siri/AI capability. Sonos has beefed up their AI capability to meet the challenge.
I expect that the next big thing in IT will be the vocal interface tied in with a speaker system in some form. I expect we will see them everywhere. Perhaps not for extended communication, but for brief and frequent requests.
If you are an IT person, I recommend you learn more about chatbot technology and how it will integrate with the work you are doing. More and more users will want to be able to communicate with your systems using voice. You need to provide a vocal interface for them to get information and send information.
Most homes will have one device acting as an aural hub. Sonos wants to make sure it is one they make, and not Apple.
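What does it mean in practice to give your systems a vocal interface? Once the speech-to-text is done, the core job is intent routing: mapping what the user said to a handler in your existing system. Here is a deliberately naive sketch in Python (the handlers and keyword sets are invented; real assistants use trained language models for this step), meant only to show the shape of the integration:

```python
# Naive intent routing: match an utterance to a backend handler.
# Keyword matching stands in for a real natural-language model.

def get_order_status(text):
    # Stand-in for a call into your order system.
    return "Your order shipped yesterday."

def get_balance(text):
    # Stand-in for a call into your billing system.
    return "Your balance is $42.17."

INTENTS = [
    ({"order", "shipping", "package"}, get_order_status),
    ({"balance", "account", "owe"}, get_balance),
]

def handle_utterance(text: str) -> str:
    words = set(text.lower().split())
    for keywords, handler in INTENTS:
        if words & keywords:  # any keyword present in the utterance
            return handler(text)
    return "Sorry, I didn't understand that."

print(handle_utterance("where is my package"))
# prints "Your order shipped yesterday."
```

The hard parts in a real deployment are the speech recognition and the language understanding, which the platform (Alexa, Google Assistant) supplies; your work as an IT person is mostly on the right-hand side of that table, wiring intents to the systems you already run.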
Posted in IT
Tagged AI, apple, chatbots, IT, Sonos
This piece: What it’s like to be a modern engraver, the most automated job in the United States — Quartz, reminded me once again that the best use of technology is to augment the people doing the work, not simply take away the work. It is must reading for anyone who believes that the best way to use AI and other advanced tech is to eliminate jobs. My belief is that the best way to use AI and other advanced tech is to make jobs better for the employee, the employer, and the customer alike. The businesses that will succeed will share that belief.
(Image from this piece on how humans and robots can work together.)
According to Haydn Waters, a writer at CBC, the mail robots at the corporation are being discontinued. Instead:
“Mail will be delivered twice a week (Tuesday and Thursday) to central mail delivery/pickup locations on each floor.”
What gets lost in a lot of discussions of robots, AI, etc., taking all the jobs is that the driver for these decisions is not technology but economics. If there is no economic need for robots and other technology, then that technology will not just appear. There is nothing inevitable about technology, and any specific technology is temporary.
Of course there will be more use of robots and AI and other technology to replace the work people currently do. The key to finding work will be to continually improvise and improve on the tasks one has to do to remain employed. That’s something humans do well, and technology will struggle with for some time to come, AI hype notwithstanding.
If you are looking to build AI tech, or just learn about it, then you will find these interesting:
- Artificial intelligence pioneer says we need to start over – Axios – if Hinton says it, it is worth taking note
- Robots Will Take Fast-Food Jobs, But Not Because of Minimum Wage Hikes | Inverse – true. Economists need to stop making such a strong link here.
- Artificial Intelligence 101: How to Get Started | HackerEarth Blog – a good 101 piece
- Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level – MIT Technology Review – the ability of tech to learn is accelerating.
- Now AI Machines Are Learning to Understand Stories – MIT Technology Review – and not just accelerating, but getting deeper.
- Robots are coming for your job. That might not be bad news – good alternative insight from Laurie Penny.
- Pocket: Physicists Unleash AI to Devise Unthinkable Experiments – not surprisingly, a smart use of AI
- AI’s dueling definitions – O’Reilly Media – this highlights one of the problems with AI: it is a suitcase word (or term), and people fill it with whatever they want to fill it with
- A Neural Network Playground – a very nice tool to start working with AI
- Foxconn replaces ‘60,000 factory workers with robots’ – BBC News – there is no doubt in places like Foxconn, robots are taking jobs.
- 7 Steps to Mastering Machine Learning With Python – don’t be put off by this site’s design: there is good stuff here
- How Amazon Triggered a Robot Arms Race – Bloomberg – Amazon made a smart move with that acquisition and it is paying off
- When Police Use Robots to Kill People – Bloomberg – this is a real moral quandary and I am certain the police shouldn’t be the only ones deciding on it. See also: A conversation on the ethics of Dallas police’s bomb robot – The Verge
- How to build and run your first deep learning network – O’Reilly Media – more good stuff on ML/DL/AI
- This expert thinks robots aren’t going to destroy many jobs. And that’s a problem. | The new new economy – another alternative take on robots and jobs
- Neural Evolution – Building a natural selection process with AI – more tutorials
- Uber Parking Lot Patrolled By Security Robot | Popular Science – not too long after this, one of these robots drowned in a pool in a mall. Technology: it’s not easy 🙂
- A Robot That Harms: When Machines Make Life Or Death Decisions : All Tech Considered : NPR – this is kinda dumb, but worth a quick read.
- Mathematics of Machine Learning | Mathematics | MIT OpenCourseWare – if you have the math skills, this looks promising
- Small Prolog | Managing organized complexity – I will always remain an AI/Prolog fan, so I am including this link.
- TensorKart: self-driving MarioKart with TensorFlow – a very cool application
- AI Software Learns to Make AI Software – MIT Technology Review – there is less here than it appears, but still worth reviewing
- How to Beat the Robots – The New York Times – meh. I think people need to learn to work with the technology, not try to defeat it. If you disagree, read this.
- People want to know: Why are there no good bots? – bot makers, take note.
- Noahpinion: Robuts takin’ jerbs
- globalinequality: Robotics or fascination with anthropomorphism – everyone is writing about robots and jobs, it seems.
- Valohai – more ML tools
- Seth’s Blog: 23 things artificially intelligent computers can do better/faster/cheaper than you can – like I said, everyone is writing about AI. Even Seth Godin.
- The Six Main Stories, As Identified by a Computer – The Atlantic – again, not a big deal, but interesting.
- A poet does TensorFlow – O’Reilly Media – artists will always experiment with new mediums
- How to train your own Object Detector with TensorFlow’s Object Detector API – more good tooling.
- Rise of the machines – the best – by far! – non-technical piece I have read about AI and robots.
- We Trained A Computer To Search For Hidden Spy Planes. This Is What It Found. – I was super impressed what Buzzfeed did here.
- The Best Machine Learning Resources – Machine Learning for Humans – Medium – tons of good resources here.
This is a pretty cool DIY project: The AIY Voice Kit Lets You Build a Google Home for Only $35.
Now, I have my qualms about letting Google have access to so much personal information. If you do not have such qualms and you want to build a cool project, click the link and head on over to Wired, where they have more information on it and how to get it.
According to this, chatbots in China have been removed after being critical of the Chinese government. To me this is not unlike what happened to Microsoft's chatbot that became racist after being fed racist input from users. If you put AI out there and allow any form of input, then the equivalent of vandals can overtake your AI and feed it whatever they choose. I'm not certain if that was the case in China, but I suspect it was.
AI researchers need to plan for the worst use cases if they allow their software to do unsupervised learning on the Internet. If they don't, it's likely that their projects will be a disaster and they will do damage to the AI community in general.
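One defensive pattern is to gate what user input is ever allowed into the learning loop. Here is a minimal sketch in Python; the blocklist and heuristics are placeholders I made up for illustration (a real system would use a trained moderation classifier, human review, or both):

```python
# Gate user messages before a chatbot is allowed to learn from them.
# The blocklist terms are placeholders, not a real moderation policy.

BLOCKLIST = {"badword1", "badword2", "propaganda_phrase"}

def safe_to_learn_from(message: str) -> bool:
    words = set(message.lower().split())
    if words & BLOCKLIST:          # contains a blocked term
        return False
    # Crude heuristic: long, all-caps messages often signal spam raids.
    if message.isupper() and len(message) > 20:
        return False
    return True

training_buffer = []

def maybe_learn(message: str) -> None:
    """Only gate-approved messages ever reach the model's training data."""
    if safe_to_learn_from(message):
        training_buffer.append(message)
```

The filter itself can and will be gamed; the point of the pattern is that nothing goes from a stranger on the Internet straight into your model's training data without passing through some gate you control.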
Posted in AI
Tagged AI, chatbots, China
In France, politician Jean-Luc Mélenchon plans to be in seven places at once using something similar to a hologram. According to Le Parisien:
Strictly speaking, these are not holograms. Jean-Luc Mélenchon will be present in seven different places thanks to … an optical illusion discovered for the first time half a century ago by an Italian physicist
Virtual Mélenchon reminds me of the politician Yance in Philip K. Dick’s novel, The Penultimate Truth. We may not be far off from virtual candidates that look like people but behind the scenes are AI, or some combination of AI and people.
For more on the technology, see the article in Le Parisien. For more on Dick’s novel, see Wikipedia. Read up now: I think we can expect to see more of this technology in use soon.
Posted in AI, ideas, IT, politics
Tagged AI, France, French, IT, philipkdick, politics, sci-fi, sciencefiction, SF
This piece, Most engineers are white — and so are the faces they use to train software – Recode, implies that AI software doesn’t do a good job recognizing non-white faces because most engineers (i.e. software developers) are white. I’d argue that the AI does a poor job because of this: the developers aren’t very good.
Good software developers, in particular lead developers, take an active role in ensuring they have good test data. The success of their software when it goes live depends on it. Anyone on an AI project whose training data (i.e. test data) does not include a broad set of faces is doing a poor job. Period. Regardless of whether or not they are white.
If the AI is supposed to do something (i.e. recognize all faces) and it does not, then the AI sucks. Don’t blame it on anything but technical abilities.
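Auditing training data for that kind of skew is cheap, which is part of why there is no excuse. Here is a minimal sketch in Python; the group labels, counts, and 10% threshold are invented for illustration:

```python
from collections import Counter

def audit_balance(samples, min_share=0.10):
    """samples: list of (item_id, group_label) pairs.
    Returns groups whose share of the data falls below min_share."""
    counts = Counter(group for _, group in samples)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Invented dataset: 90% of the labelled faces come from one group.
dataset = ([("img", "group_a")] * 90 +
           [("img", "group_b")] * 6 +
           [("img", "group_c")] * 4)

underrepresented = audit_balance(dataset)
# underrepresented -> {'group_b': 0.06, 'group_c': 0.04}
```

A check like this belongs in the same place as any other data-quality test, run before training ever starts. Deciding what the group labels and acceptable shares should be is the genuinely hard part, and that is a judgment call for the team, not the script.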
Because if you don’t have augmented intelligence, and you depend solely on AI-like software, you get problems like this, whereby automated software triggers an event that a trained human might have picked up on.
AI and ML (machine learning) can be highly probabilistic and limited to the information they were trained on. Having a human involved makes up for those limits. Likewise, AI can process much more information, much faster, than a human can.
See the link to the New York Times story to see what I mean.
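The standard pattern here is a confidence threshold: the software handles the cases it is sure about and escalates the rest to a person. A minimal sketch in Python, with a made-up classifier and threshold standing in for a real model:

```python
# Human-in-the-loop routing: automate the confident cases, escalate
# the uncertain ones. The classifier below is a stand-in for a model.

CONFIDENCE_THRESHOLD = 0.90

def classify(document: str):
    """Stand-in for a real model: returns (label, confidence)."""
    if "invoice" in document.lower():
        return ("invoice", 0.97)
    return ("unknown", 0.40)

def route(document: str) -> str:
    label, confidence = classify(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{label}"    # the machine handles it
    return "human-review"         # a trained human takes over
```

The threshold is a business decision, not a technical one: lower it and you automate more but let more mistakes through; raise it and humans see more of the traffic. Either way, the human catches the cases the model was never trained for.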
Interesting article: How IBM Watson helped Time magazine narrow its search for Person of the Year (IT Business)
From a technology point of view, it is also interesting that the IBM partner was using IBM’s Watson and Bluemix technologies.
I am biased here, as someone who works for IBM and believes in these technologies, but I do think that if you think A.I. and cognitive computing don’t have a place in your business, you should read this. In the next two years, expect all your competitors to adopt these new technologies to compete with you.
Posted in IT
Tagged AI, cognitive, IT, Time, Watson
This piece, 1.8 million American truck drivers could lose their jobs to robots. What then? (Vox) is a great primer on self-driving trucks and how they are going to have a major impact sooner rather than later.
If you are interested in IT, AI or robots, it really shows one of the places where this technology is going to have a significant impact.
If you are interested in economics, politics, or sociology, then the effect of robots replacing all these truck drivers is definitely something you want to be aware of.
If you drive on highways, you definitely want to know about it.
In any case, it’s a good piece by David Roberts. That is his beat and I find he always does a great job of breaking down a topic like this and making it easier to understand and relevant to me. I recommend any of his pieces.
If you want a better understanding of artificial intelligence or if you want to gain some insight into the future of machine learning, I recommend these two free reports, found here: Free AI Reports from O’Reilly Media. There’s so much hype and speculation about AI: these reports cut through all that noise and they will give you a better understanding of what A.I. really is and where it is going.
P.S. If you like them, check out the many great non-A.I. related reports as well. You don’t have to be a technologist to be able to read them.
I have thought a lot about Waze since I started using it. Without a doubt, it has improved my life substantially. Here are some other thoughts I had as I used it.
- Waze is an example of how software will eat the world. In this case, the world of GPS devices. Waze is a GPS on steroids. Not only will Waze do all the things that a GPS will do, it does so much more, as you can see from this other Waze post I wrote. If you have a GPS, after you use Waze for a bit, you’ll likely stop using it.
- Waze will change the way cities work. Cities are inefficient when it comes to transportation. Our work habits contribute to that, in that so many people commute at the same time, in the same direction, on the same routes, each work day. Waze and other new forms of adding intelligence to commuting will shape our work habits over time. Drivers being able to take advantage of unbusy streets to reduce congestion on major thoroughfares is just the start. City planners could work with Waze to better understand travel patterns and travel behaviour and incorporate changes into the city so that traffic flows better. It’s not that city planners don’t have such data, it’s that Waze likely has more data and better data than they currently have.
- Waze is a great example of how A.I. could work. I have no idea how much A.I. is built into Waze. It could be none; it could be a lot. It makes intelligent recommendations to me, and that is all I care about. How it makes those recommendations is a black box. Developers of A.I. technologies should look at Waze as an example of how best to deploy A.I.: focus on how A.I. can solve a problem for the user, and spend less time trying to make the A.I. seem human or overly intelligent. People don’t care about that. They care about practical applications of A.I. that make their lives better. Waze does that.
Posted in apps, IT, software
Tagged AI, apps, cities, commute, commuting, GPS, IT, planning, software, travel, Waze
This article, Datasets Over Algorithms — Space Machine, makes a good point, namely:
…perhaps many major AI breakthroughs have actually been constrained by the availability of high-quality training datasets, and not by algorithmic advances.
The chart they provide illustrates the point:
I’d argue that it isn’t solely datasets that drive A.I. breakthroughs. Better CPUs, improved storage technology, and of course new ideas can also propel A.I. forward. But if you ask me now, I think A.I. in the future will need better data to make big advances.
Posted in ideas, IT
Tagged AI, ideas, IT, software
There is so much wrong in this article, The Real Bias Built In at Facebook – The New York Times, that I decided to take it apart in this blog post. (I’ve read so many bad IT stories in the Times that I stopped critiquing them after a while, but this one in particular bugged me enough to write something).
To illustrate what I mean by what is wrong with this piece, here are some excerpts in italics followed by my thoughts in non-italics.
- First off, there is the use of the word “algorithm” everywhere. That alone is a problem. For an example of why that is bad, see section 2.4 of Paul Ford’s great piece on software, What is Code? As Ford explains: “Algorithm” is a word writers invoke to sound smart about technology. Journalists tend to talk about “Facebook’s algorithm” or a “Google algorithm,” which is usually inaccurate. They mean “software.” Now part of the problem is that Google and Facebook talk about their algorithms, but really they are talking about their software, which incorporates many algorithms. For example, Google does it here: https://webmasters.googleblog.com/2011/05/more-guidance-on-building-high-quality.html At least Google talks about algorithms, not a single algorithm. Either way, talking about algorithms is misleading. It’s software, not algorithms, and if you can’t see the difference, that is a good indication you should not be writing think pieces about I.T.
- Then there is this quote: “Algorithms in human affairs are generally complex computer programs that crunch data and perform computations to optimize outcomes chosen by programmers. Such an algorithm isn’t some pure sifting mechanism, spitting out objective answers in response to scientific calculations. Nor is it a mere reflection of the desires of the programmers. We use these algorithms to explore questions that have no right answer to begin with, so we don’t even have a straightforward way to calibrate or correct them.” What does that even mean? To me, it implies that any software that is socially oriented (as opposed to, say, banking software or airline travel software) is imprecise or unpredictable. But at best, that is only slightly true and mainly false. Facebook and Google both want to give you relevant answers. If you start typing “restaurants” or some other facility into the Google search box, Google will start suggesting answers to you. These answers will very likely be relevant to you. It is important for Google that this happens, because this is how they make money from advertisers. They have a way of calibrating and correcting this. In fact I am certain they spend a lot of resources making sure you get the correct answer or close to the correct answer. Facebook is the same way. The results you get back are not random. They are designed, built and tested to be relevant to you. The more relevant they are, the more successful these companies are. The responses are generally the right ones.
- “If Google shows you these 11 results instead of those 11, or if a hiring algorithm puts this person’s résumé at the top of a file and not that one, who is to definitively say what is correct, and what is wrong?” Actually, Google can say; they just don’t. It’s not in their business interest to explain in detail how their software works. They do explain generally, in order to help people ensure their sites stay relevant (see the link I provided above). But if they provided too much detail, bad sites would game the rankings and make Google search results worse for everyone. As well, too much detail would make it easier for other search engine sites – yes, they still exist – to compete with them.
- “Without laws of nature to anchor them, algorithms used in such subjective decision making can never be truly neutral, objective or scientific.” This is simply nonsense. Plenty of software has nothing to do with laws of nature, yet it is still specified, measured and tested against clear goals.
- “Programmers do not, and often cannot, predict what their complex programs will do.” Also untrue. If this were true, IBM could not improve Watson to be more accurate, and Google’s sales reps could not convince ad buyers that it is worth their money to pay Google to show their ads. The same goes for Facebook, Twitter, and any web site that depends on advertising as a revenue stream.
- “Google’s Internet services are billions of lines of code.” So what? How is that a measure of complexity? I’ve seen small amounts of poorly maintained code that were very hard to understand, and large amounts of well maintained code that were very simple to understand.
- “Once these algorithms with an enormous number of moving parts are set loose, they then interact with the world, and learn and react. The consequences aren’t easily predictable. Our computational methods are also getting more enigmatic. Machine learning is a rapidly spreading technique that allows computers to independently learn to learn — almost as we do as humans — by churning through the copious disorganized data, including data we generate in digital environments. However, while we now know how to make machines learn, we don’t really know what exact knowledge they have gained. If we did, we wouldn’t need them to learn things themselves: We’d just program the method directly.” This is just a cluster of ideas slammed together: a word sandwich, layers of phrases that say nothing. It makes it sound like AI has been unleashed upon the world and we are helpless to do anything about it, which is ridiculous. It is also vague enough to be hard to dispute without going into detail about how A.I. and machine learning work, yet it sounds knowledgeable enough that many people will think it has greater meaning.
- “With algorithms, we don’t have an engineering breakthrough that’s making life more precise, but billions of semi-savant mini-Frankensteins, often with narrow but deep expertise that we no longer understand, spitting out answers here and there to questions we can’t judge just by numbers, all under the cloak of objectivity and science.” This is just scaremongering.
- “If these algorithms are not scientifically computing answers to questions with objective right answers, what are they doing? Mostly, they “optimize” output to parameters the company chooses, crucially, under conditions also shaped by the company. On Facebook the goal is to maximize the amount of engagement you have with the site and keep the site ad-friendly. You can easily click on “like,” for example, but there is not yet a “this was a challenging but important story” button. This setup, rather than the hidden personal beliefs of programmers, is where the thorny biases creep into algorithms, and that’s why it’s perfectly plausible for Facebook’s work force to be liberal, and yet for the site to be a powerful conduit for conservative ideas as well as conspiracy theories and hoaxes — along with upbeat stories and weighty debates. Indeed, on Facebook, Donald J. Trump fares better than any other candidate, and anti-vaccination theories like those peddled by Mr. Beck easily go viral. The newsfeed algorithm also values comments and sharing. All this suits content designed to generate either a sense of oversize delight or righteous outrage and go viral, hoaxes and conspiracies as well as baby pictures, happy announcements (that can be liked) and important news and discussions.” This is the one thing in the piece that I agreed with, and it points to the real challenge with Facebook’s software. I think the software IS neutral, in that it is not interested in the content per se so much as in how users respond or don’t respond to it. What is NOT neutral is the data it works off of. Facebook’s software is as susceptible to GIGO (garbage in, garbage out) as any other software. So if a lot of people on Facebook are sending around cat pictures and the stupid things some politicians say, people are going to respond to that, and Facebook’s software is going to respond to that response.
- “Facebook’s own research shows that the choices its algorithm makes can influence people’s mood and even affect elections by shaping turnout. For example, in August 2014, my analysis found that Facebook’s newsfeed algorithm largely buried news of protests over the killing of Michael Brown by a police officer in Ferguson, Mo., probably because the story was certainly not “like”-able and even hard to comment on. Without likes or comments, the algorithm showed Ferguson posts to fewer people, generating even fewer likes in a spiral of algorithmic silence. The story seemed to break through only after many people expressed outrage on the algorithmically unfiltered Twitter platform, finally forcing the news to national prominence.” Also true. Additionally, Facebook got into trouble for research showing its software can manipulate people, by… manipulating people in experiments on them! It was dumb, unethical, and possibly illegal.
- “Software giants would like us to believe their algorithms are objective and neutral, so they can avoid responsibility for their enormous power as gatekeepers while maintaining as large an audience as possible.” Well, not exactly. It’s true that Facebook and Twitter are flirting with the notion of becoming more like news organizations, but I don’t think they have decided whether they should make the leap. Mostly they are focused on being channels that let them gain larger audiences for their ads, with few if any restrictions.
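The engagement feedback loop described in those quotes (the one part of the piece I agreed with) is simple enough to sketch. Here is a toy Python model I made up for illustration, not anything resembling Facebook’s actual code: posts that earn likes get shown to more people, which earns more likes, while a hard-to-like story quietly disappears. All the names and rates are invented.

```python
import random

def simulate_feed(posts, rounds=5, seed=42):
    """Toy engagement-driven feed: each post's reach in the next round
    is proportional to the likes it earned this round."""
    random.seed(seed)
    reach = {p: 100 for p in posts}  # every post starts with equal reach
    for _ in range(rounds):
        likes = {}
        for post, like_rate in posts.items():
            # each viewer independently likes the post with probability like_rate
            likes[post] = sum(random.random() < like_rate for _ in range(reach[post]))
        pool = sum(reach.values())        # a fixed pool of attention to hand out
        total = sum(likes.values()) or 1
        # redistribute the attention pool in proportion to likes earned
        reach = {p: max(1, round(pool * likes[p] / total)) for p in posts}
    return reach

# A like-able cat picture vs. an important story that is hard to "like".
final = simulate_feed({"cat picture": 0.30, "hard news story": 0.02})
print(final)
```

Run it with different like rates: even a modest gap in “like”-ability compounds into near-total silence for the less clickable post. The software itself stays neutral; the spiral comes from the responses it is fed, which is the GIGO point above.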
In short, like many of the IT think pieces I have seen in the Times, it is filled with wrongheaded generalities and overstatements, plus a few concrete examples buried somewhere in the piece that were likely the thing that generated the idea to write it in the first place. Terrible.
Things I am interested in or working on these days: AI, WebSphere setup, Python, Twitter programming, development in general, configuring Netscalers, cool things IBM is doing, automation, among other things.
- If you have the AI bug and think you want to do some Prolog programming, you need this: What Prolog implementation to choose? What’s fastest? Compatibility?
- Deep Learning is hot in AI. If you want more info, this is good: Deep Learning Tutorials — DeepLearning 0.1 documentation
- Sigh. This debate never goes away in AI: Why AlphaGo Is Not AI – IEEE Spectrum
- More on the hysteria that AI brings: The founder of Evernote made a great point about why AI (probably) won’t kill us all – Vox
- Ignore most AI hysteria, but do read this: What does it mean for an algorithm to be fair? | Math ∩ Programming
- Want to whip up a quick mobile app? Consider: Mobile App Builder – new service now available – Bluemix Blog
- For power users, there’s: How to create an insane multiple monitor setup with three, four, or more displays | PCWorld
- Need virtual images? Take a look at this: Images | VirtualBoxes – Free VirtualBox® Images
- For hardcore WAS users, this is helpful: Installing optional Java 7.x on WebSphere Application Server 8.5 (Application Integration Middleware Support Blog)
- A classic. Anyone tuning WAS needs this: Case study: Tuning WebSphere Application Server V7 and V8 for performance
- Want to learn Python? Write your own Twitter client? Or do both? Then there’s this: How To Build a Twitter “Hello World” Web App in Python | ProgrammableWeb
- More on programming Twitter: How To Use The Twitter API To Find Events | ProgrammableWeb
- Nice little project to try, here: Create a mobile-friendly to-do list app with PHP, jQuery Mobile, and Google Tasks
- Creating Simple Responsive HTML5 and PHP Contact Form | Future Tutorials
- Setting up a Linux system? Then you want to read this: Most secure way to partition linux? – Information Security Stack Exchange
- Want to learn Linux? This is essential! IBM developerWorks : Technical library concerning Learning Linux
- If you are doing performance work on Unix, you will likely use vmstat. Even if you know vmstat, this is good to review: What to look for in vmstat – UNIX vmstat command
- Wow! OS/2 is still alive! OS/2: Blue Lion to be the next distro of the 28-year-old – Yahoo Finance
- Talk about old tech! This makes OS/2 seem fresh! It’s Insane that New York’s Subway Still Runs on This 80-Year-Old Switchboard | Motherboard
- I was doing some work on Netscaler and found this useful in comparing the set up of one Netscaler config with another: Export Netscaler Config – NetScaler Application Delivery – Discussions. This is also useful: Netscaler 9 Cheat Sheet.doc – netscaler9cheatsheet.pdf
- I thought this was a good development for everyone interested in Node: IBM Buys StrongLoop To Add Node.js API Development To Its Cloud Platform | TechCrunch
- A lot has changed with IBM’s OpenPOWER. Forbes gets you up to date, here: IBM’s OpenPOWER: A Lot Has Changed In Two Years – Forbes
- Cool stuff here: Access your Docker-based Raspberry Pi at home from the internet · Docker Pirates ARMed with explosive stuff
- I was using Perl scripts on Linux to send me messages to my mobile device via Pushover. This was good for that: pushover Archives – Perl Hacks
- I was also using WinSCP for that and this helped: Scripting and Task Automation :: WinSCP
- For all those trying to succeed in IT but feeling you are running into a ceiling, you should read this: Tech’s Enduring Great-Man Myth, or this: When It Comes to Age Bias, Tech Companies Don’t Even Bother to Lie | Dan Lyons | LinkedIn
- Linus Torvalds is always interesting, and this is especially good: Linux at 25: Q&A With Linus Torvalds – IEEE Spectrum
- Very cool! Particle | Build your Internet of Things
- And finally some links to good stuff on UML online: Multi-layered web architecture UML package diagram example, web layer depends on business layer, which depends on data access layer and data transfer objects.
Posted in cool, IT
Tagged AI, IT, Linux, netscaler, OS2, performance, PHP, Prolog, Python, SoftLayer, twitter, WebSphere
Bots combined with AI and social networks are going to become an increasing problem. I thought of this when reading about the relatively recent Ashley Madison fiasco. Even if you wouldn’t be caught dead using such a service, this applies to you in other ways.
One of the fascinating aspects of Ashley Madison was just how many bots were employed by the company, at least according to this article: Ashley Madison Code Shows More Women, and More Bots.
How many? A lot! From the article:
After searching through the Ashley Madison database and private email last week, I reported that there might be roughly 12,000 real women active on Ashley Madison. Now, after looking at the company’s source code, it’s clear that I arrived at that low number based in part on a misunderstanding of the evidence. Equally clear is new evidence that Ashley Madison created more than 70,000 female bots to send male users millions of fake messages, hoping to create the illusion of a vast playland of available women.
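Those numbers are worth doing the arithmetic on. A quick back-of-the-envelope check in Python, using the article’s figures of roughly 12,000 real active women versus 70,000 female bots:

```python
real_women = 12_000   # article's estimate of real women active on the site
female_bots = 70_000  # female bots created by the company, per its source code

# Roughly six bots for every real woman...
print(f"bots per real woman: {female_bots / real_women:.1f}")   # prints 5.8

# ...meaning about 85% of the "women" a male user might hear from were software.
share = female_bots / (female_bots + real_women)
print(f"share of 'women' that were bots: {share:.0%}")          # prints 85%
```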
Here are some examples:
This matters to you because chances are you will be interacting more and more with bots. Bots are cheap, and companies and organizations are going to go with them to meet their needs and yours. Maybe the bots will be harmless, like customer service reps that are actually just software programs. However, it is also possible, just as it was at Ashley Madison, that these bots will be customized to con you into thinking you are dealing with a real person, so that you will give them more money in some form or another. Bots may be obvious now, but as AI improves, so will their ability to fool you. It’s not inconceivable that we will spend more and more time interacting with software that we think is human. We need to think about that, talk it over with fellow humans (and not AI-driven bots), and decide what we are going to do if the effects are negative.
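To see why bots are so cheap, here is a crude sketch of the kind of canned-message bot the article describes. This is an illustration I made up, not Ashley Madison’s actual code: a handful of templates plus light personalization is all it takes to send thousands of “personal” messages.

```python
import random

# A few canned openers with a slot for the target's name. Real "engager"
# bots work on the same principle, just with more templates and targeting.
OPENERS = [
    "hey {name}, saw your profile and had to say hi :)",
    "hi {name}! you seem interesting, what are you up to?",
    "{name}, love your photos. chat sometime?",
]

def fake_message(name, rng=random):
    """Pick a random template and personalize it with the user's name."""
    return rng.choice(OPENERS).format(name=name)

# One loop and you can "personally" greet an entire user base.
for user in ["alex", "sam", "jordan"]:
    print(fake_message(user))
```

The point is not the code but the economics: each additional fake conversation costs effectively nothing, which is why a company could run 70,000 of them.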
Robots in the real world may not realistically resemble humans for a very long time. Online bots that realistically resemble humans will get there much sooner. We need to quickly anticipate what positive and negative effects that will have and prepare for that.
Posted in IT
Tagged AI, bots, IT
A year or so ago, a parking lot I use had a human in a booth to take tickets and provide other services. That human booth was replaced by the thing in the photo above.
It’s not a robot and it’s not A.I., but it is replacing humans.
Stories about A.I. or robots taking over work make for interesting reading. But that angle is secondary to the real story. What is really taking people’s jobs is the willingness of people to use technology, and the willingness of companies to replace people with it. People are not afraid to use technology; if anything, they sometimes prefer dealing with technology. That makes it easier for companies to choose technology over people, and if they can save money or make money doing it, so much the better.
It is happening in all sorts of industries, from food to sportswriting. The technology isn’t the driver here: people’s readiness to prefer technology is.
While there is lots of discussion about self driving cars, it’s much more likely that self driving trucks will become standard and accepted first. Here are two stories that support that. First this: How Canada’s oilsands are paving the way for driverless trucks — and the threat of big layoffs. Second, over at Vox, is: This is the first licensed self-driving truck. There will be many more. Key quote from Vox:
Last night at the Hoover Dam, the Freightliner company unveiled its Inspiration Truck: the first semi-autonomous truck to get a license to operate on public roads.
The Inspiration is now licensed to drive autonomously on highways in Nevada. It works a bit like a plane’s autopilot system: a driver will get the rig on the highway, and can take control at any time once it’s there. But the truck will be able to drive itself at high speeds, using cameras to make sure it stays within its lane and doesn’t get too close to the vehicle in front of it.
Self driving trucks are already up and operational. Additionally, the business case for self driving trucks is stronger, and the hurdles are easier to overcome, than for self driving cars in urban areas. Sooner than you think, you will commonly see self driving trucks on highways, especially during the hours when trucks make up 80-90% of highway traffic.
Transportation is changing. Self driving trucks are going to be leading that change. Self driving cars will be a distant second.