Tag Archives: twitter

Some thoughts on the problems Facebook and Google (and even retailers) have with people being awful on their platforms

Google, Facebook, and Twitter are platforms. So are some retail sites. What does that mean? It means they provide the technology for people to create things for themselves. Most of the time, this is a good thing. People can communicate in ways they never could before such platforms existed. Likewise, people can sell things to buyers they never could have reached before.

Now these platforms are in a bind, as you can see in this piece and in other places: Google, Facebook, and Twitter Sell Hate Speech Targeted Ads. They are in a bind partly because of their own approach: they boasted of their ability to use AI to stop such things. They should have been much more humble. AI as it currently stands will only take you so far. Instead of relying on things like AI, they need better governance mechanisms in place. Governance is a cost for organizations, and often organizations don’t put proper governance in place until flaws like this start to surface.

That said, this particular piece has several weaknesses. First up, this comment: “that the companies are incapable of building their systems to reflect moral values”. It would be remarkable for global companies to build systems that reflect moral values when there are conflicts over such values even within individual nations. Likewise the statement: “It seems highly unlikely that these platforms knowingly allow offensive language to slip through the cracks”. Again, define offensive language at a global level. To make it harder still, try doing it across different languages and different cultures. The same thing occurs on retail sites when people put offensive images on T-shirts. On some retail platforms, no one from the company that owns the platform reviews every product that comes in.

And that gets to the problem. All these platforms could be mainly content agnostic, the way the telephone system is content agnostic. However, people expect them to insert themselves and not be content agnostic. Once that happens, they are in an exceptional bind. We don’t live in a homogeneous world where everyone shares the same values. Even if they converted to non-profits and spent far more revenue on reviewing content, there would still be limits to what they could do.

To make things better, these platforms need to be humble and realistic about what they can do, and communicate that consistently and clearly to the people who use these systems. Otherwise, they are going to find themselves governed in ways they will not like. Additionally, they need to decide what their own values are, then communicate and defend them. They may lose users and customers, but the alternative of trying to be different things in different places will only make their own internal governance impossible.

 


Some big changes on Twitter

Two new things: 1) a quality filter and 2) notification settings. While people are talking a lot about the first one, I think the second one might just be the thing most people need. For more details, see this: New Ways to Control Your Experience on Twitter | Twitter Blogs

How the ‘Spicy Boi’ comments on Hillary’s Instagram show the difficulty of dealing with trolls

To see what I mean, read this piece in NYMag, Everyone Is Commenting ‘Spicy Boi’ on Hillary’s Instagram. Note how the social networks cross over the various platforms. The social organization of this activity goes from platform (iFunny) to platform (Twitter) to platform (Instagram). No doubt at some point it will appear on Reddit, 4chan, and who knows where else. It’s very hard to deal with trolls when the people trying to control things are on one platform (e.g., Twitter) while the social groups planning raids are organizing on others.

Three thoughts:

  • The comment section for big accounts on Instagram is next to useless. Why is it even enabled for them? I think Instagram should disable it, or give the account holder the option to disable it.
  • In many cases, the comment sections should be limited to such things as “Likes” or “Thumbs Up” or simple polls.
  • Social media needs to involve either really good AI or (better) really good people to moderate things. It can’t happen soon enough. (For a sense of why naive automation alone isn’t enough, see the sketch after this list.)
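
To make that last point concrete, here is a minimal sketch in Python of the kind of naive keyword filtering that pure automation tends to reduce to. This is entirely my own illustration (no platform works exactly this way), and the block list and function are hypothetical:

```python
# A deliberately naive comment filter. The word list and function are
# illustrative only, not any platform's real moderation code.
BLOCKED_TERMS = {"spam", "scam"}  # hypothetical block list

def flag_comment(text: str) -> bool:
    """Flag a comment if any word in it matches a blocked term."""
    words = text.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

# Trolls adapt faster than word lists, and word lists have no sense
# of context, which is why human moderators are still needed.
print(flag_comment("this is spam"))              # True: caught
print(flag_comment("this is sp4m"))              # False: trivially evaded
print(flag_comment("my favourite spam recipe"))  # True: a false positive
```

Filters like this catch the laziest trolls and little else, which is why really good people (or much better AI than we have) are the missing ingredient.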

The Real Bias Built In at Facebook <- another bad I.T. story in the New York Times (and my criticism of it)

There is so much wrong in this article, The Real Bias Built In at Facebook – The New York Times, that I decided to take it apart in this blog post. (I’ve read so many bad IT stories in the Times that I stopped critiquing them after a while, but this one in particular bugged me enough to write something.)

To illustrate what I mean by what is wrong with this piece, here are some excerpts in italics, followed by my thoughts in non-italics.

  • First off, there is the use of the word “algorithm” everywhere. That alone is a problem. For an example of why that is bad, see section 2.4 of Paul Ford’s great piece on software, What Is Code? As Ford explains: “‘Algorithm’ is a word writers invoke to sound smart about technology. Journalists tend to talk about ‘Facebook’s algorithm’ or a ‘Google algorithm,’ which is usually inaccurate. They mean ‘software.’” Now part of the problem is that Google and Facebook talk about their algorithms, but really they are talking about their software, which incorporates many algorithms. For example, Google does it here: https://webmasters.googleblog.com/2011/05/more-guidance-on-building-high-quality.html (note that at least Google talks about algorithms, plural, not a single algorithm). Either way, talking about algorithms is misleading. It’s software, not algorithms, and if you can’t see the difference, that is a good indication you should not be writing think pieces about I.T.
  • Then there is this quote: “Algorithms in human affairs are generally complex computer programs that crunch data and perform computations to optimize outcomes chosen by programmers. Such an algorithm isn’t some pure sifting mechanism, spitting out objective answers in response to scientific calculations. Nor is it a mere reflection of the desires of the programmers. We use these algorithms to explore questions that have no right answer to begin with, so we don’t even have a straightforward way to calibrate or correct them.” What does that even mean? I think it implies that any software that is socially oriented (as opposed to, say, banking software or airline travel software) is imprecise or unpredictable. But at best that is only slightly true, and mainly false. Facebook and Google both want to give you relevant answers. If you start typing “restaurants” or some other term into Google’s search box, Google will start suggesting answers to you, and those answers will very likely be relevant to you. It is important to Google that this happens, because this is how they make money from advertisers. They have ways of calibrating and correcting this; in fact, I am certain they spend a lot of resources making sure you get the correct answer, or close to it. Facebook is the same way. The results you get back are not random. They are designed, built, and tested to be relevant to you. The more relevant they are, the more successful these companies are. The responses are generally the right ones.
  • “If Google shows you these 11 results instead of those 11, or if a hiring algorithm puts this person’s résumé at the top of a file and not that one, who is to definitively say what is correct, and what is wrong?” Actually, Google can say; they just don’t. It’s not in their business interest to explain in detail how their software works. They do explain in general terms, in order to help people ensure their sites stay relevant (see the link I provided above). But if they provided too much detail, bad actors would game the results and make Google search worse for everyone. As well, too much detail would make it easier for other search engines – yes, they still exist – to compete with them.
  • “Without laws of nature to anchor them, algorithms used in such subjective decision making can never be truly neutral, objective or scientific.” This is simply nonsense.
  • “Programmers do not, and often cannot, predict what their complex programs will do.” Also untrue. If this were true, IBM could not improve Watson to be more accurate, and Google could not have its sales reps convince ad buyers that it is worth their money to pay Google to show their ads. The same goes for Facebook, Twitter, and any website that depends on advertising as a revenue stream.
  • “Google’s Internet services are billions of lines of code.” So what? How is that a measure of complexity? I’ve seen small amounts of poorly maintained code that were very hard to understand, and large amounts of well-maintained code that were very simple to understand.
  • “Once these algorithms with an enormous number of moving parts are set loose, they then interact with the world, and learn and react. The consequences aren’t easily predictable. Our computational methods are also getting more enigmatic. Machine learning is a rapidly spreading technique that allows computers to independently learn to learn — almost as we do as humans — by churning through the copious disorganized data, including data we generate in digital environments. However, while we now know how to make machines learn, we don’t really know what exact knowledge they have gained. If we did, we wouldn’t need them to learn things themselves: We’d just program the method directly.” This is just a cluster of ideas slammed together, a word sandwich of layered phrases that says nothing. It makes it sound like AI has been unleashed upon the world and we are helpless to do anything about it. That’s ridiculous. As well, it’s vague enough that it is hard to dispute without discussing in detail how A.I. and machine learning actually work, yet it sounds knowledgeable enough that many readers will think it has greater meaning.
  • “With algorithms, we don’t have an engineering breakthrough that’s making life more precise, but billions of semi-savant mini-Frankensteins, often with narrow but deep expertise that we no longer understand, spitting out answers here and there to questions we can’t judge just by numbers, all under the cloak of objectivity and science.” This is just scaremongering.
  • “If these algorithms are not scientifically computing answers to questions with objective right answers, what are they doing? Mostly, they “optimize” output to parameters the company chooses, crucially, under conditions also shaped by the company. On Facebook the goal is to maximize the amount of engagement you have with the site and keep the site ad-friendly. You can easily click on “like,” for example, but there is not yet a “this was a challenging but important story” button. This setup, rather than the hidden personal beliefs of programmers, is where the thorny biases creep into algorithms, and that’s why it’s perfectly plausible for Facebook’s work force to be liberal, and yet for the site to be a powerful conduit for conservative ideas as well as conspiracy theories and hoaxes — along with upbeat stories and weighty debates. Indeed, on Facebook, Donald J. Trump fares better than any other candidate, and anti-vaccination theories like those peddled by Mr. Beck easily go viral. The newsfeed algorithm also values comments and sharing. All this suits content designed to generate either a sense of oversize delight or righteous outrage and go viral, hoaxes and conspiracies as well as baby pictures, happy announcements (that can be liked) and important news and discussions.” This is the one part of the piece I agreed with, and it points to the real challenge with Facebook’s software. I think the software IS neutral, in that it is not interested in the content per se so much as in how users respond (or don’t respond) to it. What is NOT neutral is the data it works from. Facebook’s software is as susceptible to GIGO (garbage in, garbage out) as any other software. So if a lot of people on Facebook are passing around cat pictures and the stupid things some politicians say, people are going to respond to them, and Facebook’s software is going to respond to that response. (For a toy illustration of this point, see the sketch right after this list.)
  • “Facebook’s own research shows that the choices its algorithm makes can influence people’s mood and even affect elections by shaping turnout. For example, in August 2014, my analysis found that Facebook’s newsfeed algorithm largely buried news of protests over the killing of Michael Brown by a police officer in Ferguson, Mo., probably because the story was certainly not “like”-able and even hard to comment on. Without likes or comments, the algorithm showed Ferguson posts to fewer people, generating even fewer likes in a spiral of algorithmic silence. The story seemed to break through only after many people expressed outrage on the algorithmically unfiltered Twitter platform, finally forcing the news to national prominence.” Also true. Additionally, Facebook got into trouble for the research it did showing its software can manipulate people by… manipulating people in experiments on them! That was dumb, unethical, and possibly illegal.
  • “Software giants would like us to believe their algorithms are objective and neutral, so they can avoid responsibility for their enormous power as gatekeepers while maintaining as large an audience as possible.” Well, not exactly. It’s true that Facebook and Twitter are flirting with the notion of becoming more like news organizations, but I don’t think they have decided whether they should make that leap. Mostly they are focused on being channels that gain them ever larger audiences for their ads, with few if any restrictions.
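
As promised above, here is the toy sketch of the GIGO point. This is entirely my own illustration in Python, not anything resembling Facebook’s actual software: a feed ranker that never inspects content, only engagement signals, and so faithfully amplifies whatever the audience happens to reward.

```python
# A toy newsfeed ranker: my own sketch, not Facebook's real software.
# It scores posts purely on engagement signals and never reads content.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights; the point is that only reactions are counted.
    return post.likes + 2 * post.comments + 3 * post.shares

posts = [
    Post("cat picture", likes=500, comments=40, shares=120),
    Post("challenging but important story", likes=12, comments=3, shares=5),
]

# The ranker is content-agnostic, yet its output mirrors whatever the
# audience rewards: garbage in, garbage out.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post.text)
```

The code is neutral in exactly the sense I mean: feed it a different audience’s behaviour and the same software would surface entirely different content.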

In short, like many of the IT think pieces I have seen in the Times, it is filled with wrongheaded generalities and overstatements, along with a few concrete examples buried somewhere in the piece that were likely what prompted it to be written in the first place. Terrible.

29 IT links to things I am working on or interested in: AI, Python, Netscaler, automation and more

Things I am interested in or working on these days: AI, WebSphere setup, Python, Twitter programming, development in general, configuring Netscalers, cool things IBM is doing, automation, among other things.

  1. If you have the AI bug and think you want to do some Prolog programming, you need this: What Prolog implementation to choose? What’s fastest? Compatibility?
  2. Deep Learning is hot in AI. If you want more info, this is good: Deep Learning Tutorials — DeepLearning 0.1 documentation
  3. Sigh. This debate never goes away in AI: Why AlphaGo Is Not AI – IEEE Spectrum
  4. More on the hysteria that AI brings: The founder of Evernote made a great point about why AI (probably) won’t kill us all – Vox
  5. Ignore most AI hysteria, but do read this: What does it mean for an algorithm to be fair? | Math ∩ Programming
  6. Want to whip up a quick mobile app? Consider: Mobile App Builder – new service now available – Bluemix Blog
  7. For power users, there’s: How to create an insane multiple monitor setup with three, four, or more displays | PCWorld
  8. Need virtual images? Take a look at this: Images | VirtualBoxes – Free VirtualBox® Images
  9. For hardcore WAS users, this is helpful: Installing optional Java 7.x on WebSphere Application Server 8.5 (Application Integration Middleware Support Blog)
  10. A classic. Anyone tuning WAS needs this: Case study: Tuning WebSphere Application Server V7 and V8 for performance
  11. Want to learn Python? Write your own Twitter client? Or do both? Then there’s this: How To Build a Twitter “Hello World” Web App in Python | ProgrammableWeb (see the sketch after this list for a taste)
  12. More on programming Twitter: How To Use The Twitter API To Find Events | ProgrammableWeb
  13. Nice little project to try, here: Create a mobile-friendly to-do list app with PHP, jQuery Mobile, and Google Tasks
  14. Along the same lines: Creating Simple Responsive HTML5 and PHP Contact Form | Future Tutorials
  15. Setting up a Linux system? Then you want to read this: Most secure way to partition linux? – Information Security Stack Exchange
  16. Want to learn Linux? This is essential! IBM developerWorks : Technical library concerning Learning Linux
  17. If you are doing performance work on Unix, you will likely use vmstat. Even if you know vmstat, this is good to review: What to look for in vmstat – UNIX vmstat command
  18. Wow! OS/2 is still alive! OS/2: Blue Lion to be the next distro of the 28-year-old – Yahoo Finance
  19. Talk about old tech! This makes OS/2 seem fresh! It’s Insane that New York’s Subway Still Runs on This 80-Year-Old Switchboard | Motherboard
  20. I was doing some work on Netscaler and found this useful for comparing the setup of one Netscaler config with another: Export Netscaler Config – NetScaler Application Delivery – Discussions. This is also useful: Netscaler 9 Cheat Sheet.doc – netscaler9cheatsheet.pdf
  21. I thought this was a good development for everyone interested in Node: IBM Buys StrongLoop To Add Node.js API Development To Its Cloud Platform | TechCrunch
  22. A lot has changed with IBM’s OpenPOWER. Forbes gets you up to date, here: IBM’s OpenPOWER: A Lot Has Changed In Two Years – Forbes
  23. Cool stuff here: Access your Docker-based Raspberry Pi at home from the internet · Docker Pirates ARMed with explosive stuff
  24. I was using Perl scripts on Linux to send messages to my mobile device via Pushover. This was good for that: pushover Archives – Perl Hacks
  25. I was also using WinSCP for that and this helped: Scripting and Task Automation :: WinSCP
  26. For all those trying to succeed in IT but feeling like you are running into a ceiling, you should read this: Tech’s Enduring Great-Man Myth, or this: When It Comes to Age Bias, Tech Companies Don’t Even Bother to Lie | Dan Lyons | LinkedIn
  27. Linus Torvalds is always interesting, and this is especially good: Linux at 25: Q&A With Linus Torvalds – IEEE Spectrum
  28. Very cool! Particle | Build your Internet of Things
  29. And finally some links to good stuff on UML online: Multi-layered web architecture UML package diagram example, web layer depends on business layer, which depends on data access layer and data transfer objects.
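
As a small taste of item 11, here is a minimal Twitter “Hello World” sketch in Python. I am assuming the tweepy library here (the linked tutorial takes its own approach), and all the credential values are placeholders you would replace with keys from your own registered Twitter app:

```python
# Minimal Twitter "Hello World" using tweepy (pip install tweepy).
# All four credential values below are placeholders; you get real ones
# by registering an application on Twitter's developer site.
import tweepy

CONSUMER_KEY = "your-consumer-key"
CONSUMER_SECRET = "your-consumer-secret"
ACCESS_TOKEN = "your-access-token"
ACCESS_TOKEN_SECRET = "your-access-token-secret"

# Sign in to the API as your own account via OAuth
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

# Post a tweet
api.update_status("Hello, world!")
```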

Twitter: a bar you used to love and now visit nostalgically

I’ve likely said enough about Twitter. So much so that there doesn’t seem to be much else to say. I wanted to highlight this comic, though (the long, slow death of Twitter | Technology | The Guardian), because it wonderfully sums up the arc of Twitter over the years. It matches my thoughts and feelings about the platform very well.

I still come to Twitter the way you go to a bar you used to love. There aren’t as many friends there as there were before, but there are still some. It becomes as much a visit to experience nostalgia as anything else. But then the shouters and the fighters show up, and you remember why you lost interest in it.

More on the decline of Twitter from a variety of sources

From the New Yorker and Business Insider. A rebuttal here, on Medium, and also on Slate.

My take is a simple one: most people are interacting less on Twitter. That likely leads to people contributing less on Twitter, which leads to a downward spiral. I see this on other social media as well.

The one exception to this decline in interaction is active self-promoters. Self-promoters, whether promoting themselves personally or professionally, still interact regularly with social media such as Twitter. After all, it’s free, and it’s better than doing nothing.

Overall, though, I expect a decline in the use of all kinds of social media until someone invents a form of social media more effective than what we have today. That may be a few years off.