On political consequentialists vs political deontologists (with a reference to Churchill and Andor)

In politics it is common to talk of political pragmatists vs political ideologues. Less common is to talk of political consequentialists vs political deontologists. The two pairings seem to overlap, but there is a key difference: political consequentialists and political deontologists make their decisions from an ethical viewpoint.

I was thinking of political ethics this week during a discussion about whether the Democratic Party in the U.S. should accept money from Elon Musk. As someone who is more of a political consequentialist, I thought: of course they should take his money, especially because it could help them win control of the U.S. government, and for starters they could reverse the changes he has made. Then I read others who argued they would not take money “from a guy who does a Hitler salute” (i.e. is evil). I get that argument: they think they have a duty never to ally with someone as bad as Musk, and they must believe they can get money from elsewhere that does not conflict with their political duties.

There are pros and cons to either ethical approach to politics. I tend to take a deontological approach when the consequences are difficult to measure, and a consequentialist approach when the possible outcomes are measurable. For example, thinking like a political consequentialist, I might not vote for a corrupt or anti-democratic politician, even if voting for them would lead to good short-term outcomes, because I believe there are potentially larger bad outcomes that come in the long term from having corrupt and anti-democratic politicians in power. But that's a complicated calculus. Thinking as a political deontologist, I would simply not vote for a corrupt or anti-democratic politician because I have a moral obligation to support only those people who are not corrupt and who support democracy.

People can be on the same side of the political aisle and still argue. Sometimes they argue over the practicality of something. But sometimes they will be arguing for ethical reasons. Something to watch for.

P.S. More on the difference between the terms consequentialism and deontology here. Also this piece, which adds virtue ethics to the mix.

P.P.S. The photo is of Churchill walking through Coventry. The moral question there was: if you have access to the secret communications of your enemies and you know they are going to bomb a certain city on a certain day, do you warn the people of that city, knowing that by doing so you risk losing your access and potentially lengthening the war? It's a question that also comes up in the TV series Andor, where one character (Luthen Rael) sacrifices 31 men in order to keep hidden the fact that he has an informant inside the Empire he is fighting.

On the ethics of the pig heart transplant

David Bennett Sr. has died, two months after receiving a genetically modified pig's heart. Like any transplant operation, this one involved ethical decisions. If you are an animal rights activist, you have even more ethical decisions to think about. But this particular transplant brings in an even broader range of ethical considerations, which is obvious once you read this: The ethics of a second chance: Pig heart transplant recipient stabbed a man seven times years ago.

I generally have faith in medical professionals to make the right ethical choices when it comes to transplants. I think he should have received the transplant, and that a transplant from a pig is acceptable. But read about it yourself and see what you think.


You cannot learn anything from AI technology that makes moral judgments. Do this instead

Apparently…

Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.

What can I say? Well, for one thing, I am embarrassed for my profession that anyone takes that system seriously. It’s a joke. Anyone who has done any reading on ethics or morality can tell you very quickly that any moral decision of weight cannot be resolved with a formula. The Delphi system can’t make moral decisions. It’s like ELIZA: it could sound like a doctor but it couldn’t really help you with your mental health problem.

Too often people from IT blunder into a field, reduce its problems to something computational, produce a new system, and yell “Eureka!” The lack of humility is embarrassing.

What IT people should do is spend time reading and thinking about ethics and morality. If they did, they'd be better off. If you are one of those people, go to fivebooks.com and search for “ethics” or “moral”. You will learn something from those books. You cannot learn anything from the Delphi system.

P.S. For more on that Delphi system, see: Can a Machine Learn Morality? – The New York Times.

(Photo by Gabriella Clare Marino on Unsplash )

Some good philosophy links for amateur thinkers

These are all links I’ve come across recently and thought worthwhile:

If you are not used to reading philosophy, the first one is a must-read. Otherwise, you may find yourself trying to read philosophy in a way that leaves you frustrated.

I've seen frequent references to virtue ethics (as well as Stoicism) these days: if you aren't familiar with it, that link is a good starting point to get to know it.

Finally, the last link is useful if you are new to philosophy and want to know it better but find it hard to get started.

(Image from http://uucch.org/morning-philosophy-group)