Resisting Techphobia: Ethics of Tech
To wrap up this mini-series on resisting techphobia, let’s look at some general questions about the ethics of technology.
Inherent Moral Value
It’s a standard ethical debate: does technology inherently have any moral value? Is a gun inherently evil or is it only evil when a person uses it to hurt another person? Does it become good if used to kill an animal to feed your starving family (before the upper-middle class contingent responds, not everyone can afford grocery store prices)? And if it can change so easily depending on our use, did it really hold that moral value to begin with or was it just a neutral tool?
Of course, it is true that the medium is the message. There are certain obvious things about what a gun is designed to do: shoot things. You don’t have to shoot things with it, and you definitely don’t have to shoot people, but shooting is its main intended function. What if it is an assault rifle that you wouldn’t use to hunt, meaning that its only real value is in killing people? Does that sole function make it morally evil in and of itself, or is it still only evil when used for that purpose?
The same goes for social media. Yes, it is mediated communication through screens. That carries with it the function of communicating from a distance. Does mediated communication make it inherently evil because nobody really thinks it is as good as face-to-face communication? Or does it make it good because communication is the basis of relationships, and mediated communication is better than none? Or is it neutral, depending entirely on how we use it?
If you can’t tell, I’m a believer in the neutrality of technology, at least most technology. Even in cases like the assault rifle, we had to differentiate it from guns in general to get to a point where we didn’t see a good use for it. Is that fair or realistic? Do we have to treat them as a group, since having one inevitably leads to the other anyway in our broken world?
Let’s close this section with a principle I picked up at a recent seminar on a Kingdom approach to technology: taken to the extreme, a technology often becomes the opposite of what it was intended to do. For example, cars were designed to help us get around faster. Taken to the extreme, where everyone drives everywhere, we get traffic jams and actually move slower. Social media is designed to connect us with people. Taken to the extreme, where it replaces face-to-face communication rather than supplementing it, it will probably end up disconnecting us from people.
So again, does that make social media good because of its original purpose, bad because of what happens at the extreme, or neutral and up to how we use it? Couldn’t we simply say, “Watch yourself so you don’t end up at the extreme”?
Be Discerning, not Afraid
My answer, in short: yes, you should definitely be discerning about the effects of the way you use technology, whether a shovel or a car or a state-of-the-art smartphone. No, you should not be afraid that society will crumble around you. Technology will change the world, and people will learn new social etiquette for how best to handle it. Many have already figured this out; others never will.
Being paranoid doesn’t help. I have yet to encounter anybody who has abandoned social media because of these viral videos. Instead, they point at somebody else whom they think better exemplifies the extreme, and they may or may not have a point about that person using it poorly. Then they go on with their lives until the next viral video comes around and gives them an opportunity to shame those other people some more. Of course, that shaming doesn’t work, because those people are also thinking of somebody else whom they have judged to be the worst.
So how about some discernment instead? Rather than fear, which usually pushes us to judge others for ruining our world, we could try asking the tough questions about our own use. With how I use technology, what hurts the goal of loving everyone? What helps that goal? Are there ways to stop the hurting without losing the helping? If not, do the positives outweigh the negatives, or vice versa? Those are the kinds of questions I think we should be asking.