Revisiting “Is Technology Neutral?” Seven Years Later

Recently I was surprised by something another Forum contributor told me. I learned that “Is Technology Neutral?”, an article I published in October 2011, was our most-viewed post ever.

I was astonished. Other Forum articles have elicited more comments. Others have addressed topics of broader appeal. I’m not even sure it was the best-written piece we’ve ever published! Why the interest, then?

I wondered how general Internet searches might have played a role, so I did what any good researcher would do: I googled it. If you ask Google, “is technology neutral,” my article is the second search result, out of 433 million pages.

Is this really surprising, though? If anything has intensified in the last seven years on the global scene, it is the pace of technological change. People feel it keenly.

In America we’re eighteen months removed from a presidential election. Our political talking points include Facebook’s moral obligations to its users. We’re concerned about the ability of bad actors to influence elections and hack into vital infrastructure. Daily we hear of self-driving cars coming into the market. People fear what automation in the workplace will mean for them. These are just a few technological challenges facing America and other nations.

I want to revisit the question I broached in 2011. Certainly many philosophers, engineers, and authors have addressed the character of technology. Yet the more attention we give to this specific question the more likely we’ll discard some of the generalizations, stereotypes, and clichés that cloud our thinking.

Clarifications

When thinking about technology we must consider the ancient idea behind the root term techné. This refers to the concept of design, craft, or technique. Something doesn’t have to be modern, electric, or Wi-Fi-enabled to be a ‘technology.’ Any manmade tool could be considered a technology. We make things to accomplish some kind of purpose, whether to provide shelter, facilitate our labor, or entertain ourselves. We’re addressing a need or desire. There is intent behind our creations.

Simultaneously, our tools or technologies make possible new circumstances. A farmer organizes his days differently if he has a GPS-equipped tractor instead of two mules. He now farms 5000 acres instead of fifty. That changes things.

With an interstate system, visiting relatives two states away is a very different proposition than taking state highways. A four-hour drive versus a ten-hour drive changes plans, as well as the way we think about how close our in-laws are!

Technological wisdom isn’t just about avoiding addiction. No, technology changes our lives in ways we see, and others we don’t. How we then address the question of neutrality becomes critical. Part of our inability to understand this issue lies in the term itself. What counts as neutral, anyway?

“Neutral” or “neutrality” yields a few definitions in dictionaries. Generally someone is said to be neutral if he is impartial or unbiased in a dispute. A machine is said to be in neutral if its main gear is disengaged. During war, then, Switzerland is neutral; it’s not involved. It takes no sides.

While neutrality might be adequate to describe Switzerland’s official political position with respect to participants in a war, is it adequate in other ways? Even if Switzerland refuses to take sides, it’s not inert. It occupies 15,500 square miles and has over 8 million souls within its borders. It makes decisions that have a bearing on the central European economy. Imagine an alliance of European countries responding militarily to large-scale genocide being perpetrated by a rogue European nation against a smaller, neighboring nation. Does Switzerland’s decision to remain neutral not make some kind of moral statement about its valuation of human life?

When we use “neutral” to describe our engagement with technology, we’re actually letting ourselves off the hook. We’ve decided the only relevant question to ask of technology is, “Do I have good or bad intentions with how I’m going to use this tool?”

Many authors recognize this shortsightedness. Authors ranging from Sherry Turkle and Nicholas Carr to distinctly Christian voices such as Tony Reinke and Andy Crouch have all written about the various effects of contemporary technology. I’m encouraged by such publications, but only to an extent. As with many things, sudden critical attention to a cultural artifact sometimes proves as self-defeating as beneficial. If even a reasonable person feels that something they’re deeply dependent upon (knowingly or not) is being challenged, the “pile-on effect” numbs them to the warnings. They can accept, in principle, that the critique has legitimacy, but the perceived overreach of the critique actually closes them off to serious reflection. Alarmism, ironically, often does as much to reinforce people in their beliefs and habits as it does to awaken them to action and change.

Amid wide-scale discussion about what technology is doing, I’d like to offer a brief meditation on where I think the discussion should go. Particularly, let’s explore two aspects of technological engagement: agency and responsibility.[1]

Agency

Philosophers and sociologists sometimes use the term “agency” to speak of action, specifically, people’s ability to act upon their world. Without getting too technical, our agency is being both eroded and redefined in our technological society.

If you read a tech article, most likely it will sound like this: “Automation is changing the way we interact,” or, “Man fired by a machine.” The title will probably be terse, assertive, and clever. But look closer. One gets the sense from such articles that technology is a blind force that we’re simply reacting to. We are being acted upon. We are passive in the matter. According to many economists, journalists, and politicians, technology is basically like weather. We cannot stop the hurricane that is developing off the Atlantic. We just have to brace for its inevitable landfall.

Our language reflects a view of the world, and reinforces that view. The fact that we view technology the way we view weather suggests, at least at a gut level and at the level of appearances, that we have little to no agency. We just have to wait and hope we have the resources to endure whatever its howling winds and crashing waves do to us, our families, our communities, and the economy.

I doubt most people would quite accept this grim picture if they were asked, “Can we shape our technological future?” Most would assert our ability to make good decisions about what we are inventing, how we’re using it, and what restrictions and regulations to impose via legislation. But how most of us write and talk sounds much different: “People are being pushed out of jobs by automation.” “Artificial intelligence is coming, whether we want it to or not.” This type of grammar reveals that we really do think we’ve lost much of our agency; it also reinforces the sense that we are losing control.

What’s ironic is that technology was historically thought to be a tool to give us mastery over the world. Science, a precondition of technological advance, has been seen as a practical enterprise, at least since Francis Bacon. Yet as we near the quarter mark of the twenty-first century, our language suggests that we’re not sure we’re in control.[2]

Responsibility

These tensions and contradictions lead to questions of responsibility. What are we supposed to do? Answering this assumes we have some kind of meaningful agency left in a technological age.

It’s strange that in the most litigious country in the world few even wonder aloud whether we should consider legislative measures to limit certain innovations that might not be decidedly beneficial to the American populace. Of course, history is filled with examples of us doing things solely because we can, and only asking later if we should have done them, or should continue to do them. I raise the legal question because it forces us to pause and ask what we should do. Entailed within questions about our agency or ability to act is that of what we ought to be doing.

We live in an age where a programmer can create an algorithm that is then able to run a system in a nearly self-sufficient way. But it’s only when an undesirable or harmful outcome eventually happens—perhaps years later—that affected persons look for a culprit. But who’s to blame? The person who marketed and sold the program? The company that manufactures it? The person who invented it? The user who read the Terms & Conditions? (Well, they were available to be read.)

Even though the technological age seems to have eroded human agency, an inescapable moral impulse survives. That moral impulse feels that behind every virtual act exists a moral agent (even when there isn’t a corresponding keystroke for every act). That process may seem blind and amoral, even neutral. But it’s never neutral when we deal with human lives.

Anyone who takes technology seriously must remember Marshall McLuhan. McLuhan, among others, raised some of the most pointed questions about modern media decades ago. One of his statements is worth lingering on: “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.”[3] We may feel that technological change is inevitable, and we are partially correct. But we cannot concede. We cannot surrender to the idea that we are incapable of examining our world and our lives to consider how to engage technology and how technology engages us.

God has made us. And though we can make things, too, and those things in turn make new worlds, God calls the sons of Adam to try to discern the differences between Eden, Babel, and the New Jerusalem. This discernment will help us to understand which city we’re building.

____________________

[1] There are many more valid areas of consideration, including privacy and bias, but I will reserve my thoughts to these two.

[2] Another irony is the semantic connection between “automation” and “autonomy.” Only in late modernity are those who make things not autonomous. What they make is.

[3] Marshall McLuhan and Quentin Fiore, The Medium is the Massage: An Inventory of Effects (New York: Random House, 1967).

Note: The three articles hyperlinked in the latter half of this article were taken from The Convivial Society, no. 5 (https://tinyletter.com/lmsacasas/letters/the-convivial-society-no-5-action). Thanks to Michael Sacasas for his excellent work at www.thefrailestthing.com. Also, see his excellent article on some themes related to this essay at The New Atlantis.

Author: Jackson Watts
