When Is Monitoring Good for You? When You Consent to It

You’ll thank us later…

To make sense of a controversy, I often try to define the two most extreme versions of opposed positions, and examine those. This can help me see what their contrasts really are, and where those are stark. But the risk is that I’ll oversimplify and end up with a sort of cartoon version of the debate. That’s what happened in my last post.

As a number of readers pointed out, that post proffered a false dichotomy about the nature of self-assessment and self-control. In my sketch of the issue, people behave well either because (a) they know they are being watched and don’t want to get caught, and have no insight into why it’s better to behave well or (b) they have taken time alone to reflect on their principles and conduct (and decided, in this magisterial isolation, what they should do). This made it easy to see what bothers me in the idea that more scrutiny will mean less bad behavior.

Spiritually, people who do good only out of fear of getting caught are not being good. They’re just putting on a show, like a chimpanzee smoking a cigar to avoid the master’s whip. And, practically, that good behavior will vanish as soon as there’s a power failure or a system crash down at Panopticon Central. Then, too, there’s the effect on a democratic society. We need to know our fellow citizens are capable of self-management, if we are to trust them with our money and our lives. And if they have no room to make such judgments for themselves, how can we know they’re capable? Relying on transparency is a signal that we won’t or can’t rely on each other’s self-control and self-respect. It’s a recipe for cynicism and mistrust.

I still think there is something to this argument but, as I was quickly reminded (by, among others, Evan Selinger and Michael Hallsworth), the extremes I was pondering don’t map well onto real life. People don’t apologize or express regret only out of fear of abuse. In fact, the kind of serious ethical pondering that I imagined—in which you evaluate, say, your own rudeness and privilege, and resolve to do better by your fellow human beings in the future—is more common after hearing what other people think of you than it is after sitting alone in a quiet room. In other words, being observed and judged are not antithetical to moral autonomy. In many situations, many of us consent to monitoring (or at least don’t mind it) because we want someone, as the phrase goes, “to keep us honest.”

The same goes for self-monitoring and self-management—practices in which one version of the self makes commitments and then enforces them against the backsliding tendencies of other versions of the same self. If you set yourself a goal and commit to being embarrassed if you fail to meet it, you are recognizing that monitoring can help you adhere to your own choice. It’s a way of saying you have a best self to which you want to be true. Pushing yourself to comply doesn’t make you an automaton.

So, to recap: Mea culpa—I oversimplified the psychology of monitoring in my previous post.

And yet…

Note that all the examples I’ve mentioned above share an important trait: They all involve the consent of the person monitored.

That consent need not come in advance. Perhaps I’ll find it awful to be lambasted by hundreds of strangers—or one very cutting and astute friend—and wish very much while it happens that I hadn’t been caught. But if, a week later, I find that I have learned from the experience and been helped to be in some way a better person, I could decide in retrospect that I had been done a service.

However, there are many circumstances in which I might not. For example, if the sanction for my rude tweet is that I lose my job and my home, I might feel, quite reasonably, that I am a man more sinned against than sinning. No insight into myself there—I am too distracted by the unfairness inflicted on me. Or I might simply and sincerely not agree with the condemnation (who wants questions of morals settled by majority vote?). Or I might be troubled by the fact that the chastisement comes not from a trusted mentor, nor from a circle of friends, but from strangers who obviously want to hurt, rather than instruct. When there is no consent to surveillance and judgment—when it is experienced as an out-of-all-proportion attack by unconcerned strangers—then, I think, we are in the cartoon world I sketched. The world where you get death threats from people you don’t know. The world where you act contrite just so people will stop retweeting that stupid joke you made last week. A world where scrutiny and judgment may make you vow never to get caught again, but offer you no insight into ethics or your self.

I’m anxious that such a world may come into being, if only because there are people who very much want it to. Noah Dyer, for example, has said “if I knew the guy downstairs was beating his wife … he’d need privacy in order to do that. In a world without privacy, we’d also know he’s searching for her information. If there was a restraining order, we’d know he was doing things that showed an intention to violate that restraining order. We could prevent abuse in the first place.” (Putting his (or maybe your) money where his mouth is, Dyer has launched a Kickstarter campaign to support a “year without privacy” in which he’ll live in complete transparency. You can read about that in this piece by Woodrow Hartzog and Evan Selinger, which has a video at the end where you can hear from Dyer himself.)

For a “world without privacy” to work fairly, consent could play no part. There could be no opt-out; everyone would have to participate in the general openness. And without the ability to consent—to choose whether one will be monitored, and by whom one will be judged—the moral benefits of surveillance disappear. So, yes, the world we know includes plenty of people who are willing to be observed and judged by others, for their own moral betterment. But a world of total transparency doesn’t.

The Right Deed for the Wrong Reason

Until a few days ago, I didn't know who Britt McHenry was. Now I do—not through her day job at ESPN, but rather through her surveillance-enabled, Web-driven disgrace. If you don't know her story, you know one very like it: McHenry was brutally rude to a tow-pound employee. A surveillance camera caught her tirade and some footage of it ended up on a website. The Internet pounced. McHenry later tweeted: “in an intense and stressful moment, I allowed my emotions to get the best of me and said some insulting and regrettable things,” which sounds about right to me. Who hasn't done that? However, this is 2015, and McHenry didn't get the time to scrutinize and evaluate herself in private. Instead, she was hoisted up onto the virtual pillory of Internet scorn.

That's life in the 21st century. Surveillance now is not just imposed by the state on the citizenry, à la 1984. It's also a practice citizens impose on one another (and on agents of the state), with cameras and social media. More and more of what we do and say—to say nothing of what we tweet and post—is available for others to see and (more importantly) to judge instantly.

Pondering this obviously huge shift in the way people now live their lives (and, specifically, McHenry's story), Megan Garber made an argument the other day that puzzled me. We behave better when we know we are being watched, she wrote, therefore being watched is not all bad. Woe unto the two-faced and the slackers and sliders, because technology makes it, as Garber wrote, “harder to differentiate between the people we perform and the people we are.” Wealthy celebrities will have to think twice about insulting lowly service workers. More importantly, cops will, we hope, hesitate to abuse prisoners when body cams are recording their every move. Who would say that's not good?

Twenty or thirty years ago, a lot of people would have. The assumption that underlies Garber's claim would have been, at the very least, debatable. But in 2015 it is considered to be obviously true, and she spends no time examining it. Surveillance has been around so long that we accept its premises even when we argue about it.

That assumption is this: All that matters is what people do, not why they do it. That is the justification when we use monitoring to ensure compliance with any rule, be it basic courtesy, professional standards, adherence to the law or obedience to a moral code. If a viral video of my bad behavior subjects me to global contempt, you can be fairly sure that I won't make that mistake again. But you can't be sure that I won't want to. You won't know if I have reflected on my behavior and understood that I “let my emotions get the best of me,” or if I'm just avoiding an unpleasant ordeal. I myself may not understand why it is so important that I comply. All I need to know is that nonconformity will be revealed and punished.

That is what works, without the murky, unmeasurable complications that would ensue if you had to get me to reflect and decide for myself. And what works is what is being deployed all around us. At the office there are keystroke monitors to make sure employees stay on task. Online there is insta-shaming to make sure you don't use any word or phrase that your tweeps consider un-PC. Even in the privacy of your own lived life, there are thousands of apps you can use to monitor and shame yourself into eating less, exercising more, saving money, or spending less time on Facebook.

These technologies are oriented toward measurable results: hours saved, pounds lost, cigarettes unsmoked, clients contacted and so on. In that, they express the ideology of our time, which can also be seen driving the turn in government away from explicit appeals to reason in favor of “nudges,” and a similar turn in business toward marketing via big-data prediction, social media or other avenues that bypass conscious reflection. It doesn't matter what you think or feel, it only matters what you do.

Now, this assumption can be justified in a variety of ways. One is that in some circumstances, where life and limb are at hazard, it is entirely appropriate not to care what people are thinking. It is so important that police not violate civil liberties, for example, that we can reasonably say we don't care if they're cool with the concept. Don't Get Caught (And You Will Be) is a crude but effective way of ensuring as little death and damage as possible. But this claim doesn't justify the Internet shaming of celebrities or the use of software to make sure employees don't bounce over to eBay in the office. The cost of a violation there is too low.

For the vast majority of other situations in which we accept monitoring tech to guarantee courtesy or conscientiousness, the justification is the same as you hear for most tech: It just makes life easier, you know? Why struggle with yourself about going the extra mile at work, when a social app that reveals your performance to colleagues is sure to motivate you? For that matter, why agonize about eating too much when you can use a special fork-gadget to let you know you are eating too fast? As Evan Selinger has put it, letting the monitors decide is a form of outsourcing. And outsourcing is about making life “seamless” and “frictionless,” to use the developer buzzwords.

The problem with this justification, of course, is that when we remove work and friction from life, we lose as well as gain. Selinger has criticized apps that “outsource intimacy” on this basis. When you set up an app to text your significant other, you save time and effort that you actually needed to spend to be engaged with that person. You shouldn't avoid the work because the work is the point. In these cases, it most certainly does matter what people think and feel as they perform an act. These are the times when doing the “right thing” without insight or self-awareness is a moral catastrophe, as T.S. Eliot famously put it:

The last temptation is the greatest treason:

To do the right deed for the wrong reason.

I think our fast-evolving methods of surveillance and shaming have the same flaw as the apps that outsource intimacy. When we monitor others to make sure they behave—as when we monitor ourselves to make sure we behave—we are outsourcing the work of self-government.

Instead of asking people to decide for themselves, imperfectly as ever, what they should and should not do in carrying out their jobs, we trust the cameras. Instead of affording McHenry her chance to examine her own behavior and come to terms with her conscience, we shame her into an apology. Did she mean it? Does she even know? Her chance to figure that out was taken from her. I can't speak for her, but if that had happened to me, I know I would be the poorer for it. My sense that I am different from the person people can see—that I have in me mysteries, hope and surprise—would be diminished. That is what it means to no longer “differentiate between the person I perform and the person I am.” And it is a terrible thing.

Guess who knew that? Back in the bad old days, when only governments had the power to engage in mass surveillance, the spymasters of oppressive states understood it very well.

When there was still a Czechoslovakia and it was run by Communists, the security forces there tapped phones and bugged apartments of dissidents. One day, to torment the writer Jan Prochazka, they took recordings of his chats with friends and family and broadcast them on the radio. Prochazka was devastated. After all, as Milan Kundera wrote of the incident, “that we act different in private than in public is everyone's most conspicuous experience, it is the very ground of the life of the individual.” Is that worth giving up, to be sure semi-celebrities behave themselves?

“Other Knows Best” — What This Blog Is About

[Image: a shirt reading “Facebook would like to access your heart rate,” with Cancel and OK buttons]

This blog is about how people make fewer and fewer decisions by and for themselves, and how that fact will change what it means to be human.

In a few years, here’s what middle-class life will look like: Your car will drive itself; your refrigerator will decide on its own when to order more milk; City Hall will imperceptibly nudge you to save money and avoid elevators; Amazon will tell you what you want to buy before you know you need it. At work you’ll be monitored and measured (with, for example, wearable cameras and keystroke counters) to prevent deviation from company norms, even as more of your moment-to-moment decisions will be “assisted” by algorithms. Meantime, your exercise, sleep, eating and other intimate details will be turned into data (thanks to gadgets you eagerly bought), to help you manage yourself in the same way that others are managing you. Moreover, you will face consequences for having “bad numbers” (no exercise? higher insurance rates for you!). And even intimate chores by which you express yourself—texting a friend, choosing which photo to swipe right on in Tinder—will not be left to you, as apps and gadgets take up the work.

None of these changes is far off or theoretical. The policies and devices that will create them already exist.

Technological and economic change are alienating millions of people from a collection of assumptions that they once took for granted: that we can know ourselves, and act rationally with that knowledge, and that because of these facts we are entitled to autonomy, privacy and equal treatment from institutions and businesses. Autonomy is often described as personal self-government, or “the condition of being self-directed,” as the philosopher Marina Oshana has put it. But as these technologies and policies come online, it is reasonable to ask, what will be left to direct?

Many people will welcome at least some of these changes, for good reason. Who doesn’t want safer cars and fewer road deaths? Who is opposed to helping people stay healthy for more years? After a terrible crime, who doesn’t feel relief to know that a suspect was captured on a surveillance camera? I, for one, have come to think that I am not and cannot be the best judge of whether I am biased, or affected by racist or sexist ideology. I do not think police officers are always the best judges of the actions of other police officers either.

In sum, I, like (I think) most onlookers, tend to support monitoring, surreptitious control, predictive technology and reduced decision-making power when I think I might benefit, and when it limits the judgments of people whom I do not know, either personally or as a group. On the other hand, like most onlookers, I want to defend autonomy when I can imagine my own decision-making limited, denied or disrespected by others. After all, I don’t want to be spied on, or treated like a collection of data points.

This is why the changes I want to track and reflect on here are, I think, inevitable. They don’t appeal to all of the people all of the time, but each manifestation of the “other knows best” mentality appeals to enough of the people, enough of the time, to advance.

And that’s what this new blog is about: Self-directedness, self-awareness and self-control in the era of surveillance, personal-data crunching, predictive technology and the new tools they make available to governments, businesses and individuals. How do the moments of celebration (“surveillance cams and Twitter caught the bad guy!”) relate to moments of alarm (“I don’t want them to be able to spy on me!”)? We are moving from the 20th-century model (self-aware decision-makers responding to explicit attempts at persuasion) to a 21st-century model (people whose choices are outside their awareness, coping with invisible attempts to influence them). How will we, should we, manage the transition?
