To make sense of a controversy, I often try to define the two most extreme versions of opposed positions, and examine those. This can help me see what their contrasts really are, and where those are stark. But the risk is that I’ll oversimplify and end up with a sort of cartoon version of the debate. That’s what happened in my last post.
As a number of readers pointed out, that post proffered a false dichotomy about the nature of self-assessment and self-control. In my sketch of the issue, people behave well either because (a) they know they are being watched and don’t want to get caught, and have no insight into why it’s better to behave well or (b) they have taken time alone to reflect on their principles and conduct (and decided in this magisterial isolation about what they should do). This made it easy to see what bothers me in the idea that more scrutiny will mean less bad behavior.
Spiritually, people who do good only out of fear of getting caught are not being good. They’re just putting on a show, like a chimpanzee smoking a cigar to avoid the master’s whip. And, practically, that good behavior will vanish as soon as there’s a power failure or a system crash down at Panopticon Central. Then, too, there’s the effect on a democratic society. We need to know our fellow citizens are capable of self-management, if we are to trust them with our money and our lives. And if they have no room to make such judgments for themselves, how can we know they’re capable? Relying on transparency is a signal that we won’t or can’t rely on each other’s self-control and self-respect. It’s a recipe for cynicism and mistrust.
I still think there is something to this argument but, as I was quickly reminded (by, among others, Evan Selinger and Michael Hallsworth), the extremes I was pondering don’t map well onto real life. People don’t apologize or express regret only out of fear of abuse. In fact, the kind of serious ethical pondering that I imagined—in which you evaluate, say, your own rudeness and privilege, and resolve to do better by your fellow human beings in the future—is more common after hearing what other people think of you than it is after sitting alone in a quiet room. In other words, being observed and judged is not antithetical to moral autonomy. In many situations, many of us consent to monitoring (or at least don’t mind it) because we want someone, as the phrase goes, “to keep us honest.”
The same goes for self-monitoring and self-management—practices in which one version of the self makes commitments and then enforces them against the backsliding tendencies of other versions of the same self. If you set yourself a goal and commit, publicly, to being embarrassed if you fail to meet it, you are recognizing that monitoring can help you adhere to your own choice. It’s a way of saying you have a best self to which you want to be true. Pushing yourself to comply doesn’t make you an automaton.
So, to recap: Mea culpa—I oversimplified the psychology of monitoring in my previous post.
Note that all the examples I’ve mentioned above share an important trait: They all involve the consent of the person monitored.
That consent need not be given in advance. Perhaps I’ll find it awful to be lambasted by hundreds of strangers—or one very cutting and astute friend—and wish very much while it happens that I hadn’t been caught. But if, a week later, I find that I have learned from the experience and been helped to be in some way a better person, I could decide in retrospect that I had been done a service.
However, there are many circumstances in which I might not. For example, if the sanction for my rude tweet is that I lose my job and my home, I might feel, quite reasonably, that I am a man more sinned against than sinning. No insight into myself there—I am too distracted by the unfairness inflicted on me. Or I might simply and sincerely not agree with the condemnation (who wants questions of morals settled by majority vote?). Or I might be troubled by the fact that the chastisement comes not from a trusted mentor, nor from a circle of friends, but from strangers who obviously want to hurt, rather than instruct. When there is no consent to surveillance and judgment—when it is experienced as an out-of-all-proportion attack by unconcerned strangers—then, I think, we are in the cartoon world I sketched. The world where you get death threats from people you don’t know. The world where you act contrite just so people will stop retweeting that stupid joke you made last week. A world where scrutiny and judgment may make you vow never to get caught again, but offer you no insight into ethics or your self.
I’m anxious that such a world may come into being, if only because there are people who very much want it to. Noah Dyer, for example, has said “if I knew the guy downstairs was beating his wife … he’d need privacy in order to do that. In a world without privacy, we’d also know he’s searching for her information. If there was a restraining order, we’d know he was doing things that showed an intention to violate that restraining order. We could prevent abuse in the first place.” (Putting his (or maybe your) money where his mouth is, Dyer has launched a Kickstarter campaign to support a “year without privacy” in which he’ll live in complete transparency. You can read about that in this piece by Woodrow Hartzog and Evan Selinger, which has a video at the end where you can hear from Dyer himself.)
For a “world without privacy” to work fairly, consent could play no part. There could be no opt-out; everyone would have to participate in the general openness. And without the ability to consent—to choose whether one will be monitored, and by whom one will be judged—the moral benefits of surveillance disappear. So, yes, the world we know includes plenty of people who are willing to be observed and judged by others, for their own moral betterment. But a world of total transparency doesn’t.