The People We Don’t Really See

A neighbor of ours does some housework for us twice a month. We pay her the going rate, and as far as I know we’ve never imposed on her to do anything especially surprising, or anything beyond what we’d agreed to—some vacuuming, counter-washing and tidying up, that sort of thing.

Our relations are friendly. Yet I never talk about her. When I discuss my life, my home, our family’s ups and downs, this woman — who spends much more time in the place than most of our friends — never comes up. I am not giving her name here because I don’t have her permission to put her in pixels. Well, that’s part of it, and it applies to her first name. I am not revealing her last name because I don’t know it.

Now, it may well be that this intimate yet invisible relationship strikes her, as it did us, as a fair and reasonable trade, freely entered into. Money for work, the basic capitalist deal. But still, there is a fundamental inequality to our relations. She is doing work we don’t want to do, freeing us up to do things that we prefer—things that are more fun, or that earn some money, or maybe both. But we never switch roles. We’re never over at her house, filling her dishwasher. 

And so I have to wonder whether we aren’t, in some sense, taking advantage of an unearned privilege. Our family has more money and more education and a language advantage. Would she clean our house if circumstances put her in a position to make money doing something else? Maybe she would. Maybe she wouldn’t. It’s not an answerable question. Circumstances are what they are.

Thinking of this smiling, friendly woman, whom I like but whom I can’t call a friend, I think of all the unmentioned people—almost all women—whom I have not discussed, and not remarked on, as I made my way through middle-class life in these past few decades. There was a part-time nanny. People who came to clean. In a less intimate circle, more men, but again, people whom we would engage to do a job and then ignore. They were in my home but not of it. And, fine, very likely most did not want to be. But the fundamental inequality here — people doing work I chose not to do, doing work for me that they could otherwise do for their own families, their own children — seldom gave me pause.

I grew up without a lot of money (no one working for us) and so I only learned this culture of invisible helpers in my 20s. You have something draining, repetitive, hard, emotionally heavy (like tending to a small child or a sick old person)—well, you find someone willing to do the work, and hire them. And then don’t talk about them. Or talk about them as a resource. Years ago, in Tennessee, I met the woman who had helped to raise my then-wife. The former nanny/housekeeper was an African-American lady of somber mien and few words. She’d been ill, my in-laws told a neighbor. “Ah, yes,” he drawled, “the same thing happened to ours.”

Occasionally, our caretakers, our burden-bearers — I almost wrote “hidden,” but they aren’t hidden, they are right there, we just don’t see them the way we see our children or our neighbors—do come up in conversation with people we treat as peers. Usually, it’s because something has gone wrong (“the nanny quit with no notice!”) or because we want to follow convention and we aren’t sure what it demands (“there’s a blizzard and she can’t get here, do I still have to pay her?”). 

Now and then the subject is praise for their services. (Like the time I heard some neighborhood fathers get into how pleasant it was to come home because their servants did such a good job putting the house in order — “but when you come home early, wow! it’s chaos, am I right?”) 

As a general rule, though, we keep these people invisible. And I find I think very little about how I behave with them. So, for example, I try to have dishes out of the sink and toys put away when our neighbor comes to clean, because (in theory) it’s more efficient for her to do the bigger stuff we never get to, and because (in reality) I feel like there are intimate, grinding daily life things we should do for ourselves. However, if “things have gotten out of hand” or “I just don’t have time” then I leave her with the mess. Because … well, I can. She won’t complain. No one else will either. 

These thoughts have been prompted by this amazing piece in The Atlantic, “My Family’s Slave,” by the late Alex Tizon, a Pulitzer Prize-winning reporter who, in his 50s, decided to come to terms with the fact that the woman who raised him was, effectively, enslaved. In the Philippines, Eudocia Tomas Pulido had been taken into the family by his grandfather and tasked with raising the author’s mother. She was then brought to America and forced to work for the family for generations. She raised Tizon and his siblings. She raised Tizon’s children. She had no life of her own. And his parents thought this was fine.

So he grew up in a house in which Pulido was imprisoned and abused—because that was how things were, because this sort of thing was done by families like his. And even though he saw the injustice early in life, his loyalty to his family—especially his mother—held him in check until finally, in middle age, he tried to make some amends.

It’s possible to read this piece as the exotic and strange story of an immigrant family. And it’s possible to read it as a metaphor for American capitalism or for the position of women in patriarchy. It is possible to condemn Tizon for not doing more, in life and in print, to give Pulido her stolen life back. There is much to be said about it. It is unsettling, and my own thoughts about it are in flux. 

But one thing I think the essay inevitably does is prompt its readers to think about their own invisible caretakers, and the ways in which their family’s advantages shaped the relationship. A vast amount of work undergirds the lives of people who can read The Atlantic. The people doing that work—the nannies, housecleaners, handymen—are doing it because they are at some kind of disadvantage. It might be education, migration status, money, social capital, capital-capital, or some combination of those. So it’s worth asking if arrangements that seem well and good to us — because no one complains, because that’s how things are, because someone has to pick up little Jake while I am at work — aren’t at least a little exploitative. And if that shouldn’t affect how we see and how we treat other people whom we now barely see at all.


When People Believe They’re Both Responsible for Everything and Incompetent at Everything, Privacy Will Die

Imagine driving down a street and hearing impatient honking behind you. Annoying, right? But there might be extenuating circumstances. Perhaps there's a woman in labor in that car. Or someone who could miss her plane because of all the traffic from that accident two miles back. On the other hand, that honker could have reasons that make him more annoying than you'd expect. Maybe that driver is cheating on a life partner and rushing to get home so an alibi will hold up. Maybe it's a car full of goons out to collect protection money for a guy named Ice Pick Willie.

The way life is currently organized, those details are unknowable. So the reasonable assumption is that anyone honking at you has an average sort of motive. This is a sound application of the Copernican principle, which holds it most likely that there's nothing special about the place and time in which you are observing anything. So whatever you're perceiving is probably in the average range for its type. The driver behind you is unlikely to be 4 feet tall, and equally unlikely to be 7 foot 2. Analogously, that driver is unlikely to have an extremely good or extremely bad reason to badger you.

An alternative to the Copernican approach to others is, of course, to use stereotypes. But a stereotype like "Southerners are casual about time" or "old people are forgetful" is still a rough generalization, in which you invoke (supposed) facts about a category of people to explain the behavior of an individual. You have to do this because you don't know enough about the person in front of you and their motivations of the moment.

This lack of information about individuals and their circumstances is a veil of ignorance, hiding us from each other. And that veil has been, I would argue, a good thing. Unless we're invoking a bad stereotype, the veil causes us to see each person who imposes on us as an equal, and to treat his motives as more or less valid, and to refrain from judging him.

What happens, though, if the veil of ignorance is removed? If it becomes possible to know, in real time, scores of relevant details about what made the person honk? This is the prospect raised by the Internet of Things, that near-future Web over which phones, cars, highways, fitness trackers, offices and stores will all talk to one another. Soon, when the IoT is part of daily life, as Daniel Burrus noted here, concrete in a bridge will detect ice and alert your car to the hazard. Then the car "will instruct the driver to slow down, and if the driver doesn’t, then the car will slow down for him."

In other words, the main subject of all this digital talk among things will be the humans they serve and monitor. I think it's unlikely that we'll let them keep all that information to themselves. If you have a woman in labor in your car, the car will know—and so, if you are idling in the way, will you. On the other hand, if that person behind you is just worried about getting to lunch before they seat someone at her favorite table, then you'll know that too. And you'll judge your fellow human accordingly.

What brings this to mind is the changing semantics of cars bashing into one another. As Matt Richtel reports here, traffic-safety advocacy groups are campaigning to rebrand the "car accident." That term invites people to think that colliding with another car or plowing into a ditch are events that just happen, like lightning strikes or heavy rain. In fact, almost all car crashes are caused by people's decisions. Driving while drunk, driving while texting, driving while asleep and other choices are the cause of "accidents," which kill some 38,000 people a year on American roads.

So police departments around the U.S. have been replacing the blank veiling concept of "accident" with the word "crash." The authorities want to shift to a concept that nudges people to see drivers as responsible for what happens to them. In this, they are like their colleagues elsewhere in government, who are trying to reclassify other experiences that people used to see as fate. Heart attacks and diabetes don't just happen, we're told—they're the consequences of choices we make about food and exercise. Catastrophic global warming isn't something that's occurring by accident; we're making it worse by flying and wasting electricity. If you're poor in old age, don't blame capitalism—you were supposed to be saving for retirement!

You might think that this trend will encourage people to take more responsibility for their own conduct, and thus help them not only to avoid accidents but to see themselves as empowered agents of their own lives and authors of their own fates.

I think, though, that the long term consequences will be the opposite.

This is because today's message of individual responsibility coincides with an equally powerful message of individual incompetence. The psychologists who have the attention of our media and our leaders are the ones who say that people make mistakes all the time, about everything: what to eat, what to buy, what to fear, what to plan for. Driving is no exception: One of the most commonly cited justifications for the coming age of self-driving cars is that way fewer people will die on the road. Driving, like so many other skills, will soon be—if it is not already—an activity that machines are better at than humans.

When you combine the message of individual responsibility with the message of human incompetence, the only consistent stance left is that people ought to let the machines do the work. The responsible thing to do is to cede control. Let the cars drive themselves, and thousands of people will live, who would have died at the hands of human drivers.

But when we cede control, we give up information. The controlling system needs it to make the decisions we once had to make. So, for example, an effective transport system will need to know if you're going to the hospital with a woman in labor, or just worried that they'll run out of Chunky Monkey at the ice cream parlor. And once the machine knows it, why should it keep that information a secret?

So the Internet of Things is likely to shred the veil of ignorance that forces us to see all claims on us as equal. In any given confrontation, instead of assuming every person is an average human being with average reasons for his behavior, we will be able to tell the good reasons from the bad, the Samaritans from the selfish cheaters. And another layer of privacy—the privacy to decide for yourself how much of a hurry you should be in, how much you should impose on others—will be gone. When we combine the presumption of accountability with the presumption of incompetence, we're signing up for a transparency that is so complete, and so different from life today, that we can scarcely imagine the world we are creating.


How Post-Rational Politics Works

Like a lot of journalists and politicos, I have been pondering David Samuels' mind-blowing profile of White House speechwriter Ben Rhodes. It's the best illustration I've ever seen of the practical consequences of what can sound like an abstract theoretical problem.

All White Houses have attempted to sway public opinion (Jack Shafer has a great piece here recapping such campaigns from Truman forward). But Shafer's emphasis on continuity—Obama is the same as his predecessors, so there!—seems a bit off to me. As Shafer notes, earlier White House opinion-shaping efforts, whatever their sly stratagems, were at some level explicit appeals to the principles and interests of the citizenry. Americans then knew what the White House was claiming, and they knew it was the President's team that made the claim.

That's no longer the case in the 21st century. As David Axelrod puts it in Samuels' piece: "It’s not as easy as standing in front of a press conference and speaking to 70 million people like past presidents have been able to do." Now, Samuels writes, "the most effectively weaponized 140-character idea or quote will almost always carry the day, and it is very difficult for even good reporters to necessarily know where the spin is coming from or why."

This is a qualitatively different process from 20th century politics. It is one in which no one knows who crafted "the narrative" or why, except the master crafter and his minions. George W. Bush, selling intervention, and Barack Obama, selling non-intervention, differ from their predecessors here. They do so in much the same way (and for the same reasons) that Obama differs from earlier Presidents in choosing policies that "nudge" people into making better choices in health, finance and other areas.

Like nudge, invisible spin assumes that people are irrational much of the time — that they take action not in response to explicit arguments and clear self-understanding, but rather in response to perceptions and emotions that they themselves do not understand. This is a departure from the way we imagine politics is supposed to work: “I mean, I’d prefer a sober, reasoned public debate, after which members of Congress reflect and take a vote,” [Rhodes] said, shrugging. “But that’s impossible.”

I don't mean to suggest that we should be nostalgic for a bygone age of sagacity and selfless wisdom. Politics in this country has never been Ciceronian. It has always involved threats and horse-trading as much as it did arguments of sweet Reason. (Considering that the Roman Republic was run a lot like the Mafia, not even Cicero was Ciceronian.) But even low maneuvering is, at least, explicit—a gambit like "vote for this and we'll put a base in your district" is an appeal to rational thought and self-awareness. A gambit like "everyone knows this is the right way to see it, just look at Twitter" is an appeal to unconscious social motivations and anxieties. It is not the same old thing. It is different.

Bureaucrats across the world are accepting that people are not rational (and therefore should be unconsciously nudged rather than explicitly persuaded to do the right thing). So why be surprised that their politician-masters have accepted that people aren't rational when it comes to selling policies that the bureaucrats will then enact?

Yet this raises a problem: How can a Republic that supposedly depends on rational debate continue after its government has accepted that this is not how people "do" politics? How to avoid a Soviet-style hypocrisy, in which not a single actor in a public debate — politicians, policymakers, advocates, nonprofits, journalists — believes what he or she says about it?

I have no idea how to answer those questions, but, as I said, they are becoming less theoretical and more practical with every passing day. Samuels' piece dramatizes that very effectively, which is one reason I admire it.

I was also impressed with its craftsmanship.

Many years ago, the New York Times Magazine assigned me to write a profile of a DC advocacy organization, and I went down to Washington as an outsider. As I wrote the piece, I tried not to get swept up in DC conventions. Instead, I tried to remain the outsider, contemplating the odd folkways of the capital—the insipid blather that passes for repartee, the greasy-pole climbing, the transformation of major public problems into trivial stories of personal feuds and favor-trading. I looked at it all from the outside, as I guessed a normal reader would. I thought (I think correctly) that this was something of a departure from standard journalism about "the process," in which the point seemed to be to sound like a member of the club.

The editors killed the piece, without discussion. They wanted knowing, insidery journalism, not something that, by sounding human, would make the magazine look amateurish.

So, all credit to Samuels and his editors, for producing the best story about politics for non-pole-climbers that I have ever read. Clearly, being in the White House did not turn his head. He captures the banality of the political mind, its unimaginative second-rateness (the story begins with its subject's decision that writing for politicians is obviously a better thing to do than pursuing his art). But Samuels also captures how alarming it is when the President, the person in charge of Washington, feels a normal person's contempt for the place. I have a lot in common with people who dislike the culture of medicine. But I wouldn't want my doctor to be one of those people.

This is a paradoxical and ambivalent take-away, a literary achievement, rather than a political one. A lot of those attacking the article don't get this. They want to interpret it as if it were inside the system of spin and game-playing that it describes.

For instance, it seems many people are offended by Rhodes' contempt for "the Blob"—the foreign policy network of think tanks, academics, wise men, officials etc. who made (or endorsed) so many bad decisions over the years.

A big theme in the piece is that there is a difference between knowing what you are talking about and being a sanctified expert. Rhodes is the former sort of person, formed by deep confrontation with a set of facts. The people he disdains are the other type—formed by mentors, networks, training in how to think and sound like the right sort of person. That such experts, and their friends, dislike the idea that their experience isn't so impressive should not be surprising. But the thesis that expertise is overrated is eminently defensible. Should I care that it offends experts? I don't think so.

Another line of attack interprets the piece as more typical journalism: A move in the game of persuasion and counter-persuasion. Since the article takes place in and around the White House, it can't be taken at face value—as an account of one human being engaged with some alarming changes in our politics. Instead, it must be read as an attack on the Iran nuclear deal. But that deal is simply the specific story that illustrates both the psychology of one interesting fellow and a broader social trend.

A third criticism leveled at Samuels is that he exaggerates his subject's importance. For instance, here, Dan Drezner makes the point that Rhodes' effort made a great deal less difference in the politics of the Iran deal than Samuels suggests.

This is an inevitable hazard of magazine profiles, which I like to think of as the "biopic problem." You expect a movie about someone's life to make contact with the major historical developments that took place in that lifetime. But people's lives are often touched tangentially and irregularly by history. Even someone present at the crucial meeting may have been distracted by his gout that morning. So it is hard to line up the milestones of an individual's life with the milestones of history. This is why, in biopics, we get odd scenes, like Josephine Baker singing The Times They Are a-Changin' in nightclubby feathers and satin. (OK, we've worked in the '60s.)

I've written a lot of "profiles" for magazines, so I am familiar with the magazine version of the biopic problem. Almost inevitably, your subject looms larger in the picture than she would in a Tolstoyan grand overview. You compensate for this problem by proffering something of interest that is peculiar to your subject. In effect, you are saying, I know others will say my guy didn't invent the Internet, but you aren't reading this to find out who invented the Internet.

I think this is what Samuels was trying to do. His narrative is Rhodes-centric because Rhodes was his subject, not because he wants you to believe that Rhodes matters more in history than John Kerry or the Washington press corps. He has a different story to tell: What Washington game-playing looks like to a normal person, and how it is moving further away from the Enlightenment model of rational individuals debating in good faith.

ADDENDUM: Unsurprisingly, some of the complaints about this piece have raised substantive issues. I don't think they detract from its overall success as a piece of writing about politics, but they shouldn't be ignored.

First, Samuels didn't disclose anywhere that he is a longstanding opponent of the Iran nuclear deal for which he describes Rhodes working. Jeffrey Goldberg has the goods here. A number of writers were quick to point this out, including Robert Wright and Fred Kaplan. This raises two questions: Why didn't Samuels own up to the skin he had in the game? And why, given his record, did the supposedly crafty and disciplined Obama White House give him such access to its inner workings?

Second, as Goldberg explains in his post (others have too), Samuels frames quotations about some journalists to make it sound as if those people were simply sock puppets for the Administration. Goldberg is offended by this for a number of reasons. One is that he is one of the writers tarred unfairly. Another is that he and Samuels have some bad blood between them, which Samuels doesn't disclose.

As I said, I don't think these points detract from the larger import of the story, which is about the way politics has moved from explicit and rational arguments to a sort of post-rational style of policy debate. But they are troubling.


Want People to Behave Better? Give Them More Privacy

Would you tell your six-year-old there's no Santa Claus, and then describe exactly how you bought and wrapped all the Christmas gifts? If your spouse asked who was on the phone, would you say "it's your best friend about the surprise party we're planning for your birthday"? Would you tell your boss about your plans to test your new idea, with its 50-50 odds of success, at work next Monday?

If you answer "no" to these questions, then you have a healthy and normal appreciation for the value of hiding some facts, from some people, some of the time. It's part of life as an autonomous adult to make these hidden pockets of information, which give people room to try new things, to make difficult decisions, to protect others or to make them happy. There are many good, even noble, reasons for this kind of version-control. When you protect the boss from blame for your gamble, it's a form of loyalty. When you don't disclose where the Christmas presents come from, it's a form of loving care.

So why do people often throw this instinct out the window when they think about companies, governments and other institutions? Individuals who protect their personal privacy nonetheless assume that in groups of people, total openness must be the most efficient, effective and only morally right way to operate.

As a consequence, even privacy's defenders often concede that it's a kind of sand in the gears—gears that would turn better if nothing were hidden. We only accept the sand, they say, to preserve other things we value. We could catch terrorists if the government spied on all communications, but you don't want cops reading your email. You'd get more out of workers if you tracked their every move 24/7, but you don't want companies snooping on you in the bathroom. "Commercial aviation would be safer if we were all required to fly stark naked," the journalist Nicholas Kristof wrote some years ago, "but we accept trade-offs — such as clothing — and thus some small risk."

This kind of argument is so familiar that it's a kind of shock to wonder if it's true. Yet it's just these sorts of unquestioned assumptions about privacy that need examining as privacy comes under increased pressure from increasingly powerful technology. It is well worth asking: Must a government or business lose competence when it gives people some power to decide what is known about them, and who knows it?

Ethan S. Bernstein, a Harvard Business School professor, expected to find that the answer is yes. He had, as he wrote a few years ago, accepted what he calls "the gospel of transparency" in business. But his research on companies in China, the U.S. and elsewhere changed his mind. He found, in fact, that giving workers a certain measure of privacy led them to perform better than their more heavily monitored peers.

There's a natural state of heightened attention to the self when we know we're being watched, Bernstein notes. "Our practiced responses become better," he told me, "our unpracticed responses become worse." So actions that have been drilled by the boss may well turn out better when everyone believes the boss is watching. On the other hand, for behavior that isn't already learned—where the best response needs unselfconscious focus on the problem, and the chance to try something new without fear—being watched makes things harder. Attention that could have gone to one's actions goes, instead, to managing the appearance of one's actions.

The "gospel of transparency" declares that this is not a problem, because workers should stick to management's script. But in one vast Chinese factory that Bernstein studied, workers who craftily deviated from standard procedure often improved the plant's productivity.

For example, in a Bernstein study that embedded Chinese-speaking Harvard undergrads as workers in the factory, the students soon discovered something surprising about behavior that looked to outsiders like employees just chatting and fooling around. Within each team, workers were quietly training others to do their jobs. That way, the team could keep things moving on a day when someone was missing or falling behind.

Yet the workers were determined to keep their actions secret. In the plant, where all workers were visible to anyone, and whiteboards and computer terminals displayed data about how they were doing, experienced employees quickly taught Bernstein's undercover researchers how to hide their deviance when an outsider swung into view. It was a triumph of craftiness, he recalls. "They weren't necessarily hiding specifically from managers. They were also hiding from their peers as well."

Why wouldn't the workers want management to know they were, in effect, helping the company? The answer is obvious to anyone who has ever been an employee. Their innovations weren't in their job descriptions. When they were being watched, they had to play to their audience.

Workers were well aware of this paradox, and they didn't like it. One of Bernstein's observers overheard a worker say the team would do better if they could be hidden from the rest of the factory.

Bernstein decided to test that idea. He got the factory management to curtain off—literally, with a curtain—four of 32 production lines that made mobile data cards. Over the next five months, those curtained lines were 10 to 15 percent more productive than the exposed ones.

Team members had a kind of collective privacy—they were hidden from the constant scrutiny by management and other workers, even though within their shared workspace they were still visible to each other. The effect, Bernstein writes, was to shrink the size of the surveillance "audience," and confine it to people the workers had a personal connection with. This kind of "privacy within team boundaries," he says, has been associated with better results in many workplaces, from Google to hospital emergency rooms.

Of course, in the modern workplace, observation doesn't just mean literally being watched. It also includes data being relentlessly collected and analyzed. Many a boss has used these "digital bread crumbs," as Bernstein calls them, to help determine whether employees get raises, reprimands or termination notices. But there's a privacy-preserving alternative: Limit who can see the data, and limit how it can be used to affect employees' lives.

For example (as I mention in a piece that will soon be out in the May/June issue of PT) many trucking companies use cameras that automatically record a driver whenever there's sudden braking, swerving or speeding up. But in one company Bernstein studied, the videos never go to management and are not used in performance reviews (unless the driver is doing something dangerous, like texting at the wheel). Instead, a team of coaches, whose only job is to help drivers improve, receives the videos. Drivers, he says, like and trust that the system is there to help them, because it keeps their mistakes within a trusted circle of people who are not wielding power over their lives.

Another form of privacy that enhances work is to reduce the degree to which every decision is observed by management. So, for example, a retail chain that uses an algorithm to assign work shifts now lets individual store managers revise the algorithm's schedule without having to clear the decision with headquarters. After it granted its local managers this modicum of privacy, the chain's profits rose.

Yet another form of restraint on surveillance is a decision not to monitor time as closely as possible. Though the technology is in place now to measure exactly how each minute of work time is used, many organizations decline to track that closely. Giving employees hours, days or even months in which to work without close scrutiny, Bernstein writes, has enhanced productivity instead of harming it.

In instituting these four forms of privacy—privacy within team boundaries, privacy limits on employee data, privacy in decision-making, and privacy about time—the organizations Bernstein studied refused the temptation to observe (or try to observe) everything. That refusal did not cost them profits or effectiveness. Instead, respect for privacy enhanced their success.

That suggests that privacy's defenders should not concede that total surveillance is safest (or most efficient or most profitable), before going on to say that it would be creepy to have to fly naked. That's important in a society where monitoring technology is ever cheaper and ever more powerful, and the notion is spreading that surveillance, and the data it generates, can solve any problem. Privacy, so often depicted as the enemy of efficiency in public life, can be its friend.

Want People to Behave Better? Give Them More Privacy

How U.S. Colleges Teach Their Students To Be Obsessed With Identity

When I was an undergraduate at Yale in the late 1970s, I had an acquaintance who awarded himself more freedom of speech than I'd thought any human needed. No scatology, no deranged opinion, no drugged-out story, no wild insult was beyond the pale for him. He thought it, he said it. No taboos. His talk exhilarated and shocked me all the time. But one day in conversation I mentioned his trust fund.

Suddenly, he was as prim as he was angry. How dare I say such a thing? That's pretty funny, coming from you, I answered, and looked around for an Amen from the rest of the table. But the rest of the table was silent.

In that room, it was I who was the pariah. I had denied the centerpiece of our campus ideology. I had said we were not all the same, and that privilege mattered. That was Not Done. The affiliations that were supposed to matter on campus were those invented on campus, not the ones we had arrived with.

Such is the ginned-up tribalism of the American college, whose purpose is to convince students that there is something essential, mystical and important in their membership in the extended family that is their university. College students in this country are bathed in this strange brew, which convinces them against all reason that there is something vitally important in the distinction between Harvard and Princeton, or Georgia and Georgia Tech, or Penn State and Pittsburgh. Why, then, are we surprised that they're obsessed with identity? Or that they invest such insanely high expectations in their identity as students of this or that particular school?

In my first semester at Yale, we were frequently told that the hardest part—getting in—was behind us. Having been admitted, you were encouraged (in speeches, images, college lore and songs) to think you would now be as free as any of your fellow students to enjoy your "bright college years, with pleasure rife, the shortest gladdest years of life." (That's from one of the songs you learn your first September at Yale, which, oddly enough, is set to the tune these gentlemen are singing before they are drowned out by another song.) In this lovely vision, there was no room for the notion that some could not partake as fully as others.

Of course, there was dissent. Organizations for minorities helped students deal with the disconnect between the university's Utopian pronouncements and the realities of life for many minority students, and related these stresses to American history and politics. But most of my fellow freshmen were more eager to create new identities than to deepen the ones they had arrived with.

So Yale, as I experienced it, was a psychic matryoshka of clans within clans — as a Yale student you looked askance at Harvard and Princeton; as a resident of Jonathan Edwards residential college you snickered at those geeks next door in Branford (another residential college that looked much the same); as a member of one singing group or sports team or publication you felt viscerally your rivalry with the others. By their first October the new arrivals on campus were caught up in interlocking identities that were all the more important to them because, like the amulets of an exotic cult, they meant nothing to most anyone else.

All this was intended to get us to think less often about the loyalties we'd come with, and more about the ones we shared with other members of the tribe; and it succeeded. But like most converts to a new creed, we were intensely invested precisely because we were recently invested. Having abruptly placed our trust in these new identities, we were vulnerable to feelings of disappointment and betrayal.

My guess is that this deliberate forging of febrile new identities continues today, and that it is part of the reason that protesters at Yale and elsewhere have sounded a bit deranged to older people. Wait, you're mad because you don't feel totally at ease in your dorm? You're screaming because you think other people feel they belong here, and sometimes you don't? What?

Remember, before you start complaining about Kids Today, that these students didn't invent the expectation that college would be a utopia of solidarity, where their status as a student would outweigh centuries of history. That bill of goods was sold to them by the colleges they attend.


Making Sense of Yale’s Protests

Evelyn Waugh was a xenophobe typical of his time and place. But he was also a great novelist, which means there is more truth in his portrait of racial attitudes than in many a pious PC tract. In his 1928 novel, Decline and Fall, there is a telling moment in which a rich, fun-loving socialite named Margot Beste-Chetwynde is preparing to drop her African-American lover.

“I sometimes think I’m getting rather bored with colored people,” she says to another society lady. After all, she adds a few lines later, “they take a lot of living up to; they are so earnest.”

When I first read these sentences, I felt a vast distance from this acid portrait of narcissism. Nowadays, I’m not so sure. The explicit racism is alien to my time and place, but the self-absorbed approach to other people is not. In my student years in the 1970s I admired and supported those who engaged in the struggle for racial equality—but I felt, when I did so, that I was putting on a burden of awareness and self-policing. When I was aware that I was carrying that burden, I was all too pleased with myself. And when I tired of that weight, I knew, like Margot Beste-Chetwynde, that I could shrug it off.

It wasn’t a mask, exactly—it’s not as if we well-meaning white students turned into white supremacists when we weren’t “on.” (We would never, for example, have said we were simply bored with the struggle.) What we turned into, rather, was the kind of person who didn’t engage with issues of race.

I don’t mean to say that some of us chose to be engaged while others chose not to (that common trope of activist rhetoric). I mean that within many of us enlightened and well-meaning white students, engagement with racial injustice waxed and waned. That ability to wax and wane was licensed by our skin color. When I hear talk of “white privilege,” I think of this license, and the way it created a divide between us and the non-white fellow students whom we congratulated ourselves for supporting. We dealt with the consequences of the country’s racial history when we felt like it. They dealt with it all the time.

As Waugh observed, people free to ignore racial injustice are not absolved just by choosing to attend to it now and then. You can choose to get involved for reasons that are about you, that have nothing to do with the people whose trials you are being so good to notice.

It is this kind of inequality that is at issue at Yale and other campuses across the country (the ones fortunate enough not to have to deal with old-fashioned "macro-aggression," outright racism, of which there is still plenty). The students there are groping for a kind of conversation about race, in which all participants meet as genuine equals. That means meeting as people who all feel, all the time, the same inescapable burden.

In a sense, then, the protesters are indeed seeking to curb other people’s freedom—specifically, the freedom white people have to take up and put down the weight of racial injustice.

It’s human nature to confuse this kind of constraint with repression. Some protesters have helped confuse matters further by shrugging off fairness and free expression as if they were unimportant. (Among the Yale students’ demands, for example, are that two administrators be fired for honestly stating their opinions, and that an allegation of racism against a fraternity be taken as proved, rather than investigated.)

Still, an attempt to change norms of behavior is not the same as an attempt to ban free expression.

A campus culture in which very few people want to dress up as Injun warriors or crazy Muslim terrorists is not a police state. After all, already on those campuses very few people would dress as wily conniving Jewish bankers or bloodthirsty baby-slaughtering American soldiers. That’s not because such costumes have been banned, but because norms against stereotypes and denigration apply to Jewish students and to the military. Why, the protesters want to know, do those norms protect the feelings of some people on campus, and not others?

That is a fair question, even as it is equally fair to point out that it cannot be answered on a campus where people aren’t allowed to speak their minds. The world the protesters imagine would have no room for Margot Beste-Chetwynde, or her modern descendants. That’s a world that would weigh more heavily on people who don’t want to be bothered. But it’s a world that would be more just.

It’s fair to say, then, that the protesters need to think more carefully about free speech. And that they should recall that while they fight insidious racism there are other campuses dealing with the explicit kind, which involves swastikas and death threats. I’d agree. But it’s not fair to say that they’re being childish, or spoiled, or trivial.


The Connection Between Deep Dream and the Alienness of Algorithmic “Thoughts”

Last month, a team at Google got a tremendous amount of well-deserved attention when they tweaked their image-recognizing system to draw objects. They had the system revealing, and then exaggerating, the features that it uses to tell one object from another. If a layer of the network was identifying something as a face, say, the researchers’ algorithm fed that back to the machine as a face, thus increasing the importance of whatever features it had used to decide what it was. The resulting Dalí-esque “Deep Dream” images have turned up all over the web, especially since the researchers made their code available. You can even run your own photo through the Deep Dream process here.

But something else about the original post caught my eye. At the time I read it, I was at work on this piece, which was published yesterday in Nautilus. In it, I discuss recent efforts to figure out what machine learning algorithms are doing when they make weird, inhuman mistakes, like identifying some TV static as a cheetah. Neural nets devise their own rules to decide whether to connect a particular input to a label like “cheetah.” And what the research has made clear is that the rules (which we can’t really access) are clearly not the same as those that humans use to decide what is what.

In their post discussing “Inceptionism,” the Deep Dream work, the researchers—Alexander Mordvintsev, Christopher Olah and Mike Tyka—also mentioned that they could use their technique to shed light on just this issue. In some of their experiments, they reversed the usual neural-net image-identifying process. Instead of asking their system to find (to take one example) a dumbbell, they asked it to take random static and “gradually tweak the image” until it produced a dumbbell. The result, they explained, revealed the features the algorithm considered essential to label something a dumbbell. Here is one of those images:


Like all the others the machine produced, it included some human flesh. Which means the algorithm seems to think that the dumbbell and the arm that lifts it are somehow one thing.

I wanted to get this into the Nautilus piece, but it ended up cut for reasons not worth going into here. So I’ll make the point in this blog post: The Inceptionism work at Google has turned up the same kind of artificial-intelligence weirdness that I wrote about. You may think the essential features of a dumbbell include weight, shape and maybe color. Google’s neural net thinks one essential feature of a dumbbell is a bit of human flesh. That’s not a reason to panic, but it is mighty intriguing.
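That “gradually tweak the image” step is, at bottom, gradient ascent on a class score: start from noise, work out which pixel changes would raise the network’s score for “dumbbell,” and nudge the pixels in that direction, over and over. Here is a minimal sketch of the idea, with a toy linear scorer standing in for Google’s trained network; every name in it is illustrative and nothing here comes from the researchers’ actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a single linear layer scoring one class.
# A real implementation would backpropagate through a trained net.
w = rng.normal(size=64)

def score(img):
    """How strongly the toy scorer 'sees' the class in this image."""
    return float(w @ img)

def dream(steps=200, lr=0.1):
    img = rng.normal(scale=0.01, size=64)   # start from near-static noise
    for _ in range(steps):
        grad = w                  # d(score)/d(img) for the linear scorer
        img = img + lr * grad     # gradient ascent: tweak image to raise score
        img = np.clip(img, -1.0, 1.0)       # keep "pixel" values in range
    return img

noise = rng.normal(scale=0.01, size=64)
dreamed = dream()
# After tweaking, the image should activate the "class" far more than noise.
```

In the real work the gradient comes from backpropagating through many convolutional layers, and the researchers add constraints to keep the results looking like natural images; the fleshy dumbbells are what falls out when you let that process run.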


When Is Monitoring Good for You? When You Consent to It

You’ll thank us later…

To make sense of a controversy, I often try to define the two most extreme versions of opposed positions, and examine those. This can help me see what their contrasts really are, and where those are stark. But the risk is that I’ll oversimplify and end up with a sort of cartoon version of the debate. That’s what happened in my last post.

As a number of readers pointed out, that post proffered a false dichotomy about the nature of self-assessment and self-control. In my sketch of the issue, people behave well either because (a) they know they are being watched and don’t want to get caught, and have no insight into why it’s better to behave well, or (b) they have taken time alone to reflect on their principles and conduct (and decided, in this magisterial isolation, what they should do). This made it easy to see what bothers me in the idea that more scrutiny will mean less bad behavior.

Spiritually, people who do good only out of fear of getting caught are not being good. They’re just putting on a show, like a chimpanzee smoking a cigar to avoid the master’s whip. And, practically, that good behavior will vanish as soon as there’s a power failure or a system crash down at Panopticon Central. Then, too, there’s the effect on a democratic society. We need to know our fellow citizens are capable of self-management, if we are to trust them with our money and our lives. And if they have no room to make such judgments for themselves, how can we know they’re capable? Relying on transparency is a signal that we won’t or can’t rely on each other’s self-control and self-respect. It’s a recipe for cynicism and mistrust.

I still think there is something to this argument but, as I was quickly reminded (by, among others, Evan Selinger and Michael Hallsworth), the extremes I was pondering don’t map well onto real life. People don’t apologize or express regret only out of fear of abuse. In fact, the kind of serious ethical pondering that I imagined—in which you evaluate, say, your own rudeness and privilege, and resolve to do better by your fellow human beings in the future—is more common after hearing what other people think of you than it is after sitting alone in a quiet room. In other words, being observed and judged are not antithetical to moral autonomy. In many situations, many of us consent to monitoring (or at least don’t mind it) because we want someone, as the phrase goes, “to keep us honest.”

The same goes for self-monitoring and self-management—practices in which one version of the self makes commitments and then enforces them against the backsliding tendencies of other versions of the same self. If you set yourself a goal and commit here to be embarrassed if you fail to meet it, you are recognizing that monitoring can help you to adhere to your own choice. It’s a way of saying you have a best self to which you want to be true. Pushing yourself to comply doesn’t make you an automaton.

So, to recap: Mea culpa—I oversimplified the psychology of monitoring in my previous post.

And yet…

Note that all the examples I’ve mentioned above share an important trait: They all involve the consent of the person monitored.

That need not be prior consent. Perhaps I’ll find it awful to be lambasted by hundreds of strangers—or one very cutting and astute friend—and wish very much while it happens that I hadn’t been caught. But if, a week later, I find that I have learned from the experience and been helped to be in some way a better person, I could decide in retrospect that I had been done a service.

However, there are many circumstances in which I might not. For example, if the sanction for my rude tweet is that I lose my job and my home, I might feel, quite reasonably, that I am a man more sinned against than sinning. No insight into myself there—I am too distracted by the unfairness inflicted on me. Or I might simply and sincerely not agree with the condemnation (who wants questions of morals settled by majority vote?). Or I might be troubled by the fact that the chastisement comes not from a trusted mentor, nor from a circle of friends, but from strangers who obviously want to hurt, rather than instruct. When there is no consent to surveillance and judgment—when it is experienced as an out-of-all-proportion attack by unconcerned strangers—then, I think, we are in the cartoon world I sketched. The world where you get death threats from people you don’t know. The world where you act contrite just so people will stop retweeting that stupid joke you made last week. A world where scrutiny and judgment may make you vow never to get caught again, but offer you no insight into ethics or your self.

I’m anxious that such a world may come into being, if only because there are people who very much want it to. Noah Dyer, for example, has said “if I knew the guy downstairs was beating his wife … he’d need privacy in order to do that. In a world without privacy, we’d also know he’s searching for her information. If there was a restraining order, we’d know he was doing things that showed an intention to violate that restraining order. We could prevent abuse in the first place.” (Putting his (or maybe your) money where his mouth is, Dyer has launched a Kickstarter campaign to support a “year without privacy” in which he’ll live in complete transparency. You can read about that in this piece by Woodrow Hartzog and Evan Selinger, which has a video at the end where you can hear from Dyer himself.)

For a “world without privacy” to work fairly, consent could not be considered. There could be no opt-out; everyone would have to participate in the general openness. And without the ability to consent—to choose whether one will be monitored, and by whom one will be judged—then the moral benefits of surveillance disappear. So, yes, the world we know includes plenty of people who are willing to be observed and judged by others, for their own moral betterment. But a world of total transparency doesn’t.


The Right Deed for the Wrong Reason



Until a few days ago, I didn't know who Britt McHenry was. Now I do—not through her day job at ESPN, but rather through her surveillance-enabled, Web-driven disgrace. If you don't know her story, you know one very like it: McHenry was brutally rude to a tow-pound employee. A surveillance camera caught her tirade and some footage of it ended up on a website. The Internet pounced. McHenry later tweeted: “in an intense and stressful moment, I allowed my emotions to get the best of me and said some insulting and regrettable things,” which sounds about right to me. Who hasn't done that? However, this is 2015, and McHenry didn't get the time to scrutinize and evaluate herself in private. Instead, she was hoisted up onto the virtual pillory of Internet scorn.

That's life in the 21st century. Surveillance now is not just imposed by the state on the citizenry, à la 1984. It's also a practice citizens impose on one another (and on agents of the state), with cameras and social media. More and more of what we do and say—to say nothing of what we tweet and post—is available for others to see and (more importantly) to judge instantly.

Pondering this obviously huge shift in the way people now live their lives (and, specifically, McHenry's story), Megan Garber made an argument the other day that puzzled me. We behave better when we know we are being watched, she wrote, therefore being watched is not all bad. Woe unto the two-faced and the slackers and sliders, because technology makes it, as Garber wrote, “harder to differentiate between the people we perform and the people we are.” Wealthy celebrities will have to think twice about insulting lowly service workers. More importantly, cops will, we hope, hesitate to abuse prisoners when body cams are recording their every move. Who would say that's not good?

Twenty or thirty years ago, a lot of people would have. The assumption that underlies Garber's claim would have been, at the very least, debatable. But in 2015 it is considered to be obviously true, and she spends no time examining it. Surveillance has been around so long that we accept its premises even when we argue about it.

That assumption is this: All that matters is what people do, not why they do it. That is the justification when we use monitoring to ensure compliance with any rule, be it basic courtesy, professional standards, adherence to the law or obedience to a moral code. If a viral video of my bad behavior subjects me to global contempt, you can be fairly sure that I won't make that mistake again. But you can't be sure that I won't want to. You won't know if I have reflected on my behavior and understood that I “let my emotions get the best of me,” or if I'm just avoiding an unpleasant ordeal. I myself may not understand why it is so important that I comply. All I need to know is that nonconformity will be revealed and punished.

That is what works, without the murky, unmeasurable complications that would ensue if you had to get me to reflect and decide for myself. And what works is what is being deployed all around us. At the office there are keystroke monitors to make sure employees stay on task. Online there is insta-shaming to make sure you don't use any word or phrase that your tweeps consider un-PC. Even in the privacy of your own lived life, there are thousands of apps you can use to monitor and shame yourself into eating less, exercising more, saving money, or spending less time on Facebook.

These technologies are oriented toward measurable results: hours saved, pounds lost, cigarettes unsmoked, clients contacted and so on. In that, they express the ideology of our time, which can also be seen driving the turn in government away from explicit appeals to reason in favor of “nudges,” and a similar turn in business toward marketing via big-data prediction, social media or other avenues that bypass conscious reflection. It doesn't matter what you think or feel, it only matters what you do.

Now, this assumption can be justified in a variety of ways. One is that in some circumstances, where life and limb are at hazard, it is entirely appropriate not to care what people are thinking. It is so important that police not violate civil liberties, for example, that we can reasonably say we don't care if they're cool with the concept. Don't Get Caught (And You Will Be) is a crude but effective way of ensuring as little death and damage as possible. But this claim doesn't justify the Internet shaming of celebrities or the use of software to make sure employees don't bounce over to eBay in the office. The cost of a violation there is too low.

For the vast majority of other situations in which we accept monitoring tech to guarantee courtesy or conscientiousness, the justification is the same as you hear for most tech: It just makes life easier, you know? Why struggle with yourself about going the extra mile at work, when a social app that reveals your performance to colleagues is sure to motivate you? For that matter, why agonize about eating too much when you can use a special fork-gadget to let you know you are eating too fast? As Evan Selinger has put it, letting the monitors decide is a form of outsourcing. And outsourcing is about making life “seamless” and “frictionless,” to use the developer buzzwords.

The problem with this justification, of course, is that when we remove work and friction from life, we lose as well as gain. Selinger has criticized apps that “outsource intimacy” on this basis. When you set up an app to text your significant other, you save time and effort that you actually needed to spend to be engaged with that person. You shouldn't avoid the work because the work is the point. In these cases, it most certainly does matter what people think and feel as they perform an act. These are the times when doing the “right thing” without insight or self-awareness is a moral catastrophe, as T.S. Eliot famously put it:

The last temptation is the greatest treason:

To do the right deed for the wrong reason.

I think our fast-evolving methods of surveillance and shaming have the same flaw as the apps that outsource intimacy. When we monitor others to make sure they behave—as when we monitor ourselves to make sure we behave—we are outsourcing the work of self-government.

Instead of asking people to decide for themselves, imperfectly as ever, what they should and should not do in carrying out their jobs, we trust the cameras. Instead of affording McHenry her chance to examine her own behavior and come to terms with her conscience, we shame her into an apology. Did she mean it? Does she even know? Her chance to figure that out was taken from her. I can't speak for her, but if that had happened to me, I know I would be the poorer for it. My sense that I am different from the person people can see—that I have in me mysteries, hope and surprise—would be diminished. That is what it means to no longer “differentiate between the person I perform and the person I am.” And it is a terrible thing.

Guess who knew that? Back in the bad old days, when only governments had the power to engage in mass surveillance, the spymasters of oppressive states understood it very well.

When there was still a Czechoslovakia and it was run by Communists, the security forces there tapped phones and bugged apartments of dissidents. One day, to torment the writer Jan Prochazka, they took recordings of his chats with friends and family and broadcast them on the radio. Prochazka was devastated. After all, as Milan Kundera wrote of the incident, “that we act different in private than in public is everyone's most conspicuous experience, it is the very ground of the life of the individual.” Is that worth giving up, to be sure semi-celebrities behave themselves?


“Other Knows Best” — What This Blog Is About


This blog is about how people make fewer and fewer decisions by and for themselves, and how that fact will change what it means to be human.

In a few years, here’s what middle-class life will look like: Your car will drive itself; your refrigerator will decide on its own when to order more milk; City Hall will imperceptibly nudge you to save money and avoid elevators; Amazon will tell you what you want to buy before you know you need it. At work you’ll be monitored and measured (with, for example, wearable cameras and keystroke counters) to prevent deviation from company norms, even as more of your moment-to-moment decisions will be “assisted” by algorithms. Meantime, your exercise, sleep, eating and other intimate details will be turned into data (thanks to gadgets you eagerly bought), to help you manage yourself in the same way that others are managing you. Moreover, you will face consequences for having “bad numbers” (no exercise? higher insurance rates for you!). And even intimate chores by which you express yourself—texting a friend, choosing which photo to swipe right on in Tinder—will not be left to you, as apps and gadgets take up the work.

None of these changes is far off or theoretical. The policies and devices that will create them already exist.

Technological and economic change are alienating millions of people from a collection of assumptions that they once took for granted: that we can know ourselves, and act rationally with that knowledge, and that because of these facts we are entitled to autonomy, privacy and equal treatment from institutions and businesses. Autonomy is often described as personal self-government, or “the condition of being self-directed,” as the philosopher Marina Oshana has put it. But as these technologies and policies come online, it is reasonable to ask, what will be left to direct?

Many people will welcome at least some of these changes, for good reason. Who doesn’t want safer cars and fewer road deaths? Who is opposed to helping people stay healthy for more years? After a terrible crime, who doesn’t feel relief to know that a suspect was captured on a surveillance camera? I, for one, have come to think that I am not and cannot be the best judge of whether I am biased, or affected by racist or sexist ideology. I do not think police officers are always the best judges of the actions of other police officers either.

In sum, I, like (I think) most onlookers, tend to support monitoring, surreptitious control, predictive technology and reduced decision-making power when I think I might benefit, and when it limits the judgments of people whom I do not know, either personally or as a group. On the other hand, like most onlookers, I want to defend autonomy when I can imagine my own decision-making limited, denied or disrespected by others. After all, I don’t want to be spied on, or treated like a collection of data points.

This is why the changes I want to track and reflect on here are, I think, inevitable. They don’t appeal to all of the people all of the time, but each manifestation of the “other knows best” mentality appeals to enough of the people, enough of the time, to advance.

And that’s what this new blog is about: Self-directedness, self-awareness and self-control in the era of surveillance, personal-data crunching, predictive technology and the new tools they make available to governments, businesses and individuals. How do the moments of celebration (“surveillance cams and Twitter caught the bad guy!”) relate to moments of alarm (“I don’t want them to be able to spy on me!”)? We are moving from the 20th-century model (self-aware decision-makers responding to explicit attempts at persuasion) to a 21st century model (people whose choices are outside their awareness, coping with invisible attempts to influence them). How will we, should we, manage the transition?
