Extensive social science research documents that white people are afraid of black people, and particularly black men. One study found that simply seeing black men made white research subjects uncomfortable, and that overall black men elicited the most negative reactions from white people. More tellingly, in a 2009 survey examining white people’s attitudes toward black people and violence, more than 30% of respondents said that black people were more violent than white people. Moreover, more than 40% of respondents said that “many” or “almost all” black men were violent, while fewer than 20% said the same of black women and white men, and fewer than 10% said the same of white women. And remember, these are just the respondents who were willing to voice their thoughts out loud.
These attitudes toward black men explain the disproportionate media emphasis on stories involving violence by black men. They also explain the all-too-common phenomenon of white people blaming crime — particularly violent crime — on invented black men. In a culture that too often paints black men as violent criminals, the lies make a perverse kind of sense. If people are more likely to view black men as violent criminals, they’re more likely to believe a made-up story about a black man doing something violent and criminal. I suspect this believability is why white people choose to blame crimes on imaginary black men as frequently as they do.
A commentary on Darren Wilson’s grand jury testimony follows.
But it is not enough for me to stand before you tonight and condemn riots. It would be morally irresponsible for me to do that without, at the same time, condemning the contingent, intolerable conditions that exist in our society. These conditions are the things that cause individuals to feel that they have no other alternative than to engage in violent rebellions to get attention. And I must say tonight that a riot is the language of the unheard. And what is it America has failed to hear? It has failed to hear that the plight of the negro poor has worsened over the last twelve or fifteen years. It has failed to hear that the promises of freedom and justice have not been met. And it has failed to hear that large segments of white society are more concerned about tranquility and the status quo than about justice and humanity.
Perhaps, like our poolside guru, Christensen believes he’s always right…but, on rare occasions, he’s simply wrong on the timing.
Apple will, of course, eventually meet its maker, whether through some far-off, prolonged mediocrity or by a swift, regrettable decision. But such predictions are useless; they’re storytelling – and a bad, facile kind at that. What would be really interesting and courageous would be a detailed scenario of Apple’s failure, complete with a calendar of the main steps towards the preordained ending. No more “Wrong on the Timing” excuses.
A more interesting turn for a man of Christensen’s intellect and reach inside academia would be to become his own Devil’s Advocate. Good lawyers pride themselves on researching their cases so well that they could plead either side. Perhaps Clayton Christensen could explain, with his usual authority, how the iPhone defines a new theory of innovation. Or why the Macintosh has prospered and ended up disrupting the PC business by sucking up half of the segment’s profits. He could then draw comparisons to other premium goods that are happily chosen by consumers, from cars to clothes and…watches.
The legal standard authorizing deadly force is something called “objective reasonableness.”
This standard originates in the 1985 case of Tennessee v. Garner, which appeared at first to tighten restrictions on the police use of deadly force. The case involved a Memphis cop, Elton Hymon, who shot dead one Edward Garner: 15 years old, black and unarmed. Garner had just burgled a house, grabbing a ring and ten bucks. The US Supreme Court ruled that a police officer, henceforth, could use deadly force only if he “has probable cause to believe that the suspect poses a significant threat of death or serious physical injury to the officer or others.” The ruling required that the use of force be “objectively reasonable.” How this reasonableness should be determined was established in a 1989 case, Graham v. Connor: the severity of the crime, whether the suspect is resisting or trying to escape, and, above all, whether the suspect poses an immediate threat to the safety of officers or others. All this appeared to restrict police violence—even if, in the end, Officer Hymon was never criminally charged for fatally shooting Edward Garner.
“Objectively reasonable”—what could be wrong with that? But in actual courtroom practice, “objective reasonableness” has become nearly impossible to tell apart from the subjective snap judgments of panic-fueled police officers. American courts universally defer to the law enforcement officer’s own personal assessment of the threat at the time.
The Graham analysis essentially prohibits any second-guessing of the officer’s decision to use deadly force: no hindsight is permitted, and wide latitude is granted to the officer’s account of the situation, even if scientific evidence proves it to be mistaken. Such was the case of Berkeley, Missouri, police officers Robert Piekutowski and Keith Kierzkowski, who in 2000 fatally shot Earl Murray and Ronald Beasley out of fear that the victims’ car was lurching towards them. Forensic investigations established that the car had not in fact lurched towards the officers at the time of the shooting—but this was still not enough for the St. Louis County grand jury to indict the two cops on any charge.
Not surprisingly, then, “there is built-in leeway for police, and the very breadth of this leeway is why criminal charges against police are so rare,” says Walter Katz, a police oversight lawyer who served on the Los Angeles County Office of Independent Review until it disbanded in July of this year. According to Erwin Chemerinsky, dean of the UC Irvine School of Law, recent Supreme Court decisions are not a path towards justice but rather a series of obstacles to holding police accountable for civil rights violations.
As the delegate spoke, Pinkney had to make sense of a message composed in one language while simultaneously constructing and articulating the same message in another tongue. The process required an extraordinary blend of sensory, motor and cognitive skills, all of which had to operate in unison. She did so continuously and in real time, without asking the speaker to slow down or clarify anything. She didn’t stammer or pause. Nothing in our evolutionary history can have programmed Pinkney’s brain for a task so peculiar and demanding. Executing it required versatility and nuance beyond the reach of the most powerful computers. It is a wonder that her brain, indeed any human brain, can do it at all.
Neuroscientists have explored language for decades and produced scores of studies on multilingual speakers. Yet understanding this process – simultaneous interpretation – is a much bigger scientific challenge. So much goes on in an interpreter’s brain that it’s hard even to know where to start. Recently, however, a handful of enthusiasts have taken up the challenge, and one region of the brain – the caudate nucleus – has already caught their attention.
The caudate isn’t a specialist language area; neuroscientists know it for its role in processes like decision-making and trust. It’s like an orchestral conductor, coordinating activity across many brain regions to produce stunningly complex behaviours. That means the results of the interpretation studies appear to tie into one of the biggest ideas to emerge from neuroscience over the past decade or two. It’s now clear that many of our most sophisticated abilities are made possible not by specialist brain areas dedicated to specific tasks, but by lightning-fast coordination between areas that control more general tasks, such as movement and hearing. Simultaneous interpretation, it seems, is yet another feat made possible by our networked brains.