An excerpt from Touch: The Science of Hand, Heart and Mind, by David J. Linden via SciAm
The main active ingredient in mint is menthol, while its equivalent in chili peppers is a chemical called capsaicin. Less potent chili peppers, like the Anaheim, have a low concentration of capsaicin, while very strong ones, like the Bhut Jolokia pepper, can produce about one‑thousand‑fold more. So why are we biologically predisposed to perceive menthol as cool and capsaicin as hot? One possibility is that there’s a class of nerve ending in the skin that can sense cooling and a different class that can respond to menthol. The signals conveyed by these distinct fibers could then ultimately converge in the brain: Mint and cooling might feel the same because they activate the same brain region dedicated to the sensation of cooling. In an analogous fashion, separate heat‑sensing and capsaicin‑sensing nerve fibers could ultimately send their impulses to a heat‑sensitive brain region.
This hypothesis, therefore, rests on signal convergence in the somatosensory cortex, and while it’s reasonable and appealing, it’s actually dead wrong. How do we know that? First, we can record electrical signals from single sensory nerve fibers in the arm that respond to both heat and capsaicin, and other single nerve fibers that respond to both menthol and cooling. These recordings show that temperature and chemical sensitivities are combined in single neurons that innervate the skin, long before any signals reach the brain. We also have some molecular evidence. There are free nerve endings in the epidermal layer of the skin that contain a sensor on their outer membrane called TRPV1. This single protein molecule can respond to both heat and capsaicin by opening an ion channel, a pore that lets positive ions flow inside, thereby causing the sensory neuron to fire electrical spikes. Similarly, there are free nerve endings that contain a different sensor, called TRPM8, that can respond to both menthol and cooling. The answer to our puzzle is that the metaphor is not in the culture, or even in the brain region. The metaphor is encoded within the sensor molecules in the nerve endings of the skin.
When you’re a first-time parent, something perverse happens that makes you seem like a visitor to your own culture. In the first year of my son’s life, I found myself pondering things like baby rattles. Where do they come from? Why do we give rattles to babies? Are there cultures where babies don’t get rattles? (Indeed, there are.)
At precisely the moment that I was worrying about my cultural performance of parenthood, I stumbled across a mention of “The Anthropology of Childhood” on a blog and got a copy. I was immediately taken. Unlike other parenting books we know, it does not render judgments. “My goal is to offer a correction to the ethnocentric lens that sees children only as precious, innocent and preternaturally cute cherubs,” Professor Lancy writes. “I hope to uncover something close to the norm for children’s lives and those of their caretakers.”
That norm is that children are expected to earn their keep, starting at a very early age (or they are tolerated as semi-supernatural forces, the “changelings” of the book’s title). Worldwide, there is little formal schooling; most knowledge is learned through play and imitation. Kids may spend more time overseen by older siblings than adults. Fathers have very little to do with their children. And adults in most cultures rarely, if ever, play with their children as extensively as we do with ours.
Admission into the “Big Three” was fairly easy if the applicant possessed a “manly, Christian character.” He had to pass subject-based entrance exams devised by the colleges, but the tests weren’t particularly hard, and he could retake them as many times as he needed to pass. Even if a student didn’t pass the required exams, he could be admitted with “conditions.” Once enrolled at Harvard, Yale, or Princeton, he would focus primarily on his social life: clubs, sports, social organizations, and campus activities, often ignoring his academic work.
Admissions began to change, however, when Charles William Eliot became president of Harvard in 1869. Annoyed with “the stupid sons of the rich,” Eliot sought to draw into the university’s fold capable students from all segments of society. To ensure that smart students could attend Harvard regardless of their means, Eliot abolished, in 1898, the archaic Greek admission exams that had been standard until then. He also replaced Harvard’s own admissions exams with exams created by the College Entrance Examination Board, a move that tripled the number of locations where applicants could be tested. The result of Eliot’s changes was the admission of more public school students, including Catholics and Jews.
The most fundamental lessons we can draw from Sandy revolve around predictions: how we make predictions of the atmosphere’s behavior, and how we respond to them once they are made. Weather prediction is a unique enterprise. People make predictions of many kinds: about the outcomes of elections or baseball games, or the fluctuations of the stock market or of the broader economy. Some of those forecasts are based on mathematical models. Most of those mathematical models are statistical, meaning they use empirical rules based on what has happened in the past. The models used for weather prediction (and its close relative, climate prediction), in contrast, are dynamical. They use the laws of physics to predict how the weather will change from one moment to the next. The underlying laws governing elections or the stock market—the rules of mass human behavior that determine the outcomes—are not known well, if they exist at all, so those models must be built on the assumption that past experience is a guide to future performance. If weather prediction were still done that way, it would have been impossible to predict, days ahead of time, that Hurricane Sandy would turn left and strike the coast while moving westward. No forecaster had ever seen something like that occur, because no storm had ever done it. For the same reason, no statistical model trained on past behavior would have produced it as a likely outcome.
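The difference between the two kinds of forecast can be sketched with a toy system. Everything here is an illustrative assumption, not real meteorology: a single “temperature” that relaxes toward an equilibrium according to a known law, a statistical forecast that simply extrapolates the recent trend, and a dynamical forecast that steps the governing law forward in time.

```python
# Toy contrast between statistical and dynamical forecasting.
# Hypothetical system: a temperature relaxing toward equilibrium,
# dT/dt = -K * (T - T_EQ). All constants are made up for illustration.

K, T_EQ = 0.5, 10.0   # assumed relaxation rate and equilibrium temperature
DT = 0.1              # time step for the integration

def dynamical_forecast(t0, steps):
    """Step the governing physical law forward in time (Euler integration)."""
    t = t0
    for _ in range(steps):
        t += DT * (-K * (t - T_EQ))
    return t

def statistical_forecast(history, steps):
    """Extrapolate the most recent trend linearly, ignoring the physics."""
    trend = history[-1] - history[-2]      # change per step, learned from the past
    return history[-1] + trend * steps

# Build a short "observed" history by running the true dynamics from T = 30.
history = [30.0]
for _ in range(3):
    history.append(dynamical_forecast(history[-1], 1))

# Forecast 50 steps ahead with each approach.
dyn = dynamical_forecast(history[-1], 50)
stat = statistical_forecast(history, 50)

# The dynamical forecast settles near the equilibrium (about 10);
# the linear extrapolation keeps falling and plunges below zero,
# a behavior the past trend alone could never anticipate.
print(dyn, stat)
```

The statistical model fails here for exactly the reason the passage gives: the leveling-off near equilibrium never appears in its short training history, so it cannot predict it, while the dynamical model recovers it directly from the governing law.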
In Sandy’s case, forecasters not only saw this outcome as a possibility more than a week ahead of time but were quite confident of it four or five days before the storm hit. Forecasts such as the ones we had as Sandy formed and moved up the coast don’t come from the heavens. They’re the result of a century of remarkable scientific achievement, beginning in Norway in the early 1900s. The intellectual foundation of the whole enterprise of weather prediction was the idea that the laws of physics could be used to understand the weather, a radical notion in the early twentieth century. Carrying this out required multiple conceptual advances, over decades, and improvements in technology (especially digital computers).
[T]he most serious problems highlighted by Sandy were not in the preparations right before the disaster or in the response right after. They were in the construction of our coastlines over the span of many decades. Over that long term, too, there had been good forecasts of what could happen to our built environment along the water in the New York City area. These were not forecasts of a specific storm at a specific date and time, but rather scientific assessments of the risk of a storm as bad as Sandy, or worse. It had been known for decades at least that New York City was vulnerable to flooding by a hurricane-induced storm surge. The consequences that would follow were also clear, in broad outline. The flooding of the subways, for example, had been envisioned since the 1990s.
Sandy didn’t need climate change in order to happen, and the story of the disaster doesn’t need climate change to make it important. The main subject of this book is Sandy, and you can read large fractions of the book without seeing climate change mentioned at all. But climate change looms large when we try to think about what Sandy means for the future.