For Your Consideration: Words, Libraries, Reading, Writing, and Futures for Kids

1. Neil Gaiman: Why our future depends on libraries, reading and daydreaming​

I’m here giving this talk tonight, under the auspices of the Reading Agency: a charity whose mission is to give everyone an equal chance in life by helping people become confident and enthusiastic readers. Which supports literacy programs, and libraries and individuals and nakedly and wantonly encourages the act of reading. Because, they tell us, everything changes when we read.

And it’s that change, and that act of reading that I’m here to talk about tonight. I want to talk about what reading does. What it’s good for.

I was once in New York, and I listened to a talk about the building of private prisons – a huge growth industry in America. The prison industry needs to plan its future growth – how many cells are they going to need? How many prisoners are there going to be, 15 years from now? And they found they could predict it very easily, using a pretty simple algorithm, based on asking what percentage of 10 and 11-year-olds couldn’t read. And certainly couldn’t read for pleasure.

It’s not one to one: you can’t say that a literate society has no criminality. But there are very real correlations.

And I think some of those correlations, the simplest, come from something very simple. Literate people read fiction.

Fiction has two uses. Firstly, it’s a gateway drug to reading. The drive to know what happens next, to want to turn the page, the need to keep going, even if it’s hard, because someone’s in trouble and you have to know how it’s all going to end … that’s a very real drive. And it forces you to learn new words, to think new thoughts, to keep going. To discover that reading per se is pleasurable. Once you learn that, you’re on the road to reading everything. And reading is key. There were noises made briefly, a few years ago, about the idea that we were living in a post-literate world, in which the ability to make sense out of written words was somehow redundant, but those days are gone: words are more important than they ever were: we navigate the world with words, and as the world slips onto the web, we need to follow, to communicate and to comprehend what we are reading. People who cannot understand each other cannot exchange ideas, cannot communicate, and translation programs only go so far.

The simplest way to make sure that we raise literate children is to teach them to read, and to show them that reading is a pleasurable activity. And that means, at its simplest, finding books that they enjoy, giving them access to those books, and letting them read them.

2. What is writing? Why Telepathy, of Course

In his book On Writing, Stephen King explains what writing is in three words: “Telepathy, of course.”

Then again, this isn’t an entirely new concept. When we were kids, writing was explained as the act of transmitting ideas from our brains onto a sheet of paper. But I don’t know, telepathy just sounds better. For one thing, it means transmitting real objects from one space and time to another. I like this. It means writing doesn’t end or even begin with the writer’s internal struggle, but with the notion that the writer has something to show and can do so by making his mind connect with that of the reader.

And while this may sound ridiculous, even “cute,” as King says other people might call it, we’ve experienced what he’s talking about.

To make the point, King writes a description of a bunny, munching a carrot in a cage with the number 8 written on his back in blue ink and says this afterward, “The most interesting thing here isn’t even the carrot-munching rabbit in the cage, but the number on its back… This is what we’re looking at, and we all see it. I didn’t tell you. You didn’t ask me… We’re having a meeting of the minds.” And he was right. We are looking at the number on the bunny’s back. We feel connected to him as if we were present with him examining the number.

The question I ask now is, how did he do that? How did he know the number was the subject of our focus? It was his, but how did he know he had successfully pulled our gaze from the cage, the bunny, and the carrot onto the blue number?

Of course, you could say, “it’s obviously the most interesting thing in the piece” or “Well, he is the writer, after all. He knew we would want to look at something as out of place as the blue number on the bunny.” And yes, the example is a very easy one to see. But his fame as a writer suggests this is not something he does by accident; he knows exactly what we are seeing. And my question is: how did he learn this?

3. Philip K Dick on Disneyland, reality and science fiction (1978)

It was always my hope, in writing novels and stories which asked the question “What is reality?”, to someday get an answer. This was the hope of most of my readers, too. Years passed. I wrote over thirty novels and over a hundred stories, and still I could not figure out what was real. One day a girl college student in Canada asked me to define reality for her, for a paper she was writing for her philosophy class. She wanted a one-sentence answer. I thought about it and finally said, “Reality is that which, when you stop believing in it, doesn’t go away.” That’s all I could come up with. That was back in 1972. Since then I haven’t been able to define reality any more lucidly.

But the problem is a real one, not a mere intellectual game. Because today we live in a society in which spurious realities are manufactured by the media, by governments, by big corporations, by religious groups, political groups—and the electronic hardware exists by which to deliver these pseudo-worlds right into the heads of the reader, the viewer, the listener. Sometimes when I watch my eleven-year-old daughter watch TV, I wonder what she is being taught. The problem of miscuing; consider that. A TV program produced for adults is viewed by a small child. Half of what is said and done in the TV drama is probably misunderstood by the child. Maybe it’s all misunderstood. And the thing is, Just how authentic is the information anyhow, even if the child correctly understood it? What is the relationship between the average TV situation comedy to reality? What about the cop shows? Cars are continually swerving out of control, crashing, and catching fire. The police are always good and they always win. Do not ignore that point: The police always win. What a lesson that is. You should not fight authority, and even if you do, you will lose. The message here is, Be passive. And—cooperate. If Officer Baretta asks you for information, give it to him, because Officer Baretta is a good man and to be trusted. He loves you, and you should love him.

4. Education Needs a Digital-Age Upgrade​

If you have a child entering grade school this fall, file away just one number with all those back-to-school forms: 65 percent.

Chances are just that good that, in spite of anything you do, little Oliver or Abigail won’t end up a doctor or lawyer — or, indeed, anything else you’ve ever heard of. According to Cathy N. Davidson, co-director of the annual MacArthur Foundation Digital Media and Learning Competitions, fully 65 percent of today’s grade-school kids may end up doing work that hasn’t been invented yet.

So Abigail won’t be doing genetic counseling. Oliver won’t be developing Android apps for currency traders or co-chairing Google’s philanthropic division. Even those digital-age careers will be old hat. Maybe the grown-up Oliver and Abigail will program Web-enabled barrettes or quilt with scraps of Berber tents. Or maybe they’ll be plying a trade none of us old-timers will even recognize as work.

For those two-thirds of grade-school kids, if for no one else, it’s high time we redesigned American education.

As Ms. Davidson puts it: “Pundits may be asking if the Internet is bad for our children’s mental development, but the better question is whether the form of learning and knowledge-making we are instilling in our children is useful to their future.”

In her galvanic new book, “Now You See It,” Ms. Davidson asks, and ingeniously answers, that question. One of the nation’s great digital minds, she has written an immensely enjoyable omni-manifesto that’s officially about the brain science of attention. But the book also challenges nearly every assumption about American education.

…Simply put, we can’t keep preparing students for a world that doesn’t exist. We can’t keep ignoring the formidable cognitive skills they’re developing on their own. And above all, we must stop disparaging digital prowess just because some of us over 40 don’t happen to possess it. An institutional grudge match with the young can sabotage an entire culture.

When we criticize students for making digital videos instead of reading “Gravity’s Rainbow,” or squabbling on Politico.com instead of watching “The Candidate,” we are blinding ourselves to the world as it is. And then we’re punishing students for our blindness. Those hallowed artifacts — the Thomas Pynchon novel and the Michael Ritchie film — had a place in earlier social environments. While they may one day resurface as relevant, they are now chiefly of interest to cultural historians. But digital video and Web politics are intellectually robust and stimulating, profitable and even pleasurable.

 

Albert Einstein was asked once how we could make our children intelligent. His reply was both simple and wise. “If you want your children to be intelligent,” he said, “read them fairy tales. If you want them to be more intelligent, read them more fairy tales.” 

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


For Your Consideration: Hijacking Minds, Chinese Robot Army, Tomorrow’s Internet, and Multiple Realities

Two pieces by Kevin Kelly this week, both drawn from his new book, The Inevitable (required reading if you plan to be around in the future).

1. How Technology Hijacks People’s Minds – from a Magician and Google’s Design Ethicist

I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.

When using technology, we often focus optimistically on all the things it does for us. But I want to show you where it might do the opposite.

Where does technology exploit our minds’ weaknesses?

I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.

And this is exactly what product designers do to your mind. They play your psychological vulnerabilities (consciously and unconsciously) against you in the race to grab your attention.

I want to show you how they do it.

#1 [of 10] If You Control the Menu, You Control the Choices

Western Culture is built around ideals of individual choice and freedom. Millions of us fiercely defend our right to make “free” choices, while we ignore how those choices are manipulated upstream by menus we didn’t choose in the first place.

This is exactly what magicians do. They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. I can’t emphasize enough how deep this insight is.

When people are given a menu of choices, they rarely ask:

  • “what’s not on the menu?”
  • “why am I being given these options and not others?”
  • “do I know the menu provider’s goals?”
  • “is this menu empowering for my original need, or are the choices actually a distraction?” (e.g. an overwhelming array of toothpastes)

2. China Is Building a Robot Army of Model Workers

“The system is down,” explains Nie Juan, a woman in her early 20s who is responsible for quality control. Her team has been testing the robot for the past week. The machine is meant to place stickers on the boxes containing new routers, and it seemed to have mastered the task quite nicely. But then it suddenly stopped working. “The robot does save labor,” Nie tells me, her brow furrowed, “but it is difficult to maintain.”

The hitch reflects a much bigger technological challenge facing China’s manufacturers today. Wages in Shanghai have more than doubled in the past seven years, and the company that owns the factory, Cambridge Industries Group, faces fierce competition from increasingly high-tech operations in Germany, Japan, and the United States. To address both of these problems, CIG wants to replace two-thirds of its 3,000 workers with machines this year. Within a few more years, it wants the operation to be almost entirely automated, creating a so-called “dark factory.” The idea is that with so few people around, you could switch the lights off and leave the place to the machines.

But as the idle robot arm on CIG’s packaging line suggests, replacing humans with machines is not an easy task. Most industrial robots have to be extensively programmed, and they will perform a job properly only if everything is positioned just so. Much of the production work done in Chinese factories requires dexterity, flexibility, and common sense. If a box comes down the line at an odd angle, for instance, a worker has to adjust his or her hand before affixing the label. A few hours later, the same worker might be tasked with affixing a new label to a different kind of box. And the following day he or she might be moved to another part of the line entirely.

Despite the huge challenges, countless manufacturers in China are planning to transform their production processes using robotics and automation at an unprecedented scale. In some ways, they don’t really have a choice. Human labor in China is no longer as cheap as it once was, especially compared with labor in rival manufacturing hubs growing quickly in Asia. In Vietnam, Thailand, and Indonesia, factory wages can be less than a third of what they are in the urban centers of China. One solution, many manufacturers—and government officials—believe, is to replace human workers with machines.

3. You are not late

But, but…here is the thing. In terms of the internet, nothing has happened yet. The internet is still at the beginning of its beginning. If we could climb into a time machine and journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2044 were not invented until after 2014. People in the future will look at their holodecks, and wearable virtual reality contact lenses, and downloadable avatars, and AI interfaces, and say, oh, you didn’t really have the internet (or whatever they’ll call it) back then.

And they’d be right. Because from our perspective now, the greatest online things of the first half of this century are all before us. All these miraculous inventions are waiting for that crazy, no-one-told-me-it-was-impossible visionary to start grabbing the low-hanging fruit — the equivalent of the dot com names of 1984.

Because here is the other thing the greybeards in 2044 will tell you: Can you imagine how awesome it would have been to be an entrepreneur in 2014? It was a wide-open frontier! You could pick almost any category X and add some AI to it, put it on the cloud. Few devices had more than one or two sensors in them, unlike the hundreds now. Expectations and barriers were low. It was easy to be the first. And then they would sigh, “Oh, if only we realized how possible everything was back then!”

So, the truth: Right now, today, in [2016] is the best time to start something on the internet.

4. Hyper Vision – A survey of VR/AR/MR

One of the first things I learned from my recent tour of the synthetic-reality waterfront is that virtual reality is creating the next evolution of the Internet. Today the Internet is a network of information. It contains 60 trillion web pages, remembers 4 zettabytes of data, transmits millions of emails per second, all interconnected by sextillions of transistors. Our lives and work run on this internet of information. But what we are building with artificial reality is an internet of experiences. What you share in VR or MR gear is an experience. What you encounter when you open a magic window in your living room is an experience. What you join in a mixed-reality teleconference is an experience. To a remarkable degree, all these technologically enabled experiences will rapidly intersect and inform one another.

The recurring discovery I made in each virtual world I entered was that although every one of these environments was fake, the experiences I had in them were genuine. VR does two important things: One, it generates an intense and convincing sense of what is generally called presence. Virtual landscapes, virtual objects, and virtual characters seem to be there—a perception that is not so much a visual illusion as a gut feeling. That’s magical. But the second thing it does is more important. The technology forces you to be present—in a way flatscreens do not—so that you gain authentic experiences, as authentic as in real life. People remember VR experiences not as a memory of something they saw but as something that happened to them. 

…Not immediately, but within 15 years, the bulk of our work and play time will touch the virtual to some degree. Systems for delivering these shared virtual experiences will become the largest enterprises we have ever made. Fully immersive VR worlds already generate and consume gigabytes of data per experience. In the next 10 years the scale will increase from gigabytes per minute to terabytes per minute. The global technology industry—chip designers, consumer device makers, communication conglomerates, component manufacturers, content studios, software creators—will all struggle to handle the demands of this vast system as it blossoms. And only a few companies will dominate the VR networks because, as is so common in networks, success is self-reinforcing. The bigger the virtual society becomes, the more attractive it is. And the more attractive, the bigger yet it becomes. These artificial-reality winners will become the largest companies in history, dwarfing the largest companies today by any measure.

“My interest is in the future because I am going to spend the rest of my life there.” – Charles Kettering

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


For Your Consideration: Epigenetics and Identity, Vulgar Vocabulary, Learning to Learn, and the Sublimity of Mike Rowe

1. The Science of Identity and Difference

Why are identical twins alike? In the late nineteen-seventies, a team of scientists in Minnesota set out to determine how much these similarities arose from genes, rather than environments—from “nature,” rather than “nurture.” Scouring thousands of adoption records and news clips, the researchers gleaned a rare cohort of fifty-six identical twins who had been separated at birth. Reared in different families and different cities, often in vastly dissimilar circumstances, these twins shared only their genomes. Yet on tests designed to measure personality, attitudes, temperaments, and anxieties, they converged astonishingly. Social and political attitudes were powerfully correlated: liberals clustered with liberals, and orthodoxy was twinned with orthodoxy. The same went for religiosity (or its absence), even for the ability to be transported by an aesthetic experience. Two brothers, separated by geographic and economic continents, might be brought to tears by the same Chopin nocturne, as if responding to some subtle, common chord struck by their genomes.

One pair of twins both suffered crippling migraines, owned dogs that they had named Toy, married women named Linda, and had sons named James Allan (although one spelled the middle name with a single “l”). Another pair—one brought up Jewish, in Trinidad, and the other Catholic, in Nazi Germany, where he joined the Hitler Youth—wore blue shirts with epaulets and four pockets, and shared peculiar obsessive behaviors, such as flushing the toilet before using it. Both had invented fake sneezes to defuse tense moments. Two sisters—separated long before the development of language—had invented the same word to describe the way they scrunched up their noses: “squidging.” Another pair confessed that they had been haunted by nightmares of being suffocated by various metallic objects—doorknobs, fishhooks, and the like.

The Minnesota twin study raised questions about the depth and pervasiveness of qualities specified by genes: Where in the genome, exactly, might one find the locus of recurrent nightmares or of fake sneezes? Yet it provoked an equally puzzling converse question: Why are identical twins different? Because, you might answer, fate impinges differently on their bodies. One twin falls down the crumbling stairs of her Calcutta house and breaks her ankle; the other scalds her thigh on a tipped cup of coffee in a European station. Each acquires the wounds, calluses, and memories of chance and fate. But how are these changes recorded, so that they persist over the years? We know that the genome can manufacture identity; the trickier question is how it gives rise to difference…

2. Is Swearing a Sign of a Limited Vocabulary? | Scientific American

When words fail us, we curse. At least this is what the “poverty-of-vocabulary” (POV) hypothesis would have us believe. On this account, swearing is the “sign of a weak vocabulary”, a result of a lack of education, laziness or impulsiveness. In line with this idea, we tend to judge vulgarians quite harshly, rating them as lower on socio-intellectual status, less effective at their jobs and less friendly.

But this view of the crass does not square with recent research in linguistics. For example, the POV hypothesis would predict that when people struggle to come up with the right words, they are more likely to spew swears left and right. But research shows that people tend to fill the awkward gaps in their language with “ers” and “ums” not “sh*ts” and “godd*mnits.” This research has led to a competing explanation for swearing: fluency with taboo words might be a sign of general verbal fluency. Those who are exceptionally vulgar might also be exceptionally eloquent and intelligent.  Indeed, taboo words hold a particular purpose in our lexicon that other words cannot as effectively accomplish: to deliver intense, succinct and directed emotional expression. So, those who swear frequently might just be more sophisticated in the linguistic resources they can draw from in order to make their point.

New research by cognitive scientists at Marist College and the Massachusetts College of Liberal Arts attempts to test this possibility, and further debunk the POV hypothesis, by measuring how taboo word fluency relates to general verbal fluency. The POV hypothesis suggests that there should be a negative correlation: the more you swear, the lower your verbal prowess. But the researchers hypothesized just the opposite: the more you swear, the more comprehensive your vocabulary would be.
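To make the statistical claim concrete, here is a minimal sketch of the comparison being described: compute the correlation between each participant's taboo-word fluency count and their general verbal fluency count, then check its sign. The function and the sample numbers below are illustrative placeholders, not the researchers' materials or data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Illustrative inputs only: words produced in a timed task, per participant.
taboo_fluency = [8, 14, 5, 11, 9]      # taboo words generated
verbal_fluency = [31, 44, 22, 39, 35]  # general verbal fluency (e.g. a letter-fluency task)
r = pearson_r(taboo_fluency, verbal_fluency)
print(round(r, 2))  # POV hypothesis predicts r < 0; the researchers predict r > 0
```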

3. Learning to Learn

“The ability to learn faster than your competitors may be the only sustainable competitive advantage.”

I’m not talking about relaxed armchair or even structured classroom learning. I’m talking about resisting the bias against doing new things, scanning the horizon for growth opportunities, and pushing yourself to acquire radically different capabilities—while still performing your job. That requires a willingness to experiment and become a novice again and again: an extremely discomforting notion for most of us.

Over decades of coaching and consulting to thousands of executives in a variety of industries, however, my colleagues and I have come across people who succeed at this kind of learning. We’ve identified four attributes they have in spades: aspiration, self-awareness, curiosity, and vulnerability. They truly want to understand and master new skills; they see themselves very clearly; they constantly think of and ask good questions; and they tolerate their own mistakes as they move up the learning curve.

Of course, these things come more naturally to some people than to others. But, drawing on research in psychology and management as well as our work with clients, we have identified some fairly simple mental tools anyone can develop to boost all four attributes—even those that are often considered fixed (aspiration, curiosity, and vulnerability).

4. The Importance of Being Dirty: Lessons from Mike Rowe

If you didn’t already adore Mike Rowe, this conversation will make you. Amazingly interesting guy on top of everything you thought you knew. Also, The Tim Ferriss Show is hands down one of my favorite podcasts. Light in tone but deep in intellectual curiosity about an immense variety of topics.
—

“Just because you love something doesn’t mean you can’t suck at it.” – Mike Rowe

Stream Here: http://traffic.libsyn.com/timferriss/Tim_Ferriss_Show_-_Mike_Rowe.mp3

Mike Rowe (@mikeroweworks) is perhaps the best storyteller and pitchman I’ve ever had on the show.

You might know Mike from his eight seasons of Dirty Jobs, but that’s just a tiny piece of the story.

His performing career began in 1984 when he faked his way into the Baltimore Opera to get his union card and meet girls, both of which he accomplished during a performance of Rigoletto. His transition to television occurred in 1990 when — to settle a bet — he auditioned for the QVC Shopping Channel and was promptly hired after talking about a pencil for nearly eight minutes. There, he worked the graveyard shift for three years, until he was ultimately fired for making fun of products and belittling viewers.  Now, he is a massively successful TV host, writer, narrator, producer, actor, and spokesman.

Why listen to this episode? You will learn:

  • Secrets of the perfect pitch
  • How Mike flew around the world for free (until he got caught)
  • Why to pursue opportunity instead of passion
  • How being different can help you win in business and life
  • The business of Mike Rowe
  • Favorite books, voice-over artists, and much, much more…

If you’re in a rush and just want a fantastic 5-minute story about his selling pencils for the QVC audition, click here.

“We are infected by our own misunderstanding of how our own minds work.” – Kevin Kelly

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


For Your Consideration: Hacking Elections, Boosting Conspiracy Theories, Presence vs. Advice, and Earth Imaged Daily

1. How to Hack an Election

Two thousand miles away, in an apartment in Bogotá’s upscale Chicó Navarra neighborhood, Andrés Sepúlveda sat before six computer screens. Sepúlveda is Colombian, bricklike, with a shaved head, goatee, and a tattoo of a QR code containing an encryption key on the back of his head. On his nape are the words “</head>” and “<body>” stacked atop each other, dark riffs on coding. He was watching a live feed of Peña Nieto’s victory party, waiting for an official declaration of the results.

When Peña Nieto won, Sepúlveda began destroying evidence. He drilled holes in flash drives, hard drives, and cell phones, fried their circuits in a microwave, then broke them to shards with a hammer. He shredded documents and flushed them down the toilet and erased servers in Russia and Ukraine rented anonymously with Bitcoins. He was dismantling what he says was a secret history of one of the dirtiest Latin American campaigns in recent memory.

For eight years, Sepúlveda, now 31, says he traveled the continent rigging major political campaigns. With a budget of $600,000, the Peña Nieto job was by far his most complex. He led a team of hackers that stole campaign strategies, manipulated social media to create false waves of enthusiasm and derision, and installed spyware in opposition offices, all to help Peña Nieto, a right-of-center candidate, eke out a victory. On that July night, he cracked bottle after bottle of Colón Negra beer in celebration. As usual on election night, he was alone.

Sepúlveda’s career began in 2005, and his first jobs were small—mostly defacing campaign websites and breaking into opponents’ donor databases. Within a few years he was assembling teams that spied, stole, and smeared on behalf of presidential campaigns across Latin America. He wasn’t cheap, but his services were extensive. For $12,000 a month, a customer hired a crew that could hack smartphones, spoof and clone Web pages, and send mass e-mails and texts. The premium package, at $20,000 a month, also included a full range of digital interception, attack, decryption, and defense. The jobs were carefully laundered through layers of middlemen and consultants. Sepúlveda says many of the candidates he helped might not even have known about his role; he says he met only a few.

2. Social Network Algorithms Are Distorting Reality By Boosting Conspiracy Theories

The filter bubble—the idea that online recommendation engines learn what we like and thus keep us only reading things we agree with—has evolved. Algorithms, network effects, and zero-cost publishing are enabling crackpot theories to go viral, and—unchecked—these ideas are impacting the decisions of policy makers and shaping public opinion, whether they are verified or not.

First, it is important to understand the technology that drives the system. Most algorithms work simply: Web companies try to tailor their content (which includes news and search results) to match the tastes and interests of readers. However, as online organizer and author Eli Pariser says in the TED Talk where the idea of the filter bubble became popularized: “There’s a dangerous unintended consequence. We get trapped in a ‘filter bubble’ and don’t get exposed to information that could challenge or broaden our worldview.”

Facebook’s news feed and personalized search deliver results that are tailored just to us because a social network’s business is to keep us interested and happy. Feeling good drives engagement and more time spent on a site, and that keeps a user targetable with advertisements for longer. Pariser argues that this nearly invisible editing of the Internet limits what we see—and that it will “ultimately prove to be bad for us and bad for democracy.”
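To make the mechanism concrete, here is a minimal, hypothetical sketch of engagement-driven ranking (not Facebook's or any platform's actual algorithm): stories that resemble what a user has already clicked on score higher, so over time the feed drifts toward what the user already agrees with. Every name and weight below is an illustrative assumption.

```python
# Hypothetical sketch of engagement-driven feed ranking; not any real platform's code.
# A user "profile" is built from topics they engaged with; candidate stories are
# scored by similarity to that profile, which is what narrows the filter bubble.
from collections import defaultdict

def rank_feed(stories, engagement_history):
    """stories: list of (story_id, {topic: weight}); engagement_history: list of {topic: weight}."""
    profile = defaultdict(float)
    for clicked in engagement_history:
        for topic, weight in clicked.items():
            profile[topic] += weight / len(engagement_history)

    def score(story_topics):
        # Dot product with the profile: familiar topics score high, unfamiliar ones near zero.
        return sum(profile[topic] * weight for topic, weight in story_topics.items())

    return sorted(stories, key=lambda s: score(s[1]), reverse=True)

history = [{"politics_left": 1.0}, {"politics_left": 0.8, "sports": 0.2}]
candidates = [
    ("agreeable", {"politics_left": 0.9}),
    ("challenging", {"politics_right": 0.9}),
    ("unrelated", {"science": 1.0}),
]
print(rank_feed(candidates, history))  # "agreeable" ranks first; the others score zero
```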

In his 1962 book, The Image: A Guide to Pseudo-Events in America, former Librarian of Congress Daniel J. Boorstin describes a world where our ability to technologically shape reality is so sophisticated, it overcomes reality itself. “We risk being the first people in history,” he writes, “to have been able to make their illusions so vivid, so persuasive, so ‘realistic’ that they can live in them.”

3. The Gift of Presence, The Perils of Advice

Here’s the deal. The human soul doesn’t want to be advised or fixed or saved. It simply wants to be witnessed — to be seen, heard and companioned exactly as it is. When we make that kind of deep bow to the soul of a suffering person, our respect reinforces the soul’s healing resources, the only resources that can help the sufferer make it through.

Aye, there’s the rub. Many of us “helper” types are as much or more concerned with being seen as good helpers as we are with serving the soul-deep needs of the person who needs help. Witnessing and companioning take time and patience, which we often lack — especially when we’re in the presence of suffering so painful we can barely stand to be there, as if we were in danger of catching a contagious disease. We want to apply our “fix,” then cut and run, figuring we’ve done the best we can to “save” the other person.

During my depression, there was one friend who truly helped. With my permission, Bill came to my house every day around 4:00 PM, sat me down in an easy chair, and massaged my feet. He rarely said a word. But somehow he found the one place in my body where I could feel a sense of connection with another person, relieving my awful sense of isolation while bearing silent witness to my condition.

By offering me this quiet companionship for a couple of months, day in and day out, Bill helped save my life. Unafraid to accompany me in my suffering, he made me less afraid of myself. He was present — simply and fully present — in the same way one needs to be at the bedside of a dying person.

4. A New 50-Trillion-Pixel Image of Earth, Every Day

But it’s Planet Labs and Terra Bella who seem to be driving the small-satellite industry. Both are born of and based in Bay Area business culture. (Terra Bella’s founders often speak of the Stanford class where they met.) Both companies are now non-negligible in size: Planet Labs has more than 330 employees, evenly split between space-operations and product engineering; Terra Bella numbers more than 180. For reference, DigitalGlobe employs 1,300 people.

And, despite both manufacturing satellites, Planet Labs and Terra Bella both downplay their importance. Dan Berkenstock, Terra Bella’s CEO, even implied it’s why the company is changing its name: “I think Skybox, in many ways, came to be equated with satellite imaging,” he told me. “And satellite imaging is great—but that’s one piece of the puzzle.” (I also wonder if Skybox sounded too much like the similarly geospatial-minded Mapbox or the recently devalued Dropbox.)

Instead, both companies now talk about how imagery fits into their “Earth information platforms” that bring together lots of different kinds of data about the planet. Both companies offer APIs, aiming to provide something like “cloud” services for Earth information. (As opposed to “cloud” services for the Earth—that would be something else entirely.) Both companies are also cagey about what kind of non-imagery data could get included in these platforms, but meteorological and climate data would make sense.

“The product is information processing—real-time, fact-based data,” says Robbie Schingler, co-founder of Planet Labs.

For Terra Bella, the uses of its eventual platform revolve around “economic transparency.” Their satellites have sufficiently high resolution to see vehicles, and they record high-definition video, not still frames. They mention executives working on logistics problems, or people checking on a construction project far away, when discussing their project. Their satellites’ resolution also puts a solution to “the Walmart parking-lot problem” in reach: an almost-infamous idea that financiers could scry the direction of the U.S. economy by tabulating how many cars obscure the blacktops of the nation’s big-box retailer.

Planet Labs tends to focus on different situations. As recently as last summer, it was a “unicorn,” valued at more than $1 billion. And unlike Terra Bella, which has leased out some of its manufacturing, Planet Labs still builds all of its extra-large CubeSats in its South of Market headquarters in San Francisco. Next month, the company will begin constructing 120 of them in a six-week span, the fastest manufacture of satellites in history, according to Schingler.

Its satellites are a better fit for observing land use: the health and types of agricultural crops, the extent of logging and deforestation, the availability of water and the plumpness of reservoirs. (You can still often discern car and truck-sized objects in its photos.) Last year, it started giving imagery of newsworthy areas, like the Syria-Turkey border, to outlets like The New York Times.

 

“If you tell me precisely what it is a machine cannot do, then I can always make a machine which will do just that.” – John von Neumann

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


For Your Consideration: Aphantasia, Learned Resilience, Other People’s Problems, Pirated Science

1. Aphantasia: How It Feels To Be Blind In Your Mind

“I just learned something about you and it is blowing my goddamned mind. This is not a joke. It is not “blowing my mind” a la BuzzFeed’s “8 Things You Won’t Believe About Tarantulas.” It is, I think, as close to an honest-to-goodness revelation as I will ever live in the flesh.

Here it is: You can visualize things in your mind.

If I tell you to imagine a beach, you can picture the golden sand and turquoise waves. If I ask for a red triangle, your mind gets to drawing. And mom’s face? Of course. You experience this differently, sure. Some of you see a photorealistic beach, others a shadowy cartoon. Some of you can make it up, others only “see” a beach they’ve visited. Some of you have to work harder to paint the canvas. Some of you can’t hang onto the canvas for long. But nearly all of you have a canvas.

I don’t. I have never visualized anything in my entire life. I can’t “see” my father’s face or a bouncing blue ball, my childhood bedroom or the run I went on ten minutes ago. I thought “counting sheep” was a metaphor. I’m 30 years old and I never knew a human could do any of this. And it is blowing my goddamned mind.

If you tell me to imagine a beach, I ruminate on the “concept” of a beach. I know there’s sand. I know there’s water. I know there’s a sun, maybe a lifeguard. I know facts about beaches. I know a beach when I see it, and I can do verbal gymnastics with the word itself.

But I cannot flash to beaches I’ve visited. I have no visual, audio, emotional or otherwise sensory experience. I have no capacity to create any kind of mental image of a beach, whether I close my eyes or open them, whether I’m reading the word in a book or concentrating on the idea for hours at a time—or whether I’m standing on the beach itself.

And I grew up in Miami.

This is how it’s always been for me, and this is how I thought it was for you. Then a “Related Article” link on Facebook led me to this bombshell in The New York Times. The piece unearths, with great curiosity, the mystery of a 65-year-old man who lost his ability to form mental images after a surgery.

What do you mean “lost” his ability? I thought. Shouldn’t we be amazed he ever had that ability?

Neurologists at the University of Exeter in England showed the man a photo. Who is that? Tony Blair, of course. Brain scans showed the visual sectors of his brain lighting up.

Then they removed the photo and asked him to imagine Tony Blair. The man knew characteristics—his eye color, his hair—but he could not “see” the image in his mind’s eye. Brain scans showed the visual sectors didn’t activate this time. In fMRIs of other men, many of the same sectors activated whether the subjects were looking at a photo or simply imagining one.”

2. How People Learn to Become Resilient

One of the central elements of resilience, Bonanno has found, is perception: Do you conceptualize an event as traumatic, or as an opportunity to learn and grow? “Events are not traumatic until we experience them as traumatic,” Bonanno told me, in December. “To call something a ‘traumatic event’ belies that fact.” He has coined a different term: PTE, or potentially traumatic event, which he argues is more accurate. The theory is straightforward. Every frightening event, no matter how negative it might seem from the sidelines, has the potential to be traumatic or not to the person experiencing it. (Bonanno focusses on acute negative events, where we may be seriously harmed; others who study resilience, including Garmezy and Werner, look more broadly.) Take something as terrible as the surprising death of a close friend: you might be sad, but if you can find a way to construe that event as filled with meaning—perhaps it leads to greater awareness of a certain disease, say, or to closer ties with the community—then it may not be seen as a trauma. (Indeed, Werner found that resilient individuals were far more likely to report having sources of spiritual and religious support than those who weren’t.) The experience isn’t inherent in the event; it resides in the event’s psychological construal.

It’s for this reason, Bonanno told me, that “stressful” or “traumatic” events in and of themselves don’t have much predictive power when it comes to life outcomes. “The prospective epidemiological data shows that exposure to potentially traumatic events does not predict later functioning,” he said. “It’s only predictive if there’s a negative response.” In other words, living through adversity, be it endemic to your environment or an acute negative event, doesn’t guarantee that you’ll suffer going forward. What matters is whether that adversity becomes traumatizing.

The good news is that positive construal can be taught. “We can make ourselves more or less vulnerable by how we think about things,” Bonanno said. In research at Columbia, the neuroscientist Kevin Ochsner has shown that teaching people to think of stimuli in different ways—to reframe them in positive terms when the initial response is negative, or in a less emotional way when the initial response is emotionally “hot”—changes how they experience and react to the stimulus. You can train people to better regulate their emotions, and the training seems to have lasting effects.

3. The Reductive Seduction of Other People’s Problems

The “reductive seduction” is not malicious, but it can be reckless. For two reasons. First, it’s dangerous for the people whose problems you’ve mistakenly diagnosed as easily solvable. There is real fallout when well-intentioned people attempt to solve problems without acknowledging the underlying complexity.

There are so many examples. As David Bornstein wrote in The New York Times, over four decades of Westerners working on clean water has led to “billions of dollars worth of broken wells and pumps. Many of them functioned for less than two years.”

One classic example: in 2006, the U.S. government, The Clinton Foundation, The Case Foundation, and others pledged $16.4 million to PlayPump, essentially a merry-go-round pump that produced safe drinking water. Despite being touted as the (fun!) answer to the developing world’s water woes, by 2007, one-quarter of the pumps in Zambia alone were in disrepair. It was later estimated that children would need to “play” for 27 hours a day to produce the water PlayPump promised.

We are easily seduced by aid projects that promise play. The SOCCKET, an energy-generating soccer ball, made a splash in 2011 when it raised $92,296 on Kickstarter. Three short years later, the company that created it wrote to its backers: “Most of you received an incredibly underwhelming product with a slew of manufacturing and quality control errors… In summary, we totally f*#ked up this Kickstarter campaign.”

Reading their surprisingly candid mea culpa, I couldn’t help but wonder where the equivalent message was to the kids in energy-starved areas whose high hopes were darkened by a defunct ball.

In some cases, the reductive seduction can actively cause harm. In its early years, TOMS Shoes — which has become infamous for its “buy one give one” business model, wherein they give a pair of shoes for every one sold — donated American-made shoes, which put local shoe factory workers out of jobs (they’ve since changed their supply chain).

Some development workers even have an acronym that they use to describe these initiatives: SWEDOW (stuff we don’t want). AIDWATCH, a watchdog development blog, created a handy flow chart that helps do-gooders reality-check their altruistic instincts. It begins with the simplest of questions — “Is the stuff needed?” — and flows down to more sophisticated questions like, “Will buying locally cause shortages or other disruptions?”

4. Why one woman stole 50 million academic papers — and made them all free to read

Many academic journals are extremely expensive. Want to read just one article? That could cost you around $30. The best way to access academic papers is through universities or libraries. But those institutions can pay millions of dollars a year to subscribe to a comprehensive collection.

Alexandra Elbakyan has had enough.

Elbakyan is a Russia-based neuroscientist turned academic Robin Hood. In 2011 she founded the website Sci-Hub, which has grown to host some 50 million academic papers — Elbakyan claims this is nearly all the paywalled scientific knowledge that exists in the world. These papers are free for anyone to view and download.

For students and researchers around the globe who can’t afford academic journals, Elbakyan is a hero. For academic publishers that have historically been shielded from competition, she’s a villain.

Either way, what she’s doing is most definitely illegal.

Last year, leading journal publisher Elsevier took action against Sci-Hub, claiming it violated US copyright laws and the Computer Fraud and Abuse Act, which prohibits the fraudulent access of computer systems. In October, a New York district court ordered that the site be taken down. Elbakyan was unfazed. Soon after, in November, Sci-Hub reemerged with a new overseas domain.

This story is bigger than a single court ruling. It’s a new front in the academic publishing wars. What’s at stake is the question of who has access to scientific knowledge: wealthy institutions, or anyone with an internet connection?

If Sci-Hub wins, the age of academic paywalls may effectively be over.

“No man ever looks at the world with pristine eyes. He sees it edited by a definite set of customs and institutions and ways of thinking.” – Ruth Benedict

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


For Your Consideration: Laws of Life, Art of (Cyber)war, The Minecraft Generation, and Self or Selfie

Time to reboot the newsletter. I’ve been meaning to start organizing it again for months but haven’t made the time. Not sure what interrupted the flow, especially since I had a lot of encouragement from friends and family. One friend in particular used to encourage me to publish (or re-publish) the things that interested me, and to write even when it felt like an echo into the void. Miss you, dude…

1. Jeremy England, the Man Who May One-Up Darwin

In town to give a lecture, the Harvard grad and Rhodes scholar speaks quickly, his voice rising a few pitches in tone, his long-fingered hands making sudden jerks when he’s excited. He’s skinny, with a long face, scraggly beard and carelessly groomed mop of sandy brown hair — what you might expect from a theoretical physicist. But then there’s the street-style Adidas on his feet and the kippah atop his head. And the fact that this scientist also talks a lot about God.

The 101 version of his big idea is this: Under the right conditions, a random group of atoms will self-organize, unbidden, to more effectively use energy. Over time and with just the right amount of, say, sunlight, a cluster of atoms could come remarkably close to what we call life. In fact, here’s a thought: Some things we consider inanimate actually may already be “alive.” It all depends on how we define life, something England’s work might prompt us to reconsider. “People think of the origin of life as being a rare process,” says Vijay Pande, a Stanford chemistry professor. “Jeremy’s proposal makes life a consequence of physical laws, not something random.”

England’s idea may sound strange, even incredible, but it’s drawn the attention of an impressive posse of high-level academics. After all, while Darwinism may explain evolution and the complex world we live in today, it doesn’t account for the onset of intelligent beings. England’s insistence on probing for the step that preceded all of our current assumptions about life is what makes him stand out, says Carl Franck, a Cornell physics professor, who’s been following England’s work closely. “Every 30 years or so we experience these gigantic steps forward,” Franck says. “We’re due for one. And this might be it.”

And all from a modern Orthodox Jew with fancy sneakers.

2. The New Art Of War: How trolls, hackers and spies are rewriting the rules of conflict

While there is no international law that directly refers to the ultra-modern concept of cyber warfare, there is plenty that applies. So the CCDCOE assembled a panel of international legal experts to go through this existing law and show how it applies to cyber warfare. This formed the basis of the Tallinn Manual and the 95 so-called ‘black letter rules’ it contains (so named because that’s how they appear in the text).

Through these rules the manual attempts to define some of the basics of cyber warfare. At the most fundamental level, the rules state that an online attack on a state can, in certain circumstances, be the equivalent of an armed attack. It also lays out that such an attack is against international law, and that a state attacked in such a way has the right to hit back.

Other rules the manual spells out: don’t target civilians or launch indiscriminate attacks that could cripple civilian infrastructure. While many of these sorts of rules are well understood when it comes to standard warfare, setting them out in the context of digital warfare was groundbreaking.

While the manual argues that a cyber attack can be considered to be the equivalent of an armed attack if it causes physical harm to people or property, other attacks can also be considered a use of force depending on their severity or impact. For example, breaking into a military system would be more likely to be seen as serious, as opposed to hacking into a small business. In contrast, cyber attacks that generate “mere inconvenience or irritation” would never be considered to be a use of force.

The manual also delves into some of the trickier questions of cyber war: would Country A be justified in launching a pre-emptive military strike against Country B if it knew Country B planned to blow up Country A’s main oil pipeline by hacking the microcontrollers managing its pipeline pressure? (Answer: probably yes.)

The manual even considers the legality of some scenarios verging on the science-fictional.

If an army hacked into and took control of enemy drones, would those drones have to be grounded and marked with the capturer’s insignia before being allowed to carry out reconnaissance flights? (Answer: maybe.)

But what’s striking is that the Tallinn Manual sets the rules for a war that hasn’t been fought yet.

3. The Minecraft Generation

Minecraft is an incredibly complex game, but it’s also — at first — inscrutable. When you begin, no pop-ups explain what to do; there isn’t even a “help” section. You just have to figure things out yourself. (The exceptions are the Xbox and PlayStation versions, which in December added tutorials.) This unwelcoming air contrasts with most large games these days, which tend to come with elaborate training sessions on how to move, how to aim, how to shoot. In Minecraft, nothing explains that skeletons will kill you, or that if you dig deep enough you might hit lava (which will also kill you), or even that you can craft a pickax.

This “you’re on your own” ethos resulted from early financial limitations: Working alone, Persson had no budget to design tutorials. That omission turned out to be an inadvertent stroke of genius, however, because it engendered a significant feature of Minecraft culture, which is that new players have to learn how to play. Minecraft, as the novelist and technology writer Robin Sloan has observed, is “a game about secret knowledge.” So like many modern mysteries, it has inspired extensive information-sharing. Players excitedly pass along tips or strategies at school. They post their discoveries in forums and detail them on wikis. (The biggest one, hosted at the site Gamepedia, has nearly 5,000 articles; its entry on Minecraft’s “horses,” for instance, is about 3,600 words long.) Around 2011, publishers began issuing handbooks and strategy guides for the game, which became runaway best sellers; one book on redstone has outsold literary hits like “The Goldfinch,” by Donna Tartt.

“In Minecraft, knowledge becomes social currency,” says Michael Dezuanni, an associate professor of digital media at Queensland University of Technology in Australia. Dezuanni has studied how middle-school girls play the game, watching as they engaged in nuanced, Talmudic breakdowns of a particular creation. This is, he realized, a significant part of the game’s draw: It offers many opportunities to display expertise, when you uncover a new technique or strategy and share it with peers.

The single biggest tool for learning Minecraft lore is YouTube. The site now has more than 70 million Minecraft videos, many of which are explicitly tutorial. To make a video, players use “screencasting” software (some of which is free, some not) that records what’s happening on-screen while they play; they usually narrate their activity in voice-over. The problems and challenges you face in Minecraft are, as they tend to be in construction or architecture, visual and three-dimensional. This means, as many players told me, that video demonstrations have a particularly powerful explanatory force: It’s easiest to learn something by seeing someone else do it. In this sense, the game points to the increasing role of video as a rhetorical tool. (“Minecraft” is the second-most-searched-for term on YouTube, after “music.”)

4. Saving the Self in the Age of the Selfie

Consider Erica, a full-time college student. The first thing she does when she wakes up in the morning is reach for her smartphone. She checks texts that came in while she slept. Then she scans Facebook, Snapchat, Tumblr, Instagram, and Twitter to see “what everybody else is doing.” At breakfast, she opens her laptop and goes to Spotify and her various email accounts. Once she gets to campus, Erica confronts more screen time: PowerPoints and online assignments, academic content to which she dutifully attends (she’s an A student). Throughout the day, she checks in with social media roughly every 10 minutes, even during class. “It’s a little overwhelming,” she says, “but you don’t want to feel left out.”

We’ve been worried about this type of situation for thousands of years. Socrates, for one, fretted that the written word would compromise our ability to retell stories. Such a radical shift in communication, he argued in Phaedrus, would favor cheap symbols over actual memories, ease of conveyance over inner depth. Philosophers have pondered the effect of information technology on human identity ever since. But perhaps the most trenchant modern expression of Socrates’ nascent technophobia comes from the 20th-century German philosopher Martin Heidegger, whose essays on the subject—notably “The Question Concerning Technology” (1954)—established a framework for scrutinizing our present situation.

Heidegger’s take on technology was dire. He believed that it constricted our view of the world by reducing all experience to the raw material of its operation. To prevent “an oblivion of being,” Heidegger urged us to seek solace in nontechnological space. He never offered prescriptive examples of exactly how to do this, but as the scholar Howard Eiland explains, it required seeing the commonplace as alien, or finding “an essential strangeness in … familiarity.” Easier said than done. Hindering the effort in Heidegger’s time was the fact that technology was already, as the contemporary political philosopher Mark Blitz puts it, “an event to which we belong.” In this view, one that certainly befits today’s digital communication, technology infuses real-world experience the way water mixes with water, making it nearly impossible to separate the human and technological perspectives, to find weirdness in the familiar. Such a blending means that, according to Blitz, technology’s domination “makes us forget our understanding of ourselves.”

The only hope for preserving a non-technological haven—and it was and remains a distant hope—was to cultivate what Heidegger called “nearness.” Nearness is a mental island on which we can stand and affirm that the phenomena we experience both embody and transcend technology. Consider it a privileged ontological stance, a way of knowing the world through a special kind of wisdom or point of view. Heidegger’s implicit hope was that the human ability to draw a distinction between technological and nontechnological perception would release us from “the stultified compulsion to push on blindly with technology.”

 

 

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.
