For Your Consideration: Laws of Life, Art of (Cyber)war, The Minecraft Generation, and Self or Selfie

Time to reboot the newsletter. I’ve been meaning to start organizing it again for months but haven’t made the time. Not sure what interrupted the flow, especially when I had a lot of encouragement from friends and family. One friend in particular used to encourage me on this: to publish (or re-publish) the things that interested me, and to write even when I felt like it was an echo into the void. Miss you, dude…

1. Jeremy England, the Man Who May One-Up Darwin

In town to give a lecture, the Harvard grad and Rhodes scholar speaks quickly, his voice rising a few pitches in tone, his long-fingered hands making sudden jerks when he’s excited. He’s skinny, with a long face, scraggly beard and carelessly groomed mop of sandy brown hair — what you might expect from a theoretical physicist. But then there’s the street-style Adidas on his feet and the kippah atop his head. And the fact that this scientist also talks a lot about God.

The 101 version of his big idea is this: Under the right conditions, a random group of atoms will self-organize, unbidden, to more effectively use energy. Over time and with just the right amount of, say, sunlight, a cluster of atoms could come remarkably close to what we call life. In fact, here’s a thought: Some things we consider inanimate actually may already be “alive.” It all depends on how we define life, something England’s work might prompt us to reconsider. “People think of the origin of life as being a rare process,” says Vijay Pande, a Stanford chemistry professor. “Jeremy’s proposal makes life a consequence of physical laws, not something random.”

England’s idea may sound strange, even incredible, but it’s drawn the attention of an impressive posse of high-level academics. After all, while Darwinism may explain evolution and the complex world we live in today, it doesn’t account for the onset of intelligent beings. England’s insistence on probing for the step that preceded all of our current assumptions about life is what makes him stand out, says Carl Franck, a Cornell physics professor, who’s been following England’s work closely. “Every 30 years or so we experience these gigantic steps forward,” Franck says. “We’re due for one. And this might be it.”

And all from a modern Orthodox Jew with fancy sneakers.

2. The New Art Of War: How trolls, hackers and spies are rewriting the rules of conflict

While there is no international law that directly refers to the ultra-modern concept of cyber warfare, there is plenty that applies. So the CCDCOE (the NATO Cooperative Cyber Defence Centre of Excellence) assembled a panel of international legal experts to go through this existing law and show how it applies to cyber warfare. This work formed the basis of the Tallinn Manual and the 95 so-called ‘black letter rules’ it contains (so named because that’s how they appear in the text).

Through these rules the manual attempts to define some of the basics of cyber warfare. At the most fundamental level, the rules state that an online attack on a state can, in certain circumstances, be the equivalent of an armed attack. It also lays out that such an attack is against international law, and that a state attacked in such a way has the right to hit back.

Other rules the manual spells out: don’t target civilians or launch indiscriminate attacks that could cripple civilian infrastructure. While many of these sorts of rules are well understood when it comes to standard warfare, setting them out in the context of digital warfare was groundbreaking.

While the manual argues that a cyber attack can be considered the equivalent of an armed attack if it causes physical harm to people or property, other attacks can also be considered a use of force depending on their severity or impact. For example, breaking into a military system would be more likely to be seen as a serious use of force than hacking into a small business. In contrast, cyber attacks that generate “mere inconvenience or irritation” would never be considered a use of force.
The manual also delves into some of the trickier questions of cyber war: would Country A be justified in launching a pre-emptive military strike against a Country B if it knew Country B planned to blow up Country A’s main oil pipeline by hacking the microcontrollers managing its pipeline pressure? (Answer: probably yes.)

The manual even considers the legality of some scenarios verging on the science-fictional.

If an army hacked into and took control of enemy drones, would those drones have to be grounded and marked with the capturer’s insignia before being allowed to carry out reconnaissance flights? (Answer: maybe.)

But what’s striking is that the Tallinn Manual sets the rules for a war that hasn’t been fought yet.

3. The Minecraft Generation

Minecraft is an incredibly complex game, but it’s also — at first — inscrutable. When you begin, no pop-ups explain what to do; there isn’t even a “help” section. You just have to figure things out yourself. (The exceptions are the Xbox and PlayStation versions, which in December added tutorials.) This unwelcoming air contrasts with most large games these days, which tend to come with elaborate training sessions on how to move, how to aim, how to shoot. In Minecraft, nothing explains that skeletons will kill you, or that if you dig deep enough you might hit lava (which will also kill you), or even that you can craft a pickax.

This “you’re on your own” ethos resulted from early financial limitations: Working alone, Persson had no budget to design tutorials. That omission turned out to be an inadvertent stroke of genius, however, because it engendered a significant feature of Minecraft culture, which is that new players have to learn how to play. Minecraft, as the novelist and technology writer Robin Sloan has observed, is “a game about secret knowledge.” So like many modern mysteries, it has inspired extensive information-sharing. Players excitedly pass along tips or strategies at school. They post their discoveries in forums and detail them on wikis. (The biggest one, hosted at the site Gamepedia, has nearly 5,000 articles; its entry on Minecraft’s “horses,” for instance, is about 3,600 words long.) Around 2011, publishers began issuing handbooks and strategy guides for the game, which became runaway best sellers; one book on redstone has outsold literary hits like “The Goldfinch,” by Donna Tartt.

“In Minecraft, knowledge becomes social currency,” says Michael Dezuanni, an associate professor of digital media at Queensland University of Technology in Australia. Dezuanni has studied how middle-school girls play the game, watching as they engaged in nuanced, Talmudic breakdowns of a particular creation. This is, he realized, a significant part of the game’s draw: It offers many opportunities to display expertise, when you uncover a new technique or strategy and share it with peers.

The single biggest tool for learning Minecraft lore is YouTube. The site now has more than 70 million Minecraft videos, many of which are explicitly tutorial. To make a video, players use “screencasting” software (some of which is free, some not) that records what’s happening on-screen while they play; they usually narrate their activity in voice-over. The problems and challenges you face in Minecraft are, as they tend to be in construction or architecture, visual and three-dimensional. This means, as many players told me, that video demonstrations have a particularly powerful explanatory force: It’s easiest to learn something by seeing someone else do it. In this sense, the game points to the increasing role of video as a rhetorical tool. (“Minecraft” is the second-most-searched-for term on YouTube, after “music.”)

4. Saving the Self in the Age of the Selfie

Consider Erica, a full-time college student. The first thing she does when she wakes up in the morning is reach for her smartphone. She checks texts that came in while she slept. Then she scans Facebook, Snapchat, Tumblr, Instagram, and Twitter to see “what everybody else is doing.” At breakfast, she opens her laptop and goes to Spotify and her various email accounts. Once she gets to campus, Erica confronts more screen time: PowerPoints and online assignments, academic content to which she dutifully attends (she’s an A student). Throughout the day, she checks in with social media roughly every 10 minutes, even during class. “It’s a little overwhelming,” she says, “but you don’t want to feel left out.”

We’ve been worried about this type of situation for thousands of years. Socrates, for one, fretted that the written word would compromise our ability to retell stories. Such a radical shift in communication, he argued in Phaedrus, would favor cheap symbols over actual memories, ease of conveyance over inner depth. Philosophers have pondered the effect of information technology on human identity ever since. But perhaps the most trenchant modern expression of Socrates’ nascent technophobia comes from the 20th-century German philosopher Martin Heidegger, whose essays on the subject—notably “The Question Concerning Technology” (1954)—established a framework for scrutinizing our present situation.

Heidegger’s take on technology was dire. He believed that it constricted our view of the world by reducing all experience to the raw material of its operation. To prevent “an oblivion of being,” Heidegger urged us to seek solace in nontechnological space. He never offered prescriptive examples of exactly how to do this, but as the scholar Howard Eiland explains, it required seeing the commonplace as alien, or finding “an essential strangeness in … familiarity.” Easier said than done. Hindering the effort in Heidegger’s time was the fact that technology was already, as the contemporary political philosopher Mark Blitz puts it, “an event to which we belong.” In this view, one that certainly befits today’s digital communication, technology infuses real-world experience the way water mixes with water, making it nearly impossible to separate the human and technological perspectives, to find weirdness in the familiar. Such a blending means that, according to Blitz, technology’s domination “makes us forget our understanding of ourselves.”

The only hope for preserving a non-technological haven—and it was and remains a distant hope—was to cultivate what Heidegger called “nearness.” Nearness is a mental island on which we can stand and affirm that the phenomena we experience both embody and transcend technology. Consider it a privileged ontological stance, a way of knowing the world through a special kind of wisdom or point of view. Heidegger’s implicit hope was that the human ability to draw a distinction between technological and nontechnological perception would release us from “the stultified compulsion to push on blindly with technology.”


If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


Filed under Newsletter, Random, Writing

Remembering Jake Brewer



You should be so lucky in your life to meet someone who makes you want to strive to be the best possible version of yourself. That they can do that with invisible grace and little more than a smile and a few words of encouragement. That they can do that by being an example of it themselves. That they can continue to exert that memory and influence over thousands of miles and infrequent contact. You should be so lucky.


I have been that lucky.


In 2006 I met Jake in the airport lobby in Hong Kong after we had deplaned for a layover on the way to Bali. I had my new camera out and was playing with it, and he approached me to talk about photography. We started talking about cameras and taking pictures and discovered that we were on the way to the same conference in Bali, and that he had just moved to Portland to head up the Idealist.org office there.


At the conference he, Summer, and I spent a lot of time together and became fast friends. In Ubud we ate at the same pizza joint after sessions and danced at the karaoke night, where he did an amazing rendition of “Mountain Music” by Alabama before breakdancing with the group. We got liter bottles of Bintang and drank them on the side of the road somewhere in a small group. I remember having trouble with the caps and having to figure out a way to open the bottles with a carabiner. We hid the bottles every time a car drove by, and probably disturbed a sleeping neighborhood.


Back in Portland we got together whenever we were both in town; we were both traveling a lot for work then.


I remember many coffees and Wednesday night drinks with Reno where we discussed the world and technology and life and everything else. If only I had transcripts of those conversations…


Jake was someone who accepted you for who you were. He might have some thoughts on it, and you could have a conversation about it, but he still accepted you. Loved you might be a more appropriate sentiment.
I remember a group of Campus Christians sitting at a table next to us at Powell’s Books and us overhearing some comments they were making. I left to go to the bathroom only to come back to find Jake holding court with the whole group having created a semicircle around him. We discussed heady theological/philosophical stuff for an hour before everyone parted happily.


Jake created conversations, created engagement, translated between groups, built bridges, and sometimes ferried people across those bridges.


I remember sitting in my ridiculous van in my driveway having a long talk about purpose and fear and life goals, as he was preparing to move to DC and I was preparing to start my round-the-world adventure.


He moved to DC and I got to check in with him every so often as his trajectory sped upward at an ever-increasing pace, his sphere of influence growing with each change in career. From Idealist when I met him, to the Energy Action Coalition, to the Sunlight Foundation, to Change.org, and finally, as should have been expected with Jake, a position advising the President of the United States.


I remember staying at his bottom-floor apartment in DC in 2007, and him waking me up in the living room with the “Flower Duet” at full volume, conducting it and making me wait for just the right parts before throwing his hands up in enthusiasm. I wanted to see what Super Tuesday was like in DC. He showed me around and was an amazing host.


I remember a long night out in DC where we were meeting up with a group at a bachelorette party. I believe I may have even awkwardly and drunkenly bought the group a couple bottles of champagne.


We returned to his house later in the night, necessarily via taxi, and had to have pizza, but were somewhat unable to acquire it ourselves. MK arrived shortly thereafter. I remember pleading with her: “We must pizza, can you help us?” or something like that. She took care of us with her usual grace and good humor.


In 2009 he flew out to be a photographer for my wedding to Summer. He was her wing-man through makeup and hair and documented the process, taking so many wonderful pictures that the batteries ejected from the flash were too hot to touch more than once.


In 2011 I had the honor of meeting his whole family and soon-to-be in-laws at his wedding to the amazing Mary Katherine. I’ve never had so much fun with a family in my entire life. Summer and I were honored beyond words to be included.


I remember his open heart and his wise nature. His optimism and seemingly boundless ability to connect people, to see the best version of them through their own eyes.


I remember many quick calls and text exchanges. Each standing out on their own now.


As the legion of people that loved Jake descend on Washington DC to pay their last respects to a great man whose time was cut too short, I think we will see something Jake himself would have loved. So many of the people he loved and respected together in one place connecting or re-connecting, talking, exchanging ideas, memories, stories.


Jake is not done changing the world. The people he has touched will carry his ideas and his intentions on to the best of their ability. He has connected me to amazing people who I will keep close. His memory will continue to make me strive to be a better person and to effect positive change. I’ll use his example to teach my children to be better people, and with that he will continue to change the world.


You should be so lucky. I have been so lucky. We who were able to orbit his star, even if for a short time.


Qui Moede quoted Ralph Waldo Emerson when remembering Jake, and it was so perfect I felt I needed to as well.


“To laugh often and much; To win the respect of intelligent people and the affection of children; To earn the appreciation of honest critics and endure the betrayal of false friends; To appreciate beauty, to find the best in others; To leave the world a bit better, whether by a healthy child, a garden patch, or a redeemed social condition; To know even one life has breathed easier because you have lived. This is to have succeeded.”


Jake succeeded, more than anyone I’ve ever known. And I’m really going to miss him.

Courtesy of Charlotte Hill


Filed under Random

For Your Consideration: IoT gets political, Podcasts Save NPR, End of Work, Ultralympics

1. Politics won’t know what hit it

This might sound unlikely at first, and it won’t be felt right away. But it’s important to realize that when we look at the Internet of Things, we’re seeing a technology, or rather a technological system, that will not just pose challenges for governments, but change them completely. In all of history, there has never been anything like the constant and intimate feedback loop that the Internet of Things is creating between citizens and whoever is on the other end of their data.

The conclusion I couldn’t escape is that the Internet of Things will be the most powerful political tool we’ve ever created.  For democracies, the Internet of Things will transform how we as voters affect government — and how government touches (and tracks) our lives. Authoritarian governments will have their own uses for it, some of which are already appearing. And for everyone, both citizens and leaders, it’s important to realize where it could head long before we get there.

This next Internet is going to make Big Data truly gargantuan, with real consequences for our political lives. Instead of small survey samples with noticeable error margins and carefully worded questions, the device networks will generate many details about our lives — all the time. The end result will not be a stream of data but a tsunami of information that will offer governments and politicians overwhelming evidence about our real-world behavior, not just our attitudes and aspirations.

From a political perspective, this is a radical change.

The basis of a democracy is voluntary civic engagement: A person’s participation in setting government policy is intentional and a matter of choice. In democracies, citizens express their preference through activism and voting.  Historically, governments and politicians eager to do a good job interpreting citizen intent also relied on opinion polls, conversations with civic groups, social science research, and huge record-keeping projects like the census. Politicians have long tried to interpret citizen intent and manipulate it through rhetoric and campaign tricks.

But pervasive device networks will change the rules, making voluntary conversations among elected officials, political parties, lobbyists and civic groups less important than the plethora of near-perfect data generated by the objects around us. Occasional activism and petition-signing will be overshadowed by volumes of behavioral information cleverly extracted from the Internet of Things.

2. Podcasts Are Saving NPR
Seriously, so many good podcasts. Marketplace, Invisibilia, This American Life, RadioLab, Note to Self, Serial, Actuality, On the Media….

For the first time in six years, National Public Radio, better known as NPR, is on track to break even financially thanks in part to the rising popularity of podcasts.

While the nonprofit’s stations are primarily dependent on federal funding, corporate sponsorship, and individual donations to stay on the air, the company has suffered from deficits and leadership changes in the past few years, leading to cutbacks and layoffs of its talented staff. But not this year. Along with some steps to reduce costs and develop new strategies, the Internet is helping to save the radio star.

NPR president and CEO Jarl Mohn first shared the news with The Associated Press. A longtime radio and TV executive, Mohn told the AP that podcasts are attracting younger listeners to the network, but not because it’s altering its message—just its medium.

“We don’t have to change the essence of who we are to get a younger audience. We just need to tell great stories,” Mohn told the AP.

This is a pretty big deal—NPR, founded in 1970, stands as one of the great American symbols of old media, along with network television and print newspapers. But even as new media upstarts have rapidly accumulated millions of dollars in venture capital to “disrupt” those stodgy incumbents, NPR has held steady, making inroads with younger audiences and new revenue opportunities. For NPR, evolving with listeners’ changing choice of platform has allowed the company not just to adapt to the new digital media era, but to thrive, at least for now.

3. A World Without Work
No TL;DR. It’s a long piece worth reading.

Futurists and science-fiction writers have at times looked forward to machines’ workplace takeover with a kind of giddy excitement, imagining the banishment of drudgery and its replacement by expansive leisure and almost limitless personal freedom. And make no mistake: if the capabilities of computers continue to multiply while the price of computing continues to decline, that will mean a great many of life’s necessities and luxuries will become ever cheaper, and it will mean great wealth—at least when aggregated up to the level of the national economy.

But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen. If John Russo is right, then saving work is more important than saving any particular job. Industriousness has served as America’s unofficial religion since its founding. The sanctity and preeminence of work lie at the heart of the country’s politics, economics, and social interactions. What might happen if work goes away?

The U.S. labor force has been shaped by millennia of technological progress. Agricultural technology birthed the farming industry, the industrial revolution moved people into factories, and then globalization and automation moved them back out, giving rise to a nation of services. But throughout these reshufflings, the total number of jobs has always increased. What may be looming is something different: an era of technological unemployment, in which computer scientists and software engineers essentially invent us out of work, and the total number of jobs declines steadily and permanently.

This fear is not new. The hope that machines might free us from toil has always been intertwined with the fear that they will rob us of our agency. In the midst of the Great Depression, the economist John Maynard Keynes forecast that technological progress might allow a 15-hour workweek, and abundant leisure, by 2030. But around the same time, President Herbert Hoover received a letter warning that industrial technology was a “Frankenstein monster” that threatened to upend manufacturing, “devouring our civilization.” (The letter came from the mayor of Palo Alto, of all places.) In 1962, President John F. Kennedy said, “If men have the talent to invent new machines that put men out of work, they have the talent to put those men back to work.” But two years later, a committee of scientists and social activists sent an open letter to President Lyndon B. Johnson arguing that “the cybernation revolution” would create “a separate nation of the poor, the unskilled, the jobless,” who would be unable either to find work or to afford life’s necessities.

4. Cybathlon 2016
Ok… The name is not great but the concept is solid. They probably only chose that name because I own the name Ultralympics. Right? Right… Besides, I would suggest that drugs are technology as well.

The Olympic Games are a competition for the fittest and most talented able-bodied humans on Earth. The Paralympic Games are a competition for the fittest and most talented humans on Earth with physical and intellectual disabilities. To compete, paralympians take advantage of assistive systems, some of which are becoming increasingly cybernetic, combining traditional prosthetics with robotics. ETH Zurich and the Swiss National Competence Center of Research in Robotics have an idea of where we can take this.

It’s already the case that the Olympics are heavily influenced by technology. Aside from the 2012 controversy over whether paralympic sprinter Oscar Pistorius had an advantage in his carbon-fiber prosthetic legs, there are continual incremental advances in drag-reducing suits for swimmers and runners. Any event which requires hardware (shooting, archery, cycling, and so forth) is only going to become more heavily skewed towards tech, as human performance improvements will inevitably be eclipsed by technology, simply because it’s a lot easier to improve technology than it is to improve humans.

The athletes who have benefited the most from technology are arguably paralympians, who have a significantly heavier dependence on tech, and therefore more to gain as tech improves. Things like prosthetics are transitioning from passive systems to active ones, capable of sensing a user’s intent (through nerve or brain interfaces) and using motors and actuators to more effectively replace a real limb. The goal right now is to provide capabilities similar to those of a human limb, but eventually we’ll transcend biology, which is part of the reason why we need an entirely new type of competition.

The Cybathlon is a championship for racing pilots with disabilities (i.e. parathletes) who are using advanced assistive devices, including robotic technologies. The competition comprises different disciplines that apply the most modern powered knee prostheses, wearable arm prostheses, powered exoskeletons, powered wheelchairs, electrically stimulated muscles and novel brain-computer interfaces. The assistive devices can include commercially available products provided by companies, but also prototypes developed by research labs. There will be two medals for each competition: one for the pilot, who is driving the device, and one for the provider of the device. The event is organized on behalf of the Swiss National Competence Center of Research in Robotics (NCCR Robotics).

“You can tell whether a man is clever by his answers. You can tell whether a man is wise by his questions.” (Naguib Mahfouz)



If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


Filed under Newsletter

For Your Consideration: Brain/Body Exercise, Stop Trying To Be Happy, LEGO turnaround, A/B Illusion

1. High Intensity (Functional) Exercise and Neurogenesis
“The typical CrossFit box is a defoliated orangutan habitat.”

The modern gym has been deliberately designed to not require any coordination, accuracy, agility, or balance. The attributes of fitness that bind the body and brain together have become the exclusive province of athletes, dancers, and the few lucky children who still climb trees, pop bicycle wheelies, and hang upside down from monkey bars. The stripping-away of coordination, accuracy, agility, and balance from physical culture – from our modern notion of fitness – has made us weaker, because power, the ability to apply maximum force, requires neural circuitry that’s impossible to develop on a pulley cable.

But it’s worse than that. If all we lost in the transition from functional fitness to circuit-trained muscle development was power, we’d be losing something the modern world doesn’t demand. Most of us can live pretty well, in a physical sense, without building huge amounts of physical power.

The problem is, the area of our brain that’s responsible for full-body movement…that’s not all it does. The brain controls movement in three areas, depending on the complexity of the movement. The primary motor cortex, the lowest-level switch box, is responsible for simple movements like shifting the position of your head. Slightly in front of this area is a more sophisticated set of controls for integrated movements, like reaching for an object. In front of this is a third, even more intricate control center called the attention association area. The attention association area is the part of the brain that controls complex movements that involve the entire body. This is where coordination, accuracy, agility, and balance live. That’s what it evolved to do. That’s what it does in animals. When a predator leaps to latch onto a piece of prey and snap its neck, that complex coordinated pounce comes from the attention association area of the predator’s brain. The neural “go signal” to pounce comes from the same place in the animal’s brain that controls the physical execution of the movement.

In human beings, the attention association area, like many parts of the brain, has evolved in a way that transcends its original function. This area that controls complex movements, that generates the “go signal” to execute them, is also the source of human will, goal-setting behavior, and purposeful organization of thought. “To put it bluntly,” writes Andrew Newberg, a neuroscience professor who researches the neural mechanisms of consciousness, “a great part of what one sees with injury to the attention association area is a loss of will and an inability to form intention. If any part of the brain can be said to be the seat of the will or of intentionality, it is certainly the attention association area.”

2. Stop Trying To Be Happy
You have to do the work yourself. Nobody can do it for you.

Happiness is the process of becoming your ideal self

Completing a marathon makes us happier than eating a chocolate cake. Raising a child makes us happier than beating a video game. Starting a small business with friends and struggling to make money makes us happier than buying a new computer.

And the funny thing is that all three of the activities above are exceedingly unpleasant and require setting high expectations and potentially failing to always meet them. Yet, they are some of the most meaningful moments and activities of our lives. They involve pain, struggle, even anger and despair, yet once we’ve done them we look back and get misty-eyed about them.

Because it’s these sorts of activities that allow us to become our ideal selves. It’s the perpetual pursuit of fulfilling our ideal selves that grants us happiness, regardless of superficial pleasures or pain, regardless of positive or negative emotions. This is why some people are happy in war and others are sad at weddings. It’s why some are excited to work and others hate parties. The traits they’re inhabiting don’t align with their ideal selves.

The end results don’t define our ideal selves. It’s not finishing the marathon that makes us happy, it’s achieving a difficult long-term goal that does. It’s not having an awesome kid to show off that makes us happy, but knowing that we gave ourselves up to the growth of another human being. It’s not the prestige and money from the new business that makes us happy, it’s the process of overcoming all odds with people we care about.

3. LEGO turned itself around by analyzing overbearing parents
Also, buy the LEGOs, burn the instructions.

During a session with the photo diaries, for example, the researchers noted that the children’s bedrooms in New Jersey tended to be meticulously designed by the mothers. “They look like they’re from the pages of Elle Décor,” noted one participant. Another child’s bedroom in Los Angeles was suspiciously tidy with a stylish airplane mobile hanging down. “That looks staged,” an anthropologist observed, and the team discussed what that might mean. These were children who were driven everywhere in SUVs with carefully managed after-school activities. The researchers noted that the moms were also “staging” their children’s development. They were trying to shape children who were creative, fun, outgoing, humorous, intelligent, and quiet all at the same time. Throughout the conversation, critical theory from the human sciences provided a framework for the observations. The researchers discussed how these “staged” childhoods resembled Foucault’s “panopticon,” where activities were under surveillance and subject to disciplinary measures. One of the analysts drew a picture with a large circle and a very tiny circle. “This is the space we used to have for playing,” he said, pointing to the large circle, “and this ever-diminishing circle is the space these kids have right now.”

“These kids were bubble-wrapped,” one team member recalled. “Every physical space in their life was curated, managed, or staged by an adult. Whereas children in the past used to find freedom and an appropriate level of danger on the streets, playing on sidewalks throughout the neighborhood or roaming free in the country, these children needed to find their freedom in virtual spaces through online gaming or in imaginary zones (like the box of magic mushrooms).”

An important insight came to the group through the discussion of all of these observations. One role of play for these children was to find pockets of oxygen, away from adult supervision. The group realized that kids were desperate to sneak some element of danger into their lives. If the researchers had used a more linear process—one focused on the properties of the children’s play—the team would never have thought to put poisonous mushrooms and booby traps in the same category. But the nonlinear act of connecting the dots revealed that the underlying phenomenon of both behaviors was the same.

These and other findings led the researchers to identify the key patterns: children play to get oxygen, to understand hierarchy, to achieve mastery at a skill, and to socialize. The patterns were simplified into four categories: under the radar, hierarchy, mastery, and social play.

4. The ethics and morality of the A/B illusion
For more, the authors respond to some questions.

CAN it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent?

The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise.

But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.

Companies — and other powerful actors, including lawmakers, educators and doctors — “experiment” on us without our consent every time they implement a new policy, practice or product without knowing its consequences. When Facebook started, it created a radical new way for people to share emotionally laden information, with unknown effects on their moods. And when OkCupid started, it advised users to go on dates based on an algorithm without knowing whether it worked.

Why does one “experiment” (i.e., introducing a new product) fail to raise ethical concerns, whereas a true scientific experiment (i.e., introducing a variation of the product to determine the comparative safety or efficacy of the original) sets off ethical alarms?

In a forthcoming article in the Colorado Technology Law Journal, one of us (Professor Meyer) calls this the “A/B illusion” — the human tendency to focus on the risk, uncertainty and power asymmetries of running a test that compares A to B, while ignoring those factors when A is simply imposed by itself.

“In looking for people to hire, you look for three qualities: integrity, intelligence, and energy. And if they don’t have the first, the other two will kill you.” – Warren Buffett

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


For Your Consideration: Data Breach and Future Harm, Gov. 2.0, Digital Commons, Let Kids Play

1. Data Theft Today Poses Indefinite Threat of “Future Harm”
FYI. You can get a new Social Security Number for free from the SSA but the follow-on ramifications of doing so are unclear.

Benjamin Nuss was one of the nearly 80 million people whose Social Security number and personal information were compromised in this year’s Anthem data breach. He seems to have taken things in stride, continuing his daily routine of sharing computer time with his brother, eating healthy snacks and making crafts. Benjamin is four years old.

While it may seem trivial to think about the harm a preschooler will suffer from a data breach, the question is not what happens to him now, but what will happen years from now. Data theft poses an indefinite threat of future harm, as birthdate, full name and Social Security number remain a skeleton key of identity in many systems.

Benjamin’s mother, Jennifer Nuss, gave birth while the family had Blue Cross insurance, which was linked to Anthem’s databases. “They sent us a letter saying that Benjamin’s information may have been compromised. All they offered is, ‘We can watch Ben’s credit for you,’” she says. “But you can check that yourself for free.” A stay-at-home mother of two and an accounting student, Nuss is disciplined about family finances and checks her and her husband’s credit records and accounts regularly. “With Benjamin,” she adds, “well, we’re going to have to watch his information forever.”

While data breach victims like Nuss and his adult counterparts face open-ended questions about what lies ahead, the data wars are running hot, with each week seemingly bringing news of vast new breaches, victims and potential victims gripped with anxiety, and debate raging about the vulnerability of companies and government. All the uncertainty is raising thorny legal questions. The Supreme Court is preparing to hear a case that could set new precedent on whether data breach lawsuits can be based on future harm.

[And] data breach victims aren’t only concerned with the financial bottom line. Many are more worried about doing the digital-era equivalent of constantly looking over their shoulder, waiting for someone to appropriate their identity, or dredge up some intimate, haunting secret they thought was long buried. It’s not likely that legislation or the courts can fix that.

2. Inside Obama’s Stealth Startup

Outsiders often make the mistake of perceiving Washington’s technical problems as the result of a dearth of engineering talent. This makes it tempting to frame the current wave of hires from Google and elsewhere as a wartime tactical team moving in to save us from the city’s existing coding barbarians. But this is not quite correct. For one thing, the people Park and Dickerson are luring here aren’t just software engineers; they’re data scientists, user-experience gurus, product managers, and design savants. For another, these people are being matched with government insiders who can advise them on how to deploy private-sector tools like Amazon Web Services, for instance, that have long been considered forbidden within the Beltway, or how the procurement of contractors can be improved. Usually this involves cutting a jungle path through thousands of pages of overgrown government regulations. As Park says, “We need both kinds: people who can hack the technology, as well as people who can hack the bureaucracy.”

The complexity is formidable. If you put your engineer’s hat on, Dickerson says, you can look at government’s approach to tech and decide that it’s pretty much insane. But if you consider it as an anthropologist might (“If you’re studying this alien culture,” he says, “and you ask, Why do they behave so strangely?”), you see that D.C. has developed its dysfunctions for deep, structural reasons. For instance, Washington has plenty of smart people, Dickerson says. But they have been removed from the extraordinary growth—only occurring during the past decade, really—of the handful of West Coast companies that can now manage “planet-scale websites,” as Dickerson puts it.

Above all, there is the inertia of the past. One of the first lessons Dickerson learned about D.C. when he arrived was that the city traditionally conflates the importance of a task with its cost. Healthcare.gov ultimately became an $800 million project, with 55 contracting companies involved. “And of course it didn’t work,” he says. “They set aside hundreds of millions of dollars to build a website because it was a big, important website. But compare that to Twitter, which took three rounds of funding before it got to about the same number of users as Healthcare.gov—8 million to 10 million users. In those three rounds of funding, the whole thing added up to about $60 million.” Dickerson believes that the Healthcare.gov project could have been done with a similarly sized budget. But there wasn’t anyone to insist that the now-well-established Silicon Valley practice of building “agile” software—rolling out a digital product in stages; testing it; improving it; and repeating the process for continuous improvement—would be vastly superior to (and much, much cheaper than) a patchwork of contractors building out a complete and monolithic website. In his Fast Company interview, President Obama remarks that he made a significant mistake in thinking that government could use traditional methods to build something—Healthcare.gov—that had never been built before. “When you’re dealing with IT and software and program design,” the president explains, “it’s a creative process that can’t be treated the same way as a bulk purchase of pencils.”

“My pitch,” Obama says, “is that the tech community is more creative, more innovative, more collaborative and open to new ideas than any sector on earth. But sometimes what’s missing is purpose. To what end are we doing this?” As the president explains, he asks potential recruits, “Is there a way for us to harness this incredible set of tools you’re developing for more than just cooler games or a quicker way for my teenage daughters to send pictures to each other?”

3. The Tragedy of the Digital Commons

“How do you fix a broken system that isn’t yours to repair?” That’s the question that motivated the researchers Lilly Irani and Six Silberman to create Turkopticon, and it’s one that comes up frequently in digital environments dominated by large platforms with hands-off policies. (On social networks like Twitter, for example, harassment is a problem for many users.) Irani and Silberman describe Turkopticon as a “mutual aid for accountability” technology, a system that coordinates peer support to hold others accountable when platforms choose not to step in.

Academics advancing the idea of digital commons have tended to focus on how to prevent or regulate these problems—after they’re identified. In Code and Other Laws of Cyberspace, Larry Lessig describes software design as a kind of regulation separate from top-down policies or community norms. Sixteen years after Lessig’s book, belief in the power of code and social psychology to shape successful online communities is widespread among the design teams who govern our digital lives. Their growing toolbox of design options is detailed in a recent law review article by James Grimmelmann, who covers everything from banning and shaming to reputation and rewards.

What might it mean for digital citizens to play a greater role in the long-term operation of online platforms? In Europe, lawmakers and courts have a history of regulating the details of algorithms like Google search. Another idea is a Magna Carta for “consent of the networked,” according to the journalist and anti-censorship advocate Rebecca MacKinnon. This idea, backed by the web’s creator Tim Berners-Lee, might bind platforms to the consent of their users, even when companies span multiple countries and jurisdictions. One example of this might be the Wikimedia Foundation, which reserves half of its board positions for elected Wikipedians. Wikimedia also leaves many governance details to its community in each of its language groups, like a federal government composed of many states.

Managing a commons is more complex than users versus platforms. In cases like Mechanical Turk, Amazon helps its users hold each other accountable by sharing data with systems like Turkopticon. Perhaps similar data sharing could help researchers and citizen groups audit algorithms from the outside. Nor does this work need to happen entirely outside platforms. Public research like Facebook’s recent study on political bias helps the public understand and debate the state of our shared digital lives.

4. Let the kids learn through play

TWENTY years ago, kids in preschool, kindergarten and even first and second grade spent much of their time playing: building with blocks, drawing or creating imaginary worlds, in their own heads or with classmates. But increasingly, these activities are being abandoned for the teacher-led, didactic instruction typically used in higher grades. In many schools, formal education now starts at age 4 or 5. Without this early start, the thinking goes, kids risk falling behind in crucial subjects such as reading and math, and may never catch up.

The idea seems obvious: Starting sooner means learning more; the early bird catches the worm.

But a growing group of scientists, education researchers and educators say there is little evidence that this approach improves long-term achievement; in fact, it may have the opposite effect, potentially slowing emotional and cognitive development, causing unnecessary stress and perhaps even souring kids’ desire to learn.

The stakes in this debate are considerable. As the skeptics of teacher-led early learning see it, that kind of education will fail to produce people who can discover and innovate, and will merely produce people who are likely to be passive consumers of information, followers rather than inventors. Which kind of citizen do we want for the 21st century?

“Our destiny is frequently met in the very paths we take to avoid it.”  – Jean de La Fontaine

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


For Your Consideration: Education Tech, Russian Troll Farms, Robotic Sewing, Athletic Tourettes

1. Why Technology Alone Won’t Fix Schools

Amplification seems like an obvious idea—all it says is that technology is a tool that augments human power. But, if it’s obvious, it nevertheless has profound consequences that are routinely overlooked. For example, amplification explains why large-scale roll-outs of educational technology rarely result in positive outcomes. In any representative set of schools, some are doing well and others poorly. Introducing computers may result in benefit for some (the ones highlighted in pilot studies), but it distracts the weaker schools from their core mission. On average, the outcome is a wash.

An even bigger problem is that administrators rarely allocate enough resources to adapt curricula or train teachers. Where teachers don’t know how to incorporate digital tools appropriately, there is little capacity for the technology to amplify.

If a private company is failing to make a profit, no one expects that state-of-the-art data centers, better productivity software, and new laptops for all of the employees will turn things around. Yet, that is exactly the logic of so many attempts to fix education with technology.

To wonder what ails American education is to open a Pandora’s box of wicked problems. It could be poverty in early childhood or school districts funded by inadequate property taxes. Maybe it’s poorly designed incentives for teachers or elite flight into the private school system. The truth likely lies in some combination of these factors and more, but the problem is definitely not a lack of computers. Even tech proponents don’t argue that U.S. educational decline was caused by a decline of technology.

2. Troll Farming in Russia – The Agency
When nation states start playing psyops with social media. It gives a whole new flavor to “Manufacturing Consent”.

And the hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. The campaign followed the same pattern of fake news reports and videos, this time under the hashtag #EbolaInAtlanta, which briefly trended in Atlanta. Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

On the same day as the Ebola hoax, a totally different group of accounts began spreading a rumor that an unarmed black woman had been shot to death by police. They all used the hashtag #shockingmurderinatlanta. Here again, the hoax seemed designed to piggyback on real public anxiety; that summer and fall were marked by protests over the shooting of Michael Brown in Ferguson, Mo. In this case, a blurry video purports to show the shooting, as an onlooker narrates. Watching it, I thought I recognized the voice — it sounded the same as the man watching TV in the Columbian Chemicals video, the one in which ISIS supposedly claims responsibility. The accent was unmistakable, if unplaceable, and in both videos he was making a very strained attempt to sound American. Somehow the result was vaguely Australian.

Who was behind all of this? When I stumbled on it last fall, I had an idea. I was already investigating a shadowy organization in St. Petersburg, Russia, that spreads false information on the Internet. It has gone by a few names, but I will refer to it by its best known: the Internet Research Agency. The agency had become known for employing hundreds of Russians to post pro-Kremlin propaganda online under fake identities, including on Twitter, in order to create the illusion of a massive army of supporters; it has often been called a “troll farm.” The more I investigated this group, the more links I discovered between it and the hoaxes. In April, I went to St. Petersburg to learn more about the agency and its brand of information warfare, which it has aggressively deployed against political opponents at home, Russia’s perceived enemies abroad and, more recently, me.

Savchuk told me she shared an office with about a half-dozen teammates. It was smaller than most, because she worked in the elite Special Projects department. While other workers churned out blandly pro-Kremlin comments, her department created appealing online characters who were supposed to stand out from the horde. Savchuk posed as three of these creations, running a blog for each one on LiveJournal. One alter ego was a fortuneteller named Cantadora. The spirit world offered Cantadora insight into relationships, weight loss, feng shui — and, occasionally, geopolitics. Energies she discerned in the universe invariably showed that its arc bent toward Russia. She foretold glory for Vladimir Putin, defeat for Barack Obama and Petro Poroshenko. The point was to weave propaganda seamlessly into what appeared to be the nonpolitical musings of an everyday person.

In fact, she was a troll. The word “troll” was popularized in the early 1990s to denounce the people who derailed conversation on Usenet discussion lists with interminable flame wars, or spammed chat rooms with streams of disgusting photos, choking users with a cloud of filth. As the Internet has grown, the problem posed by trolls has grown more salient even as their tactics have remained remarkably constant. Today an ISIS supporter might adopt a pseudonym to harass a critical journalist on Twitter, or a right-wing agitator in the United States might smear demonstrations against police brutality by posing as a thieving, violent protester. Any major conflict is accompanied by a raging online battle between trolls on both sides.

3. Made to Measure

HUMAN hands are extremely good at making clothes. While many manufacturing processes have been automated, stitching together garments remains a job for millions of people around the world. As with most labour-intensive tasks, much of the work has migrated to low-wage countries, especially in Asia. Factory conditions can be gruelling. As nations develop and wages rise, the trade moves on to the next cheapest location: from China, to Bangladesh and, now that it is opening up, Myanmar. Could that migration be about to end with the development of a robotic sewing machine?

There have been many attempts to automate sewing. Some processes can now be carried out autonomously: the cutting of fabric, for instance, and sometimes sewing buttons or pockets. But it is devilishly difficult to make a machine in which fabric goes in one end and finished garments, such as jeans and T-shirts, come out the other. The particularly tricky bit is stitching two pieces of material together. This involves aligning the material correctly to the sewing head, feeding it through and constantly adjusting the fabric to prevent it slipping and buckling, while all the time keeping the stitches neat and the thread at the right tension. Nimble fingers invariably prove better at this than cogs, wheels and servo motors.

“The distortion of the fabric is no longer an issue. That’s what prevented automatic sewing in the past,” says Steve Dickerson, the founder of SoftWear Automation, a textile-equipment manufacturer based in Atlanta, where Dr Dickerson was a professor at the Georgia Institute of Technology.

The real test of how successful robots will be at making clothing and shoes will depend on how efficient and reliable they will be, and how fully they can automate the process. If time-to-market and customisation are priorities, then the robots might win—even if some manual intervention in production is required. But for mass-produced lines, where every cent matters, any human involvement could keep manufacturing offshore. The lesson from industrial automation in other sorts of factories, though, shows that robots keep getting better and cheaper. It may be a while coming, but the writing seems to be on the wall for sweatshops.

4. Amaris Tyynismaa: The Human Body Is A Miracle, The Human Body Is A Curse

Some athletes with TS attribute near-magical powers to their condition. Tim Howard, the goalkeeper of last year’s U.S. World Cup soccer team, says that TS has given him vision and reflexes that other players simply don’t have. Famed physician Oliver Sacks once wrote about a ping-pong player whose abnormal quickness and ability to knock back unreturnable shots, he believed, had to be connected to TS. One reason is that people with Tourette’s also tend to have Obsessive Compulsive Disorder (Amaris included). They need to repeat behaviors—whether it’s preventing balls from going into the net or running improbably long distances—until they do it just right. “I’m not saying it’s a good thing to have,” Sacks told a reporter last year, “but if one has Tourette’s, there are advantages.” New research out of the University of Nottingham shows that the brains of TS patients are physically different from everyone else’s, transformed by years of operating under much greater than normal resistance and better at controlling the body.

Neurologists at the Tourette Syndrome Association aren’t quite ready to embrace a connection between TS and superior athleticism. They are more comfortable saying that people with TS often see their symptoms subside when they’re playing sports or otherwise engaged in something that focuses their attention away from the urge to tic.

Soccer quieted the noise in Amaris’ head. After taking up the game, she began to tic less off the field. She did better in school. She talked more. Actually, she talked a lot, like she does now. In her last game in England, she scored three goals and the other kids lifted her up on their shoulders and carried her around. She would have had a major problem with that just months before—too many germs—but she loved it. And then her family moved to Alabama.

Her tics intensified with the stress and anxiety of being relocated to a new base, a new house, a school with no friends. More than at any other time in her life, her tics wore her out. But England had taught her something. She decided to join two different soccer teams and a swim team.

Soon enough, Mike and Kristen began to hear tales of athletic feats that seemed impossible. Specifically, they were told that their sixth-grader had run a mile at school in well under six minutes.

“Miracles sometimes occur, but one has to work terribly hard for them.” – Chaim Weizmann


If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.
