Category Archives: Random

For Your Consideration: Exercise, Insecurity, and Cyberwar

I hope that you’ll read these articles if the snippets catch your eye and that you’ll learn as much as I did. Please email me questions or feedback, or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.

1. Is Cyberwar Turning Out to Be Very Different From What We Thought?

The potential weapons are here, the bad guys are here, and the vulnerabilities are certainly here.

Furthermore, the weapons of war might change. This happens fairly regularly. New tools and concepts are invented and then come online, from gunpowder and cavalry to blitzkrieg, aircraft carriers and drones. As George Orwell noted, “the history of civilization is largely the history of weapons.” Cyber may follow the path of so many other weapons platforms, such as the scout, the airplane and the drone—what started out as a reconnaissance platform eventually was weaponized. The question is, when and why will cyber truly be weaponized?

The combatants would certainly do that. This might be no more complicated than flipping a switch, or toggling a function between “monitor” and “destroy.” In his speech, Panetta said “we know that foreign cyber actors are probing America’s critical infrastructure networks. They are targeting the computer control systems that operate chemical, electricity and water plants and those that guide transportation throughout this country. … We also know that they are seeking to create advanced tools to attack these systems and cause panic and destruction and even the loss of life.”

Yet even now, we’re prone to hyperbole—every ATM hack is cyberterrorism, if you believe some media outlets and politicians. The infrastructure is rife with technical vulnerabilities—headlines remind us of this daily. But perhaps the greatest vulnerability we face, in preventing dark futures of cyberwars, is the fragility and scarcity of precious resources. Wars often originate from avarice over something precious—land, mineral resources, beliefs in God, Helen of Troy, etc.

But another resource precious to cyberspace, and one that could launch a conflict, is trust—a resource that underpins democratic societies, critical infrastructure and peace among nations every bit as much as mineral resources or land.

2. How Your Insecurity is Bought and Sold

Through Freud, Bernays understood something nobody else in business ever understood before him: that if you can tap into people’s insecurities — if you can needle at their deepest feelings of inadequacy — then they will buy just about any damn thing you tell them to.

This form of marketing became the blueprint of all future advertising. Trucks are marketed to men as ways to assert strength and reliability. Makeup is marketed to women as a way to be more loved and garner more attention. Beer is marketed as a way to have fun and be the center of attention at the party.

The only real long-term solution is for people to develop enough self-awareness to understand when mass media is prodding at their weaknesses and vulnerabilities and to make conscious decisions in the face of those fears. The success of our free markets has burdened us with the responsibility of exercising our freedom to choose. And that responsibility is far heavier than we often realize.

3. How Exercise Shapes You, Far Beyond the Gym

When I first started training for marathons a little over ten years ago, my coach told me something I’ve never forgotten: that I would need to learn how to be comfortable with being uncomfortable. I didn’t know it at the time, but that skill, cultivated through running, would help me as much, if not more, off the road as it would on it.

It’s not just me, and it’s not just running. Ask anyone whose day regularly includes a hard bike ride, sprints in the pool, a complex problem on the climbing wall, or a progressive powerlifting circuit, and they’ll likely tell you the same: A difficult conversation just doesn’t seem so difficult anymore. A tight deadline not so intimidating. Relationship problems not so problematic.

Maybe it’s that if you’re regularly working out, you’re simply too tired to care. But that’s probably not the case. Research shows that, if anything, physical activity boosts short-term brain function and heightens awareness. And even on days they don’t train — which rules out fatigue as a factor — those who habitually push their bodies tend to confront daily stressors with a stoic demeanor. While the traditional benefits of vigorous exercise — like prevention and treatment of diabetes, heart disease, stroke, hypertension, and osteoporosis — are well known and often reported, the most powerful benefit might be the lesson that my coach imparted to me: In a world where comfort is king, arduous physical activity provides a rare opportunity to practice suffering.

A Quote I Love:


“If a child is to keep alive his inborn sense of wonder … he needs the companionship of at least one adult who can share it, rediscovering with him the joy, excitement, and mystery of the world we live in.”

—Rachel Carson

Tip of the week:

One trick you may or may not have picked up about Gmail is that you can add periods anywhere in the local part of your address (the part before the @) and it makes no difference whatsoever: john.smith@gmail.com works just the same as johnsmith@gmail.com. What’s more, you can add a plus sign and any word before the @ sign (e.g. johnsmith+hello@gmail.com) and messages will still reach you. If these tweaks make no difference, then why use them? One major reason: filters.
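If you're curious how all those variants collapse into one inbox, here's a minimal sketch of the normalization Gmail applies to the local part of an address. The function name is my own, and treating every domain this way is an assumption (other providers handle dots and plus tags differently):

```python
def canonicalize_gmail(address: str) -> str:
    """Reduce a Gmail address to its canonical delivery form:
    dots in the local part are ignored, and anything after a
    '+' is treated as a tag and dropped."""
    local, _, domain = address.partition("@")
    local = local.split("+", 1)[0]   # drop the +tag, if any
    local = local.replace(".", "")   # dots don't matter
    return f"{local}@{domain.lower()}"

# All of these deliver to the same inbox:
print(canonicalize_gmail("john.smith@gmail.com"))       # johnsmith@gmail.com
print(canonicalize_gmail("johnsmith+hello@gmail.com"))  # johnsmith@gmail.com
```

The practical payoff is that you can hand out a tagged address like johnsmith+newsletters@gmail.com when signing up for something, then create a Gmail filter matching that exact address to label, archive, or delete whatever arrives at it.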

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing


Filed under Random

“Hey Google”

A few fun things have happened in the past week.

  1. I started reading “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” by Thomas L. Friedman
  2. My dad got me a Google Home for Christmas
  3. We finished our annual calendar trip around the sun (it’s now 2017)

After getting the Google Home set up, I played with what I could ask it (I became giddy finding the melting points of different elements and how many species of different types of animals there were) and with what it could do with other connected objects around the house (integration with multiple Chromecast Audio speaker groups is magic). Then I introduced my kids to it.


This was both immediately exciting and showed me that I was going to need to put some rules around it.


The first thing I showed them was that we could ask it what sounds different animals make. My son went through all his top animals: monkey, pig, dinosaur, cat, dog, cheetah, lion, octopus (this last one Google couldn’t help with). Then we started asking how far away certain things were: the moon, Iowa, China, etc.


I noticed a few things as my son was talking to it. First, I had to explain that he had to be very clear with his enunciation; if he didn’t speak clearly, Google wouldn’t be able to answer. This caused him to ask me how things were pronounced if he wasn’t sure. (That said, it did amazingly well with even my two-year-old’s limited pronunciations.) Second, he had to think about his question before asking and not figure it out while speaking. A long enough delay, or rambling, after saying “hey Google” led to no answer. Third, repeated requests for “what sound does a turkey make” could get old and shouldn’t be allowed during dinner.


This got me thinking about how important something like this could be for developing the skill of asking good questions in children too young to read or write. Learning to ask good questions is the basis for discovery and needs to be taught and encouraged as early as possible.


As humans, we are able to parse the desired result of a child’s question even with muddled words and intent. As a parent you learn to distinguish your child’s specific word choices, pronunciations, etc. Each child has their own language as they learn language. A search engine, even with excellent natural-language recognition, doesn’t yet have the ability to intuit the goal of a question. As such, questions must be well formed and somewhat within the range of the reasonable. For example, I had to explain to my son why Google probably didn’t have a sound on record for an octopus.


For me, watching my son converse with the Google Home Assistant was akin to watching him use a search engine for the first time. He would try different ways of asking a question if he ran into no answer or “I don’t know how to help with that yet.” He saw me ask it to play a genre of music and quickly learned he could ask it to play songs he liked. He also started telling it stories and telling it he loved it. The youngest kids today will not remember a life where they couldn’t talk to their computers, just like the generation before them won’t remember life without the Internet, or TV for the one before them, or radio for the one before them.


As a parent, I think it is an excellent resource to have around as a teaching aid and conversation starter. We got out his Picturepedia when he ran out of animals he could think of off the top of his head and started asking for ones we found pictures of: ostrich, gorilla, lemur, llama, etc. Llama is still annoyingly popular. It also started conversations about, among other things, where Google lived, what Google Home was, what the Internet is, how things are spelled, and why all lemurs sound the same (they don’t, but Google doesn’t have sound files for all of them yet).


Also, I came across this quote in “Thank You for Being Late” after we had started playing with the Google Home.


“In the twenty-first century, knowing all the answers won’t distinguish someone’s intelligence — rather, the ability to ask all the right questions will be the mark of a true genius.” – John E. Kelly III, SVP Cognitive Systems and Research at IBM


Ask good questions.


January 4, 2017 · 12:24 pm

For Your Consideration: Cotton Robots, Datamining for Literacy, and Childhood Memories

1. Automation and the Cotton Gin

When the cotton gin was invented, many people thought that it would reduce our new nation’s dependence on slavery by removing the painstaking work of separating the usable cotton from seeds, hulls, stems, etc.

But ironically, it resulted in the growth of slavery.

The gin could process cotton so efficiently that more cotton goods could be produced, and it turned out that there was massive latent demand for cotton goods. So while the robots did indeed reduce the reliance on slaves to do the finishing work, they also increased demand for cotton, which resulted in many more cotton fields, and many more slaves to tend them.

I don’t know enough history to know whether this was a core issue that led to our Civil War or just a contributing factor; probably somewhere in between. But it took us more than 100 years to really process all the implications of just this one technological advance (and really, you could argue that we haven’t fully come to terms with them even today).

So you see where I’m going with this.

Fast forward to our own era, and we’re working our way through software automation instead of cotton processing automation. And it seems obvious to me that as we’re making systems and processes easier and easier to automate, we’re also generating massive new previously latent demand for software driven systems.

I’m not arguing at all that this will result in anything like the growth of slavery in the first half of the 19th century — more that we’re in a time of profound change, and that worries over whether robots will take all our jobs will, I think, ultimately prove misplaced. If you look not just at the cotton gin but at most advances in automation, what you’ll find is that the demand for labor nearly always increases.

2. Mobile Phone Data Reveals Literacy Rates in Developing Countries

One of the millennium development goals of the United Nations is to eradicate extreme poverty by 2030. That’s a complex task, since poverty has many contributing factors. But one of the more significant is the 750 million people around the world who are unable to read and write, two-thirds of whom are women.

There are plenty of organizations that can help, provided they know where to place their resources. So identifying areas where literacy rates are low is an important challenge…

The usual method is to carry out household surveys. But this is time-consuming and expensive work, and difficult to repeat on a regular basis. And in any case, data from the developing world is often out of date before it can be used effectively. So a faster, cheaper way of mapping literacy rates would be hugely welcome.

Pål Sundsøy at Telenor Group Research in Fornebu, Norway, says he’s worked out how to determine literacy rates using mobile phone call records. His method is straightforward number crunching. He starts with a standard household survey of 76,000 mobile phone users living in an unidentified developing country in Asia. The survey was carried out for a mobile phone operator by a professional agency and logs each person’s mobile phone number and whether or not they can read.

Sundsøy then matches this data set with call data records from the mobile phone company. This provides data such as the numbers each person has called or texted, the length of these calls, air time purchases, cell tower locations, and so on.

From this data, Sundsøy can work out where all the individuals were when they made their calls or texts, who they were calling or texting, the number of texts received, at what time of day, and so on. This allows him to construct a social network for each user, working out who they called, how often, and so on.

Finally, he used 75 percent of the data to search for patterns associated with users who are illiterate, using a variety of number crunching and machine learning techniques. He used the remaining 25 percent to test whether it is possible to use these patterns to identify illiterate people and areas where there is a higher proportion of illiterate people.
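The article doesn't publish Sundsøy's code, but the 75/25 protocol it describes can be sketched in miniature. Everything below is illustrative: the two features, the synthetic data, and the toy nearest-centroid classifier all stand in for the real call-data-record features and machine learning techniques used in the study.

```python
# Sketch of the 75/25 train/test protocol: find patterns in 75% of
# (hypothetical) call-record features, then check on the held-out 25%
# whether those patterns identify literate vs. illiterate users.
import random

random.seed(0)

def make_user():
    """Generate one synthetic user: (features, is_literate)."""
    literate = random.random() < 0.7
    # Hypothetical features: texts received per day, distinct contacts.
    texts = random.gauss(20 if literate else 8, 4)
    contacts = random.gauss(30 if literate else 15, 6)
    return (texts, contacts), literate

data = [make_user() for _ in range(1000)]
split = int(0.75 * len(data))             # 75% to search for patterns...
train, test = data[:split], data[split:]  # ...25% held out for testing

def centroid(rows):
    """Mean feature vector of a group of users."""
    features = [f for f, _ in rows]
    return tuple(sum(col) / len(col) for col in zip(*features))

lit_center = centroid([(f, y) for f, y in train if y])
ill_center = centroid([(f, y) for f, y in train if not y])

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_literate(features):
    """Classify by whichever training centroid is nearer."""
    return dist2(features, lit_center) < dist2(features, ill_center)

accuracy = sum(predict_literate(f) == y for f, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The real study would use far richer features (cell-tower locations, air-time purchases, social-network structure) and stronger models, but the shape of the workflow is the same: fit on the 75%, and trust only the numbers measured on the untouched 25%.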

3. Why Childhood Memories Disappear

“People used to think that the reason that we didn’t have early memories was because children didn’t have a memory system or they were unable to remember things, but it turns out that’s not the case,” Peterson said. “Children have a very good memory system. But whether or not something hangs around long-term depends on several other factors.” Two of the most important factors, Peterson explained, are whether the memory “has emotion infused in it,” and whether the memory is coherent: Does the story our memory tells us actually hang together and make sense when we recall it later?

But then, this event- or story-based memory isn’t the only kind, although it’s the one people typically focus on when discussing “first” memories. Indeed, when I asked the developmental psychologist Steven Reznick about why childhood amnesia exists, he disputed the very use of that term: “I would say right now that is a rather archaic statement.” A professor at the University of North Carolina-Chapel Hill, Reznick explained that shortly after birth, infants can start forming impressions of faces and react when they see those faces again; this is recognition memory. The ability to understand words and learn language relies on working memory, which kicks in at around six months old. More sophisticated forms of memory develop in the child’s second year, as semantic memory allows children to retain understanding of concepts and general knowledge about the world.

“When people were accusing infants of having amnesia, what they were talking about is what we refer to as episodic memory,” Reznick explained. Our ability to remember events that happened to us relies on more complicated mental infrastructure than other kinds of memory. Context is all-important. We need to understand the concepts that give meaning to an event: For the memory of my brother’s birth, I have to understand the meanings of concepts like “hospital,” “brother,” “cot,” and even Thomas the Tank Engine. More than that, for the memory to remain accessible, my younger self had to remember those concepts in the same language-based way that my adult self remembers information. I formed earlier memories using more rudimentary, pre-verbal means, and that made those memories unreachable as the acquisition of language reshaped how my mind works, as it does for everyone.

“Now comes the second machine age. Computers and other digital advances are doing for mental power—the ability to use our brains to understand and shape our environments—what the steam engine and its descendants did for muscle power.” – Erik Brynjolfsson





For Your Consideration: Hijacking Minds, Chinese Robot Army, Tomorrow’s Internet, and Multiple Realities

Two pieces by Kevin Kelly this week and both are components of his new book (required reading if you plan to be around in the future) The Inevitable.

1. How Technology Hijacks People’s Minds – from a Magician and Google’s Design Ethicist

I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.

When using technology, we often focus optimistically on all the things it does for us. But I want to show you where it might do the opposite.

Where does technology exploit our minds’ weaknesses?

I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.

And this is exactly what product designers do to your mind. They play your psychological vulnerabilities (consciously and unconsciously) against you in the race to grab your attention.

I want to show you how they do it.

#1 [of 10] If You Control the Menu, You Control the Choices

Western Culture is built around ideals of individual choice and freedom. Millions of us fiercely defend our right to make “free” choices, while we ignore how those choices are manipulated upstream by menus we didn’t choose in the first place.

This is exactly what magicians do. They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. I can’t emphasize enough how deep this insight is.

When people are given a menu of choices, they rarely ask:

  • “what’s not on the menu?”
  • “why am I being given these options and not others?”
  • “do I know the menu provider’s goals?”
  • “is this menu empowering for my original need, or are the choices actually a distraction?” (e.g. an overwhelming array of toothpastes)

2. China Is Building a Robot Army of Model Workers

“The system is down,” explains Nie Juan, a woman in her early 20s who is responsible for quality control. Her team has been testing the robot for the past week. The machine is meant to place stickers on the boxes containing new routers, and it seemed to have mastered the task quite nicely. But then it suddenly stopped working. “The robot does save labor,” Nie tells me, her brow furrowed, “but it is difficult to maintain.”

The hitch reflects a much bigger technological challenge facing China’s manufacturers today. Wages in Shanghai have more than doubled in the past seven years, and the company that owns the factory, Cambridge Industries Group, faces fierce competition from increasingly high-tech operations in Germany, Japan, and the United States. To address both of these problems, CIG wants to replace two-thirds of its 3,000 workers with machines this year. Within a few more years, it wants the operation to be almost entirely automated, creating a so-called “dark factory.” The idea is that with so few people around, you could switch the lights off and leave the place to the machines.

But as the idle robot arm on CIG’s packaging line suggests, replacing humans with machines is not an easy task. Most industrial robots have to be extensively programmed, and they will perform a job properly only if everything is positioned just so. Much of the production work done in Chinese factories requires dexterity, flexibility, and common sense. If a box comes down the line at an odd angle, for instance, a worker has to adjust his or her hand before affixing the label. A few hours later, the same worker might be tasked with affixing a new label to a different kind of box. And the following day he or she might be moved to another part of the line entirely.

Despite the huge challenges, countless manufacturers in China are planning to transform their production processes using robotics and automation at an unprecedented scale. In some ways, they don’t really have a choice. Human labor in China is no longer as cheap as it once was, especially compared with labor in rival manufacturing hubs growing quickly in Asia. In Vietnam, Thailand, and Indonesia, factory wages can be less than a third of what they are in the urban centers of China. One solution, many manufacturers—and government officials—believe, is to replace human workers with machines.

3. You are not late

But, but…here is the thing. In terms of the internet, nothing has happened yet. The internet is still at the beginning of its beginning. If we could climb into a time machine and journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2044 were not invented until after 2014. People in the future will look at their holodecks, and wearable virtual reality contact lenses, and downloadable avatars, and AI interfaces, and say, oh, you didn’t really have the internet (or whatever they’ll call it) back then.

And they’d be right. Because from our perspective now, the greatest online things of the first half of this century are all before us. All these miraculous inventions are waiting for that crazy, no-one-told-me-it-was-impossible visionary to start grabbing the low-hanging fruit — the equivalent of the dot com names of 1984.

Because here is the other thing the greybeards in 2044 will tell you: Can you imagine how awesome it would have been to be an entrepreneur in 2014? It was a wide-open frontier! You could pick almost any category X and add some AI to it, put it on the cloud. Few devices had more than one or two sensors in them, unlike the hundreds now. Expectations and barriers were low. It was easy to be the first. And then they would sigh, “Oh, if only we realized how possible everything was back then!”

So, the truth: Right now, today, in [2016] is the best time to start something on the internet.

4. Hyper Vision – A survey of VR/AR/MR

One of the first things I learned from my recent tour of the synthetic-reality waterfront is that virtual reality is creating the next evolution of the Internet. Today the Internet is a network of information. It contains 60 trillion web pages, remembers 4 zettabytes of data, transmits millions of emails per second, all interconnected by sextillions of transistors. Our lives and work run on this internet of information. But what we are building with artificial reality is an internet of experiences. What you share in VR or MR gear is an experience. What you encounter when you open a magic window in your living room is an experience. What you join in a mixed-reality teleconference is an experience. To a remarkable degree, all these technologically enabled experiences will rapidly intersect and inform one another.

The recurring discovery I made in each virtual world I entered was that although every one of these environments was fake, the experiences I had in them were genuine. VR does two important things: One, it generates an intense and convincing sense of what is generally called presence. Virtual landscapes, virtual objects, and virtual characters seem to be there—a perception that is not so much a visual illusion as a gut feeling. That’s magical. But the second thing it does is more important. The technology forces you to be present—in a way flatscreens do not—so that you gain authentic experiences, as authentic as in real life. People remember VR experiences not as a memory of something they saw but as something that happened to them. 

…Not immediately, but within 15 years, the bulk of our work and play time will touch the virtual to some degree. Systems for delivering these shared virtual experiences will become the largest enterprises we have ever made. Fully immersive VR worlds already generate and consume gigabytes of data per experience. In the next 10 years the scale will increase from gigabytes per minute to terabytes per minute. The global technology industry—chip designers, consumer device makers, communication conglomerates, component manufacturers, content studios, software creators—will all struggle to handle the demands of this vast system as it blossoms. And only a few companies will dominate the VR networks because, as is so common in networks, success is self-reinforcing. The bigger the virtual society becomes, the more attractive it is. And the more attractive, the bigger yet it becomes. These artificial-reality winners will become the largest companies in history, dwarfing the largest companies today by any measure.

“My interest is in the future because I am going to spend the rest of my life there.” – Charles Kettering





For Your Consideration: Epigenetics and Identity, Vulgar Vocabulary, Learning to Learn, and the Sublimity of Mike Rowe

1. The Science of Identity and Difference

Why are identical twins alike? In the late nineteen-seventies, a team of scientists in Minnesota set out to determine how much these similarities arose from genes, rather than environments—from “nature,” rather than “nurture.” Scouring thousands of adoption records and news clips, the researchers gleaned a rare cohort of fifty-six identical twins who had been separated at birth. Reared in different families and different cities, often in vastly dissimilar circumstances, these twins shared only their genomes. Yet on tests designed to measure personality, attitudes, temperaments, and anxieties, they converged astonishingly. Social and political attitudes were powerfully correlated: liberals clustered with liberals, and orthodoxy was twinned with orthodoxy. The same went for religiosity (or its absence), even for the ability to be transported by an aesthetic experience. Two brothers, separated by geographic and economic continents, might be brought to tears by the same Chopin nocturne, as if responding to some subtle, common chord struck by their genomes.

One pair of twins both suffered crippling migraines, owned dogs that they had named Toy, married women named Linda, and had sons named James Allan (although one spelled the middle name with a single “l”). Another pair—one brought up Jewish, in Trinidad, and the other Catholic, in Nazi Germany, where he joined the Hitler Youth—wore blue shirts with epaulets and four pockets, and shared peculiar obsessive behaviors, such as flushing the toilet before using it. Both had invented fake sneezes to defuse tense moments. Two sisters—separated long before the development of language—had invented the same word to describe the way they scrunched up their noses: “squidging.” Another pair confessed that they had been haunted by nightmares of being suffocated by various metallic objects—doorknobs, fishhooks, and the like.

The Minnesota twin study raised questions about the depth and pervasiveness of qualities specified by genes: Where in the genome, exactly, might one find the locus of recurrent nightmares or of fake sneezes? Yet it provoked an equally puzzling converse question: Why are identical twins different? Because, you might answer, fate impinges differently on their bodies. One twin falls down the crumbling stairs of her Calcutta house and breaks her ankle; the other scalds her thigh on a tipped cup of coffee in a European station. Each acquires the wounds, calluses, and memories of chance and fate. But how are these changes recorded, so that they persist over the years? We know that the genome can manufacture identity; the trickier question is how it gives rise to difference…

2. Is Swearing a Sign of a Limited Vocabulary? | Scientific American

When words fail us, we curse. At least this is what the “poverty-of-vocabulary” (POV) hypothesis would have us believe. On this account, swearing is the “sign of a weak vocabulary”, a result of a lack of education, laziness or impulsiveness. In line with this idea, we tend to judge vulgarians quite harshly, rating them as lower on socio-intellectual status, less effective at their jobs and less friendly.

But this view of the crass does not square with recent research in linguistics. For example, the POV hypothesis would predict that when people struggle to come up with the right words, they are more likely to spew swears left and right. But research shows that people tend to fill the awkward gaps in their language with “ers” and “ums” not “sh*ts” and “godd*mnits.” This research has led to a competing explanation for swearing: fluency with taboo words might be a sign of general verbal fluency. Those who are exceptionally vulgar might also be exceptionally eloquent and intelligent. Indeed, taboo words serve a purpose in our lexicon that other words cannot as effectively accomplish: to deliver intense, succinct and directed emotional expression. So, those who swear frequently might just be more sophisticated in the linguistic resources they can draw from in order to make their point.

New research by cognitive scientists at Marist College and the Massachusetts College of Liberal Arts attempts to test this possibility, and further debunk the POV hypothesis, by measuring how taboo word fluency relates to general verbal fluency. The POV hypothesis suggests that there should be a negative correlation: the more you swear, the lower your verbal prowess. But the researchers hypothesized just the opposite: the more you swear the more comprehensive your vocabulary would be.

3. Learning to Learn

“The ability to learn faster than your competitors may be the only sustainable competitive advantage.”

I’m not talking about relaxed armchair or even structured classroom learning. I’m talking about resisting the bias against doing new things, scanning the horizon for growth opportunities, and pushing yourself to acquire radically different capabilities—while still performing your job. That requires a willingness to experiment and become a novice again and again: an extremely discomforting notion for most of us.

Over decades of coaching and consulting to thousands of executives in a variety of industries, however, my colleagues and I have come across people who succeed at this kind of learning. We’ve identified four attributes they have in spades: aspiration, self-awareness, curiosity, and vulnerability. They truly want to understand and master new skills; they see themselves very clearly; they constantly think of and ask good questions; and they tolerate their own mistakes as they move up the learning curve.

Of course, these things come more naturally to some people than to others. But, drawing on research in psychology and management as well as our work with clients, we have identified some fairly simple mental tools anyone can develop to boost all four attributes—even those that are often considered fixed (aspiration, curiosity, and vulnerability).

4. The Importance of Being Dirty: Lessons from Mike Rowe

If you didn’t already adore Mike Rowe, this conversation will make sure you do. He’s an amazingly interesting guy on top of everything you thought you knew. Also, The Tim Ferriss Show is hands down one of my favorite podcasts. Light in tone but deep in intellectual curiosity about an immense variety of topics.

“Just because you love something doesn’t mean you can’t suck at it.” – Mike Rowe

Stream Here: http://traffic.libsyn.com/timferriss/Tim_Ferriss_Show_-_Mike_Rowe.mp3

Mike Rowe (@mikeroweworks) is perhaps the best storyteller and pitchman I’ve ever had on the show.

You might know Mike from his eight seasons of Dirty Jobs, but that’s just a tiny piece of the story.

His performing career began in 1984 when he faked his way into the Baltimore Opera to get his union card and meet girls, both of which he accomplished during a performance of Rigoletto. His transition to television occurred in 1990 when — to settle a bet — he auditioned for the QVC Shopping Channel and was promptly hired after talking about a pencil for nearly eight minutes. There, he worked the graveyard shift for three years, until he was ultimately fired for making fun of products and belittling viewers. Now, he is a massively successful TV host, writer, narrator, producer, actor, and spokesman.

Why listen to this episode? You will learn:

  • Secrets of the perfect pitch
  • How Mike flew around the world for free (until he got caught)
  • Why to pursue opportunity instead of passion
  • How being different can help you win in business and life
  • The business of Mike Rowe
  • Favorite books, voice-over artists, and much, much more…

If you’re in a rush and just want a fantastic 5-minute story about his selling pencils for the QVC audition, click here.

“We are infected by our own misunderstanding of how our own minds work.” – Kevin Kelly

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


For Your Consideration: Laws of Life, Art of (Cyber)war, The Minecraft Generation, and Self or Selfie

Time to re-boot the newsletter. I’ve been meaning to start organizing it again for months but haven’t made the time. I’m not sure what interrupted the flow, especially when I had a lot of encouragement from friends and family. One friend in particular used to encourage me on this: to publish (or re-publish) the things that interested me, and to write even when it felt like an echo into the void. Miss you, dude…

1. Jeremy England, the Man Who May One-Up Darwin

In town to give a lecture, the Harvard grad and Rhodes scholar speaks quickly, his voice rising a few pitches in tone, his long-fingered hands making sudden jerks when he’s excited. He’s skinny, with a long face, scraggly beard and carelessly groomed mop of sandy brown hair — what you might expect from a theoretical physicist. But then there’s the street-style Adidas on his feet and the kippah atop his head. And the fact that this scientist also talks a lot about God.

The 101 version of his big idea is this: Under the right conditions, a random group of atoms will self-organize, unbidden, to more effectively use energy. Over time and with just the right amount of, say, sunlight, a cluster of atoms could come remarkably close to what we call life. In fact, here’s a thought: Some things we consider inanimate actually may already be “alive.” It all depends on how we define life, something England’s work might prompt us to reconsider. “People think of the origin of life as being a rare process,” says Vijay Pande, a Stanford chemistry professor. “Jeremy’s proposal makes life a consequence of physical laws, not something random.”

England’s idea may sound strange, even incredible, but it’s drawn the attention of an impressive posse of high-level academics. After all, while Darwinism may explain evolution and the complex world we live in today, it doesn’t account for the onset of intelligent beings. England’s insistence on probing for the step that preceded all of our current assumptions about life is what makes him stand out, says Carl Franck, a Cornell physics professor, who’s been following England’s work closely. “Every 30 years or so we experience these gigantic steps forward,” Franck says. “We’re due for one. And this might be it.”

And all from a modern Orthodox Jew with fancy sneakers.

2. The New Art Of War: How trolls, hackers and spies are rewriting the rules of conflict

While there is no international law that directly refers to the ultra-modern concept of cyber warfare, there is plenty that applies. So the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) assembled a panel of international legal experts to go through this existing law and show how it applies to cyber warfare. This formed the basis of the Tallinn Manual and the 95 so-called ‘black letter rules’ it contains (so named because that’s how they appear in the text).

Through these rules the manual attempts to define some of the basics of cyber warfare. At the most fundamental level, the rules state that an online attack on a state can, in certain circumstances, be the equivalent of an armed attack. It also lays out that such an attack is against international law, and that a state attacked in such a way has the right to hit back.

Other rules the manual spells out: don’t target civilians or launch indiscriminate attacks that could cripple civilian infrastructure. While many of these sorts of rules are well understood when it comes to standard warfare, setting them out in the context of digital warfare was groundbreaking.

While the manual argues that a cyber attack can be considered the equivalent of an armed attack if it causes physical harm to people or property, other attacks can also be considered a use of force depending on their severity or impact. For example, breaking into a military system would be more likely to be seen as a use of force than hacking into a small business. In contrast, cyber attacks that generate “mere inconvenience or irritation” would never be considered a use of force.

The manual also delves into some of the trickier questions of cyber war: would Country A be justified in launching a pre-emptive military strike against a Country B if it knew Country B planned to blow up Country A’s main oil pipeline by hacking the microcontrollers managing its pipeline pressure? (Answer: probably yes.)

The manual even considers the legality of some scenarios verging on the science-fictional.

If an army hacked into and took control of enemy drones, would those drones have to be grounded and marked with the capturer’s insignia before being allowed to carry out reconnaissance flights? (Answer: maybe.)

But what’s striking is that the Tallinn Manual sets the rules for a war that hasn’t been fought yet.

3. The Minecraft Generation

Minecraft is an incredibly complex game, but it’s also — at first — inscrutable. When you begin, no pop-ups explain what to do; there isn’t even a “help” section. You just have to figure things out yourself. (The exceptions are the Xbox and PlayStation versions, which in December added tutorials.) This unwelcoming air contrasts with most large games these days, which tend to come with elaborate training sessions on how to move, how to aim, how to shoot. In Minecraft, nothing explains that skeletons will kill you, or that if you dig deep enough you might hit lava (which will also kill you), or even that you can craft a pickax.

This “you’re on your own” ethos resulted from early financial limitations: Working alone, Persson had no budget to design tutorials. That omission turned out to be an inadvertent stroke of genius, however, because it engendered a significant feature of Minecraft culture, which is that new players have to learn how to play. Minecraft, as the novelist and technology writer Robin Sloan has observed, is “a game about secret knowledge.” So like many modern mysteries, it has inspired extensive information-sharing. Players excitedly pass along tips or strategies at school. They post their discoveries in forums and detail them on wikis. (The biggest one, hosted at the site Gamepedia, has nearly 5,000 articles; its entry on Minecraft’s “horses,” for instance, is about 3,600 words long.) Around 2011, publishers began issuing handbooks and strategy guides for the game, which became runaway best sellers; one book on redstone has outsold literary hits like “The Goldfinch,” by Donna Tartt.

“In Minecraft, knowledge becomes social currency,” says Michael Dezuanni, an associate professor of digital media at Queensland University of Technology in Australia. Dezuanni has studied how middle-school girls play the game, watching as they engaged in nuanced, Talmudic breakdowns of a particular creation. This is, he realized, a significant part of the game’s draw: It offers many opportunities to display expertise, when you uncover a new technique or strategy and share it with peers.

The single biggest tool for learning Minecraft lore is YouTube. The site now has more than 70 million Minecraft videos, many of which are explicitly tutorial. To make a video, players use “screencasting” software (some of which is free, some not) that records what’s happening on-screen while they play; they usually narrate their activity in voice-over. The problems and challenges you face in Minecraft are, as they tend to be in construction or architecture, visual and three-dimensional. This means, as many players told me, that video demonstrations have a particularly powerful explanatory force: It’s easiest to learn something by seeing someone else do it. In this sense, the game points to the increasing role of video as a rhetorical tool. (“Minecraft” is the second-most-searched-for term on YouTube, after “music.”)

4. Saving the Self in the Age of the Selfie

Consider Erica, a full-time college student. The first thing she does when she wakes up in the morning is reach for her smartphone. She checks texts that came in while she slept. Then she scans Facebook, Snapchat, Tumblr, Instagram, and Twitter to see “what everybody else is doing.” At breakfast, she opens her laptop and goes to Spotify and her various email accounts. Once she gets to campus, Erica confronts more screen time: PowerPoints and online assignments, academic content to which she dutifully attends (she’s an A student). Throughout the day, she checks in with social media roughly every 10 minutes, even during class. “It’s a little overwhelming,” she says, “but you don’t want to feel left out.”

We’ve been worried about this type of situation for thousands of years. Socrates, for one, fretted that the written word would compromise our ability to retell stories. Such a radical shift in communication, he argued in Phaedrus, would favor cheap symbols over actual memories, ease of conveyance over inner depth. Philosophers have pondered the effect of information technology on human identity ever since. But perhaps the most trenchant modern expression of Socrates’ nascent technophobia comes from the 20th-century German philosopher Martin Heidegger, whose essays on the subject—notably “The Question Concerning Technology” (1954)—established a framework for scrutinizing our present situation.

Heidegger’s take on technology was dire. He believed that it constricted our view of the world by reducing all experience to the raw material of its operation. To prevent “an oblivion of being,” Heidegger urged us to seek solace in nontechnological space. He never offered prescriptive examples of exactly how to do this, but as the scholar Howard Eiland explains, it required seeing the commonplace as alien, or finding “an essential strangeness in … familiarity.” Easier said than done. Hindering the effort in Heidegger’s time was the fact that technology was already, as the contemporary political philosopher Mark Blitz puts it, “an event to which we belong.” In this view, one that certainly befits today’s digital communication, technology infuses real-world experience the way water mixes with water, making it nearly impossible to separate the human and technological perspectives, to find weirdness in the familiar. Such a blending means that, according to Blitz, technology’s domination “makes us forget our understanding of ourselves.”

The only hope for preserving a non-technological haven—and it was and remains a distant hope—was to cultivate what Heidegger called “nearness.” Nearness is a mental island on which we can stand and affirm that the phenomena we experience both embody and transcend technology. Consider it a privileged ontological stance, a way of knowing the world through a special kind of wisdom or point of view. Heidegger’s implicit hope was that the human ability to draw a distinction between technological and nontechnological perception would release us from “the stultified compulsion to push on blindly with technology.”


If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.
