“Hey Google”

A few fun things have happened in the past week.

  1. I started reading “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” by Thomas L. Friedman
  2. My dad got me a Google Home for Christmas
  3. We finished our annual calendar trip around the sun (it’s now 2017)

After getting the Google Home set up, I played with what I could ask it (I became giddy finding the melting points of different elements and how many species of different types of animals there were) and what it could do with other connected objects around the house (integration with multiple Chromecast Audio speaker groups is magic). Then I introduced my kids to it.


This was immediately exciting, and it showed me that I was going to need to put some rules around it.


The first thing I showed them was asking what sounds different animals make. My son went through all his top animals: monkey, pig, dinosaur, cat, dog, cheetah, lion, octopus (this last one Google couldn’t help with). Then we started asking how far away certain things were: the moon, Iowa, China, etc.


I noticed a few things as my son was talking to it. First, I had to explain that he had to be very clear with his enunciation; if he didn’t speak clearly, Google wouldn’t be able to answer. This caused him to ask me how things were pronounced when he wasn’t sure. (Even so, it did amazingly well with my two-year-old’s limited pronunciation.) Second, he had to think about his question before asking, not figure it out while speaking. A long enough delay, or rambling, after saying “Hey Google” led to no answer. Third, repeated requests for “what sound does a turkey make” could get old and shouldn’t be allowed during dinner.


This got me thinking about how important something like this could be for developing the skill of asking good questions in children too young to read or write. Learning to ask good questions is the basis for discovery, and it needs to be taught and encouraged as early as possible.


As humans, we are able to parse the desired result from a child’s question even with muddled words and intent. As a parent you learn to distinguish your child’s specific word choice, pronunciation, etc. Each child has their own language as they learn language. A search engine, even with excellent natural language recognition, doesn’t yet have the ability to intuit the goal of a question. As such, questions must be well formed and somewhat within the range of the reasonable. For example, I had to explain to my son why Google probably didn’t have a sound on record for an octopus.


For me, watching my son converse with the Google Home Assistant was akin to watching him use a search engine for the first time. He would try different ways of asking a question if he ran into no answer or “I don’t know how to help with that yet.” He saw me ask it to play a genre of music and quickly learned he could ask it to play songs he liked. He also started telling it stories and telling it he loved it. The youngest kids today will not remember a life where they couldn’t talk to their computers, just as the generation before them won’t remember life without the Internet, or TV for the one before that, or radio before that.


As a parent, I think it is an excellent resource to have around as a teaching aid and conversation starter. We got out his Picturepedia when he ran out of animals he could think of off the top of his head and started asking for ones we found pictures of: ostrich, gorilla, lemur, llama, etc. Llama is still annoyingly popular. It also started conversations about, among other things, where Google lived, what Google Home was, what the Internet is, how things are spelled, and why all lemurs sound the same… (they don’t, but Google doesn’t have sound files for all of them yet).


Also, I came across this quote in “Thank You for Being Late” after we had started playing with it.


“In the twenty-first century, knowing all the answers won’t distinguish someone’s intelligence — rather, the ability to ask all the right questions will be the mark of a true genius.” – John E. Kelly III, SVP Cognitive Systems and Research at IBM


Ask good questions.


January 4, 2017 · 12:24 pm

For Your Consideration: Cotton Robots, Datamining for Literacy, and Childhood Memories

1. Automation and the Cotton Gin

When the cotton gin was invented, many people thought that it would reduce our new nation’s dependence on slavery by removing the painstaking work of separating the usable cotton from seeds, hulls, stems, etc.

But ironically, it resulted in the growth of slavery.

The gin could process cotton so efficiently that more cotton goods could be produced, and it turned out that there was massive latent demand for cotton goods. So while the robots did indeed reduce the reliance on slaves to do the finishing work, they also increased demand for cotton, which resulted in many more cotton fields, and many more slaves to tend them.

I don’t know enough history to know whether this was a core issue that led to our Civil War or just a contributing factor; probably somewhere in between. But it took us more than 100 years to really process all the implications of just this one technological advance (and you could argue that we haven’t fully come to terms with them even today).

So you see where I’m going with this.

Fast forward to our own era, and we’re working our way through software automation instead of cotton-processing automation. And it seems obvious to me that as we make systems and processes easier and easier to automate, we’re also generating massive new, previously latent demand for software-driven systems.

I’m not arguing at all that this will result in anything like the growth of slavery in the first half of the 19th century — more that we’re in a time of profound change, and that worries over whether robots will take all our jobs will, I think, ultimately prove misplaced. If you look not just at the cotton gin but at most advances in automation, what you’ll find is that the demand for labor nearly always increases.

2. Mobile Phone Data Reveals Literacy Rates in Developing Countries

One of the millennium development goals of the United Nations is to eradicate extreme poverty by 2030. That’s a complex task, since poverty has many contributing factors. But one of the more significant is the 750 million people around the world who are unable to read and write, two-thirds of whom are women.

There are plenty of organizations that can help, provided they know where to place their resources. So identifying areas where literacy rates are low is an important challenge…

The usual method is to carry out household surveys. But this is time-consuming and expensive work, and difficult to repeat on a regular basis. And in any case, data from the developing world is often out of date before it can be used effectively. So a faster, cheaper way of mapping literacy rates would be hugely welcome.

Pål Sundsøy at Telenor Group Research in Fornebu, Norway, says he’s worked out how to determine literacy rates using mobile phone call records. His method is straightforward number crunching. He starts with a standard household survey of 76,000 mobile phone users living in an unidentified developing country in Asia. The survey was carried out for a mobile phone operator by a professional agency and logs each person’s mobile phone number and whether or not they can read.

Sundsøy then matches this data set with call data records from the mobile phone company. This provides data such as the numbers each person has called or texted, the length of these calls, air time purchases, cell tower locations, and so on.

From this data, Sundsøy can work out where all the individuals were when they made their calls or texts, who they were calling or texting, the number of texts received, at what time of day, and so on. This allows him to construct a social network for each user, working out who they called, how often, and so on.

Finally, he used 75 percent of the data to search for patterns associated with users who are illiterate, using a variety of number crunching and machine learning techniques. He used the remaining 25 percent to test whether it is possible to use these patterns to identify illiterate people and areas where there is a higher proportion of illiterate people.
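The pipeline the excerpt describes (features derived from call records, a 75/25 train/test split, then a search for separating patterns) can be sketched in miniature. Everything below is invented for illustration: the single synthetic feature, its distributions, and the trivial threshold "classifier" are assumptions of mine, not Sundsøy's actual features or model.

```python
import random

random.seed(0)

# Hypothetical stand-in for one call-record feature: outgoing texts per day,
# with literate users texting more on average. Purely synthetic numbers.
def make_user(literate):
    rate = random.gauss(10, 3) if literate else random.gauss(4, 2)
    return (max(rate, 0.0), 1 if literate else 0)

users = [make_user(random.random() < 0.7) for _ in range(2000)]

# 75/25 train/test split, as in the study.
cut = int(len(users) * 0.75)
train, test = users[:cut], users[cut:]

def accuracy(rows, threshold):
    # Fraction of users whose label matches the "texts above threshold" rule.
    return sum((rate > threshold) == bool(label) for rate, label in rows) / len(rows)

# "Pattern search": pick the threshold that best separates the training set,
# then check that it generalizes to the held-out 25 percent.
best = max((t / 2 for t in range(0, 40)), key=lambda t: accuracy(train, t))
print(f"learned threshold: {best}, test accuracy: {accuracy(test, best):.2f}")
```

The real study used many features and proper machine learning models, but the shape of the exercise (fit on 75 percent, validate on 25 percent) is the same.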

3. Why Childhood Memories Disappear

“People used to think that the reason that we didn’t have early memories was because children didn’t have a memory system or they were unable to remember things, but it turns out that’s not the case,” Peterson said. “Children have a very good memory system. But whether or not something hangs around long-term depends on several other factors.” Two of the most important factors, Peterson explained, are whether the memory “has emotion infused in it,” and whether the memory is coherent: Does the story our memory tells us actually hang together and make sense when we recall it later?

But then, this event- or story-based memory isn’t the only kind, although it’s the one people typically focus on when discussing “first” memories. Indeed, when I asked the developmental psychologist Steven Reznick about why childhood amnesia exists, he disputed the very use of that term: “I would say right now that is a rather archaic statement.” A professor at the University of North Carolina-Chapel Hill, Reznick explained that shortly after birth, infants can start forming impressions of faces and react when they see those faces again; this is recognition memory. The ability to understand words and learn language relies on working memory, which kicks in at around six months old. More sophisticated forms of memory develop in the child’s second year, as semantic memory allows children to retain understanding of concepts and general knowledge about the world.

“When people were accusing infants of having amnesia, what they were talking about is what we refer to as episodic memory,” Reznick explained. Our ability to remember events that happened to us relies on more complicated mental infrastructure than other kinds of memory. Context is all-important. We need to understand the concepts that give meaning to an event: For the memory of my brother’s birth, I have to understand the meanings of concepts like “hospital,” “brother,” “cot,” and even Thomas the Tank Engine. More than that, for the memory to remain accessible, my younger self had to remember those concepts in the same language-based way that my adult self remembers information. I formed earlier memories using more rudimentary, pre-verbal means, and that made those memories unreachable as the acquisition of language reshaped how my mind works, as it does for everyone.

“Now comes the second machine age. Computers and other digital advances are doing for mental power—the ability to use our brains to understand and shape our environments—what the steam engine and its descendants did for muscle power.” – Erik Brynjolfsson

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


Filed under Newsletter, Random

For Your Consideration: American Political Insanity, Google Machine Learning, Gaming Education, and Instagram Hell

1. How Google Is Remaking Itself As A “Machine Learning First” Company – Backchannel

Google’s bear-hug-level embrace of machine learning does not simply represent a shift in programming technique. It’s a serious commitment to techniques that will bestow hitherto unattainable powers to computers. The leading edge of this is “deep learning” algorithms built around sophisticated neural nets inspired by brain architecture. The Google Brain is a deep learning effort, and DeepMind, the AI company Google bought for a reported $500 million in January 2014, also concentrates on that end of the spectrum. It was DeepMind that created the AlphaGo system that beat a champion of Go, shattering expectations of intelligent machine performance and sending ripples of concern among those fearful of smart machines and killer robots.

While Giannandrea dismisses the “AI-is-going-to-kill us” camp as ill-informed Cassandras, he does contend that machine learning systems are going to be transformative, in everything from medical diagnoses to driving our cars. While machine learning won’t replace humans, it will change humanity.

The example Giannandrea cites to demonstrate machine learning’s power is Google Photos, a product whose definitive feature is an uncanny — maybe even disturbing — ability to locate an image of something specified by the user. Show me pictures of border collies. “When people see that for the first time they think something different is happening because the computer is not just computing a preference for you or suggesting a video for you to watch,” says Giannandrea. “It’s actually understanding what’s in the picture.” He explains that through the learning process, the computer “knows” what a border collie looks like, and it will find pictures of it when it’s a puppy, when it’s old, when it’s long-haired, and when it’s been shorn. A person could do that, of course. But no human could sort through a million examples and simultaneously identify ten thousand dog breeds. A machine learning system can: if it learns one breed, it can use the same technique to identify the other 9,999. “That’s really what’s new here,” says Giannandrea. “For those narrow domains, you’re seeing what some people call superhuman performance in these learned systems.”
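Giannandrea’s point, that one learning technique scales from a handful of classes to thousands, can be illustrated with a toy classifier. Everything here is invented for illustration (synthetic 2-D features standing in for image features, and a simple nearest-centroid rule); it is not how Google Photos actually works.

```python
import random

random.seed(1)

# Each "breed" is a cluster of 2-D feature vectors. The same training routine
# works unchanged whether there are 3 classes or 10,000.
def make_dataset(num_classes, per_class=30):
    data = []
    for label in range(num_classes):
        cx, cy = label * 10.0, (label % 5) * 10.0  # well-separated cluster centers
        for _ in range(per_class):
            data.append((cx + random.gauss(0, 1), cy + random.gauss(0, 1), label))
    return data

def train(data):
    # Learn one centroid per class; adding a class adds a centroid, nothing else.
    sums = {}
    for x, y, label in data:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {label: (sx / n, sy / n) for label, (sx, sy, n) in sums.items()}

def predict(model, x, y):
    # Classify by nearest centroid (squared Euclidean distance).
    return min(model, key=lambda c: (x - model[c][0]) ** 2 + (y - model[c][1]) ** 2)

data = make_dataset(num_classes=10)
model = train(data)
accuracy = sum(predict(model, x, y) == label for x, y, label in data) / len(data)
print(f"training accuracy over 10 classes: {accuracy:.2f}")
```

Changing `num_classes=10` to any larger number reuses exactly the same `train` and `predict` code, which is the narrow sense in which one technique covers all the breeds at once.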

2. How American Politics Went Insane – The Atlantic

Our intricate, informal system of political intermediation, which took many decades to build, did not commit suicide or die of old age; we reformed it to death. For decades, well-meaning political reformers have attacked intermediaries as corrupt, undemocratic, unnecessary, or (usually) all of the above. Americans have been busy demonizing and disempowering political professionals and parties, which is like spending decades abusing and attacking your own immune system. Eventually, you will get sick.

The disorder has other causes, too: developments such as ideological polarization, the rise of social media, and the radicalization of the Republican base. But chaos syndrome compounds the effects of those developments, by impeding the task of organizing to counteract them. Insurgencies in presidential races and on Capitol Hill are nothing new, and they are not necessarily bad, as long as the governing process can accommodate them. Years before the Senate had to cope with Ted Cruz, it had to cope with Jesse Helms. The difference is that Cruz shut down the government, which Helms could not have done had he even imagined trying.

Like many disorders, chaos syndrome is self-reinforcing. It causes governmental dysfunction, which fuels public anger, which incites political disruption, which causes yet more governmental dysfunction. Reversing the spiral will require understanding it. Consider, then, the etiology of a political disease: the immune system that defended the body politic for two centuries; the gradual dismantling of that immune system; the emergence of pathogens capable of exploiting the new vulnerability; the symptoms of the disorder; and, finally, its prognosis and treatment.

3. Gaming Reality – Games and the Education System – CNN

To understand why Quest to Learn thinks games are crucial to the educational system, one first must take a look at why games are so compelling in the first place.

That’s easy if you talk to James Gee, a presidential professor at Arizona State University’s department of education, who has made a career out of assessing the usefulness of games.

“What a video game is, is it’s just a set of problems to solve — that’s it,” he said. “And it has a win state. You get feedback and you know when you’ve solved the problem. And then the game designer has to create good motivation for you to do that.”

Take “Angry Birds” as an example. It’s a game where smartphone owners are asked to use a digital slingshot to catapult birds across the screen and into towers of sticks with evil pigs — their enemies — hiding inside them.

That game is compelling, Gee would argue, because there is a clear problem (pigs that need to be pummeled), a feedback system (after each level, players are given stars based on how quickly they’re able to pummel said pigs) and, to some degree, an inherent motivational structure. Players who want to advance to harder levels (to “level-up” in gamer-speak) must improve their scores to do so — maximizing efficiency.

Something about it works: The game has been downloaded more than 1 billion times.

Quest is appropriating those ideas in a system it calls “game-like learning.” Instead of regular classes, kids are sent on missions where they’re expected to make their own discoveries and compete against other students or classrooms from the school.

Sometimes students here use video games and other high-tech tools — there’s a 3-D printer on site; laptops and iPads seem to be everywhere; kids play “The Sims” as part of class — but often they don’t. Duke’s “The Way Things Work” classroom looks pretty much like any other eclectic science class in America: bright posters on the walls, class pets in the back corner, a model skeleton leaning against the window.

It’s how the class is framed that’s different. Kids are told they are no longer students but explorers. They’re put inside a narrative that’s bigger than them — they’re dispatched on a “mission” to discover, in this case, the many mysteries of the metric system.

Then they play games in order to make those discoveries.

The magic of the school is that, just like in a video game, when one challenge ends another begins, co-founder Salen said. You move to the next level. The school is designed to create challenges that the students actually want to tackle, without worrying about grades or tests, just because they’re actually interested in the world.

Games “create a reason for young people to want to engage in a problem or around a set of content,” Salen said. “And then you make those resources around them available so they can do work and practice around that problem.”

Put another way: The carrot is always in front of the horse.

4. Is Instagram Ruining Travel? – BackChannel

Like a bad motivational speaker, I’m a sucker for a good sunrise and Angkor Wat was delivering. This was the edifice that had survived since the 12th century and had miraculously been spared the Khmer Rouge’s rod of destruction during Cambodia’s 1970s genocide. It has inspired sages for generations and graces everything from the labels of local beers to the set of “Tomb Raider,” an architectural wonder draped in a sheet of spiritual mystery and marvel.

I took a breath and a sip, then raised my iPhone to the sky. Thirty seconds of cropping and captioning later, I posted on Instagram an odious, travel-envy shot of the moment, knowing very well that most of my friends back in New York City were cold, miserable urban yetis. Meanwhile, I was eating, praying and loving and now there was a 1080-by-1080-pixel image to prove it.

It sounds like a moment of pure trekking ecstasy. But it was fraudulent.

I didn’t post what was behind me.

That scene — the fight for the perfect Instagram — is one I’ve witnessed over and over, on at least three continents during the last year or so. At times, it felt like destinations were morphing into mere photo sets. In New Zealand, I saw adventure companies that made getting the perfect photo-op part of their pitch for kayaking, hiking or ziplining expeditions. In Thailand, a woman next to me on a beach squealed to her friends about getting her hair just right for a shot destined for her Tinder profile. Back home in New York, I have more than once found myself in the crosshairs of a narcisstick aimed at a scraggly Elmo in Times Square.

Our desire to get the perfect sharable photo has even led to deaths by selfie, with Mashable reporting that in 2015 more people died from taking selfies than from shark attacks. Last year, Disneyland banned selfie sticks at its theme parks, citing safety as a concern. The Russian government even released a guide for how not to die taking one, kicking the “safe selfies” movement into motion.

To be sure, these are extremes. But as I stood there in Cambodia, swatted by selfie sticks, bruised by elbows, perfumed by the body odor of my fellow photogs, I realized the irony of being at a temple in which no one was really present. Was Instagram ruining travel?

“Most humans are never fully present in the now, because unconsciously they believe that the next moment must be more important than this one. But then you miss your whole life, which is never not now.” – Eckhart Tolle 

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


Filed under Newsletter

For Your Consideration: Words, Libraries, Reading, Writing, and Futures for Kids

1. Neil Gaiman: Why our future depends on libraries, reading and daydreaming​

I’m here giving this talk tonight, under the auspices of the Reading Agency: a charity whose mission is to give everyone an equal chance in life by helping people become confident and enthusiastic readers. Which supports literacy programs, and libraries and individuals and nakedly and wantonly encourages the act of reading. Because, they tell us, everything changes when we read.

And it’s that change, and that act of reading that I’m here to talk about tonight. I want to talk about what reading does. What it’s good for.

I was once in New York, and I listened to a talk about the building of private prisons – a huge growth industry in America. The prison industry needs to plan its future growth – how many cells are they going to need? How many prisoners are there going to be, 15 years from now? And they found they could predict it very easily, using a pretty simple algorithm, based on asking what percentage of 10 and 11-year-olds couldn’t read. And certainly couldn’t read for pleasure.

It’s not one to one: you can’t say that a literate society has no criminality. But there are very real correlations.

And I think some of those correlations, the simplest, come from something very simple. Literate people read fiction.

Fiction has two uses. Firstly, it’s a gateway drug to reading. The drive to know what happens next, to want to turn the page, the need to keep going, even if it’s hard, because someone’s in trouble and you have to know how it’s all going to end … that’s a very real drive. And it forces you to learn new words, to think new thoughts, to keep going. To discover that reading per se is pleasurable. Once you learn that, you’re on the road to reading everything. And reading is key. There were noises made briefly, a few years ago, about the idea that we were living in a post-literate world, in which the ability to make sense out of written words was somehow redundant, but those days are gone: words are more important than they ever were: we navigate the world with words, and as the world slips onto the web, we need to follow, to communicate and to comprehend what we are reading. People who cannot understand each other cannot exchange ideas, cannot communicate, and translation programs only go so far.

The simplest way to make sure that we raise literate children is to teach them to read, and to show them that reading is a pleasurable activity. And that means, at its simplest, finding books that they enjoy, giving them access to those books, and letting them read them.

2. What is writing? Why Telepathy, of Course

In his book On Writing, Stephen King explains what writing is in three words: “Telepathy, of course.”

Then again, this isn’t an entirely new concept. When we were kids, writing was explained as the act of transmitting ideas from our brains onto a sheet of paper. But I don’t know, telepathy just sounds better. For one thing, it means transmitting real objects from one space and time to another. I like this. It means writing doesn’t end or even begin with the writer’s internal struggle, but with the notion that the writer has something to show and can do so by making his mind connect with that of the reader.

And while this may sound ridiculous, even “cute,” as King says other people might call it, we’ve experienced what he’s talking about.

To make the point, King writes a description of a bunny, munching a carrot in a cage with the number 8 written on his back in blue ink and says this afterward, “The most interesting thing here isn’t even the carrot-munching rabbit in the cage, but the number on its back… This is what we’re looking at, and we all see it. I didn’t tell you. You didn’t ask me… We’re having a meeting of the minds.” And he was right. We are looking at the number on the bunny’s back. We feel connected to him as if we were present with him examining the number.

The question I ask now is, how did he do that? How did he know the number was the subject of our focus? It was his, but how did he know he had successfully pulled our gaze from the cage, bunny and the carrot and onto the blue number?

Of course, you could say, “it’s obviously the most interesting thing in the piece” or “Well, he is the writer, after all. He knew we would want to look at something as out of place as the blue number on the bunny.” And yes, the example is a very easy one to see. But his acclaim as a writer suggests this is not something he does by accident; he knows exactly what we are seeing. And my question is, how did he learn this?

3. Philip K Dick on Disneyland, reality and science fiction (1978)

It was always my hope, in writing novels and stories which asked the question “What is reality?”, to someday get an answer. This was the hope of most of my readers, too. Years passed. I wrote over thirty novels and over a hundred stories, and still I could not figure out what was real. One day a girl college student in Canada asked me to define reality for her, for a paper she was writing for her philosophy class. She wanted a one-sentence answer. I thought about it and finally said, “Reality is that which, when you stop believing in it, doesn’t go away.” That’s all I could come up with. That was back in 1972. Since then I haven’t been able to define reality any more lucidly.

But the problem is a real one, not a mere intellectual game. Because today we live in a society in which spurious realities are manufactured by the media, by governments, by big corporations, by religious groups, political groups—and the electronic hardware exists by which to deliver these pseudo-worlds right into the heads of the reader, the viewer, the listener. Sometimes when I watch my eleven-year-old daughter watch TV, I wonder what she is being taught. The problem of miscuing; consider that. A TV program produced for adults is viewed by a small child. Half of what is said and done in the TV drama is probably misunderstood by the child. Maybe it’s all misunderstood. And the thing is, Just how authentic is the information anyhow, even if the child correctly understood it? What is the relationship between the average TV situation comedy to reality? What about the cop shows? Cars are continually swerving out of control, crashing, and catching fire. The police are always good and they always win. Do not ignore that point: The police always win. What a lesson that is. You should not fight authority, and even if you do, you will lose. The message here is, Be passive. And—cooperate. If Officer Baretta asks you for information, give it to him, because Officer Baretta is a good man and to be trusted. He loves you, and you should love him.

4. Education Needs a Digital-Age Upgrade​

If you have a child entering grade school this fall, file away just one number with all those back-to-school forms: 65 percent.

Chances are just that good that, in spite of anything you do, little Oliver or Abigail won’t end up a doctor or lawyer — or, indeed, anything else you’ve ever heard of. According to Cathy N. Davidson, co-director of the annual MacArthur Foundation Digital Media and Learning Competitions, fully 65 percent of today’s grade-school kids may end up doing work that hasn’t been invented yet.

So Abigail won’t be doing genetic counseling. Oliver won’t be developing Android apps for currency traders or co-chairing Google’s philanthropic division. Even those digital-age careers will be old hat. Maybe the grown-up Oliver and Abigail will program Web-enabled barrettes or quilt with scraps of Berber tents. Or maybe they’ll be plying a trade none of us old-timers will even recognize as work.

For those two-thirds of grade-school kids, if for no one else, it’s high time we redesigned American education.

As Ms. Davidson puts it: “Pundits may be asking if the Internet is bad for our children’s mental development, but the better question is whether the form of learning and knowledge-making we are instilling in our children is useful to their future.”

In her galvanic new book, “Now You See It,” Ms. Davidson asks, and ingeniously answers, that question. One of the nation’s great digital minds, she has written an immensely enjoyable omni-manifesto that’s officially about the brain science of attention. But the book also challenges nearly every assumption about American education.

…Simply put, we can’t keep preparing students for a world that doesn’t exist. We can’t keep ignoring the formidable cognitive skills they’re developing on their own. And above all, we must stop disparaging digital prowess just because some of us over 40 don’t happen to possess it. An institutional grudge match with the young can sabotage an entire culture.

When we criticize students for making digital videos instead of reading “Gravity’s Rainbow,” or squabbling on Politico.com instead of watching “The Candidate,” we are blinding ourselves to the world as it is. And then we’re punishing students for our blindness. Those hallowed artifacts — the Thomas Pynchon novel and the Michael Ritchie film — had a place in earlier social environments. While they may one day resurface as relevant, they are now chiefly of interest to cultural historians. But digital video and Web politics are intellectually robust and stimulating, profitable and even pleasurable.


Albert Einstein was asked once how we could make our children intelligent. His reply was both simple and wise. “If you want your children to be intelligent,” he said, “read them fairy tales. If you want them to be more intelligent, read them more fairy tales.” 

If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me questions, feedback or raise issues for discussion. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.


Filed under Newsletter

For Your Consideration: Hijacking Minds, Chinese Robot Army, Tomorrow’s Internet, and Multiple Realities

Two pieces by Kevin Kelly this week; both are drawn from his new book, The Inevitable (required reading if you plan to be around in the future).

1. How Technology Hijacks People’s Minds – from a Magician and Google’s Design Ethicist

I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.

When using technology, we often focus optimistically on all the things it does for us. But I want to show you where it might do the opposite.

Where does technology exploit our minds’ weaknesses?

I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.

And this is exactly what product designers do to your mind. They play your psychological vulnerabilities (consciously and unconsciously) against you in the race to grab your attention.

I want to show you how they do it.

#1 [of 10] If You Control the Menu, You Control the Choices

Western Culture is built around ideals of individual choice and freedom. Millions of us fiercely defend our right to make “free” choices, while we ignore how those choices are manipulated upstream by menus we didn’t choose in the first place.

This is exactly what magicians do. They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. I can’t emphasize enough how deep this insight is.

When people are given a menu of choices, they rarely ask:

  • “what’s not on the menu?”
  • “why am I being given these options and not others?”
  • “do I know the menu provider’s goals?”
  • “is this menu empowering for my original need, or are the choices actually a distraction?” (e.g. an overwhelming array of toothpastes)

2. China Is Building a Robot Army of Model Workers

“The system is down,” explains Nie Juan, a woman in her early 20s who is responsible for quality control. Her team has been testing the robot for the past week. The machine is meant to place stickers on the boxes containing new routers, and it seemed to have mastered the task quite nicely. But then it suddenly stopped working. “The robot does save labor,” Nie tells me, her brow furrowed, “but it is difficult to maintain.”

The hitch reflects a much bigger technological challenge facing China’s manufacturers today. Wages in Shanghai have more than doubled in the past seven years, and the company that owns the factory, Cambridge Industries Group, faces fierce competition from increasingly high-tech operations in Germany, Japan, and the United States. To address both of these problems, CIG wants to replace two-thirds of its 3,000 workers with machines this year. Within a few more years, it wants the operation to be almost entirely automated, creating a so-called “dark factory.” The idea is that with so few people around, you could switch the lights off and leave the place to the machines.

But as the idle robot arm on CIG’s packaging line suggests, replacing humans with machines is not an easy task. Most industrial robots have to be extensively programmed, and they will perform a job properly only if everything is positioned just so. Much of the production work done in Chinese factories requires dexterity, flexibility, and common sense. If a box comes down the line at an odd angle, for instance, a worker has to adjust his or her hand before affixing the label. A few hours later, the same worker might be tasked with affixing a new label to a different kind of box. And the following day he or she might be moved to another part of the line entirely.

Despite the huge challenges, countless manufacturers in China are planning to transform their production processes using robotics and automation at an unprecedented scale. In some ways, they don’t really have a choice. Human labor in China is no longer as cheap as it once was, especially compared with labor in rival manufacturing hubs growing quickly in Asia. In Vietnam, Thailand, and Indonesia, factory wages can be less than a third of what they are in the urban centers of China. One solution, many manufacturers—and government officials—believe, is to replace human workers with machines.

3. You are not late

But, but…here is the thing. In terms of the internet, nothing has happened yet. The internet is still at the beginning of its beginning. If we could climb into a time machine and journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2044 were not invented until after 2014. People in the future will look at their holodecks, and wearable virtual reality contact lenses, and downloadable avatars, and AI interfaces, and say, oh, you didn’t really have the internet (or whatever they’ll call it) back then.

And they’d be right. Because from our perspective now, the greatest online things of the first half of this century are all before us. All these miraculous inventions are waiting for that crazy, no-one-told-me-it-was-impossible visionary to start grabbing the low-hanging fruit — the equivalent of the dot com names of 1984.

Because here is the other thing the greybeards in 2044 will tell you: Can you imagine how awesome it would have been to be an entrepreneur in 2014? It was a wide-open frontier! You could pick almost any category X and add some AI to it, put it on the cloud. Few devices had more than one or two sensors in them, unlike the hundreds now. Expectations and barriers were low. It was easy to be the first. And then they would sigh, “Oh, if only we realized how possible everything was back then!”

So, the truth: Right now, today, in [2016] is the best time to start something on the internet.

4. Hyper Vision – A survey of VR/AR/MR

One of the first things I learned from my recent tour of the synthetic-reality waterfront is that virtual reality is creating the next evolution of the Internet. Today the Internet is a network of information. It contains 60 trillion web pages, remembers 4 zettabytes of data, transmits millions of emails per second, all interconnected by sextillions of transistors. Our lives and work run on this internet of information. But what we are building with artificial reality is an internet of experiences. What you share in VR or MR gear is an experience. What you encounter when you open a magic window in your living room is an experience. What you join in a mixed-reality teleconference is an experience. To a remarkable degree, all these technologically enabled experiences will rapidly intersect and inform one another.

The recurring discovery I made in each virtual world I entered was that although every one of these environments was fake, the experiences I had in them were genuine. VR does two important things: One, it generates an intense and convincing sense of what is generally called presence. Virtual landscapes, virtual objects, and virtual characters seem to be there—a perception that is not so much a visual illusion as a gut feeling. That’s magical. But the second thing it does is more important. The technology forces you to be present—in a way flatscreens do not—so that you gain authentic experiences, as authentic as in real life. People remember VR experiences not as a memory of something they saw but as something that happened to them. 

…Not immediately, but within 15 years, the bulk of our work and play time will touch the virtual to some degree. Systems for delivering these shared virtual experiences will become the largest enterprises we have ever made. Fully immersive VR worlds already generate and consume gigabytes of data per experience. In the next 10 years the scale will increase from gigabytes per minute to terabytes per minute. The global technology industry—chip designers, consumer device makers, communication conglomerates, component manufacturers, content studios, software creators—will all struggle to handle the demands of this vast system as it blossoms. And only a few companies will dominate the VR networks because, as is so common in networks, success is self-reinforcing. The bigger the virtual society becomes, the more attractive it is. And the more attractive, the bigger yet it becomes. These artificial-reality winners will become the largest companies in history, dwarfing the largest companies today by any measure.

“My interest is in the future because I am going to spend the rest of my life there.” – Charles Kettering



Filed under Random

For Your Consideration: Epigenetics and Identity, Vulgar Vocabulary, Learning to Learn, and the Sublimity of Mike Rowe

1. The Science of Identity and Difference

Why are identical twins alike? In the late nineteen-seventies, a team of scientists in Minnesota set out to determine how much these similarities arose from genes, rather than environments—from “nature,” rather than “nurture.” Scouring thousands of adoption records and news clips, the researchers gleaned a rare cohort of fifty-six identical twins who had been separated at birth. Reared in different families and different cities, often in vastly dissimilar circumstances, these twins shared only their genomes. Yet on tests designed to measure personality, attitudes, temperaments, and anxieties, they converged astonishingly. Social and political attitudes were powerfully correlated: liberals clustered with liberals, and orthodoxy was twinned with orthodoxy. The same went for religiosity (or its absence), even for the ability to be transported by an aesthetic experience. Two brothers, separated by geographic and economic continents, might be brought to tears by the same Chopin nocturne, as if responding to some subtle, common chord struck by their genomes.

One pair of twins both suffered crippling migraines, owned dogs that they had named Toy, married women named Linda, and had sons named James Allan (although one spelled the middle name with a single “l”). Another pair—one brought up Jewish, in Trinidad, and the other Catholic, in Nazi Germany, where he joined the Hitler Youth—wore blue shirts with epaulets and four pockets, and shared peculiar obsessive behaviors, such as flushing the toilet before using it. Both had invented fake sneezes to defuse tense moments. Two sisters—separated long before the development of language—had invented the same word to describe the way they scrunched up their noses: “squidging.” Another pair confessed that they had been haunted by nightmares of being suffocated by various metallic objects—doorknobs, fishhooks, and the like.

The Minnesota twin study raised questions about the depth and pervasiveness of qualities specified by genes: Where in the genome, exactly, might one find the locus of recurrent nightmares or of fake sneezes? Yet it provoked an equally puzzling converse question: Why are identical twins different? Because, you might answer, fate impinges differently on their bodies. One twin falls down the crumbling stairs of her Calcutta house and breaks her ankle; the other scalds her thigh on a tipped cup of coffee in a European station. Each acquires the wounds, calluses, and memories of chance and fate. But how are these changes recorded, so that they persist over the years? We know that the genome can manufacture identity; the trickier question is how it gives rise to difference…

2. Is Swearing a Sign of a Limited Vocabulary? | Scientific American

When words fail us, we curse. At least this is what the “poverty-of-vocabulary” (POV) hypothesis would have us believe. On this account, swearing is the “sign of a weak vocabulary”, a result of a lack of education, laziness or impulsiveness. In line with this idea, we tend to judge vulgarians quite harshly, rating them as lower on socio-intellectual status, less effective at their jobs and less friendly.

But this view of the crass does not square with recent research in linguistics. For example, the POV hypothesis would predict that when people struggle to come up with the right words, they are more likely to spew swears left and right. But research shows that people tend to fill the awkward gaps in their language with “ers” and “ums” not “sh*ts” and “godd*mnits.” This research has led to a competing explanation for swearing: fluency with taboo words might be a sign of general verbal fluency. Those who are exceptionally vulgar might also be exceptionally eloquent and intelligent.  Indeed, taboo words hold a particular purpose in our lexicon that other words cannot as effectively accomplish: to deliver intense, succinct and directed emotional expression. So, those who swear frequently might just be more sophisticated in the linguistic resources they can draw from in order to make their point.

New research by cognitive scientists at Marist College and the Massachusetts College of Liberal Arts attempts to test this possibility, and further debunk the POV hypothesis, by measuring how taboo word fluency relates to general verbal fluency. The POV hypothesis suggests that there should be a negative correlation: the more you swear, the lower your verbal prowess. But the researchers hypothesized just the opposite: the more you swear, the more comprehensive your vocabulary would be.

3. Learning to Learn

“The ability to learn faster than your competitors may be the only sustainable competitive advantage.”

I’m not talking about relaxed armchair or even structured classroom learning. I’m talking about resisting the bias against doing new things, scanning the horizon for growth opportunities, and pushing yourself to acquire radically different capabilities—while still performing your job. That requires a willingness to experiment and become a novice again and again: an extremely discomforting notion for most of us.

Over decades of coaching and consulting to thousands of executives in a variety of industries, however, my colleagues and I have come across people who succeed at this kind of learning. We’ve identified four attributes they have in spades: aspiration, self-awareness, curiosity, and vulnerability. They truly want to understand and master new skills; they see themselves very clearly; they constantly think of and ask good questions; and they tolerate their own mistakes as they move up the learning curve.

Of course, these things come more naturally to some people than to others. But, drawing on research in psychology and management as well as our work with clients, we have identified some fairly simple mental tools anyone can develop to boost all four attributes—even those that are often considered fixed (aspiration, curiosity, and vulnerability).

4. The Importance of Being Dirty: Lessons from Mike Rowe

If you don’t already adore Mike Rowe, this conversation will make you. He’s an amazingly interesting guy on top of everything you thought you knew. Also, The Tim Ferriss Show is hands down one of my favorite podcasts. Light in tone but deep in intellectual curiosity about an immense variety of topics.

“Just because you love something doesn’t mean you can’t suck at it.” – Mike Rowe

Stream Here: http://traffic.libsyn.com/timferriss/Tim_Ferriss_Show_-_Mike_Rowe.mp3

Mike Rowe (@mikeroweworks) is perhaps the best storyteller and pitchman I’ve ever had on the show.

You might know Mike from his eight seasons of Dirty Jobs, but that’s just a tiny piece of the story.

His performing career began in 1984 when he faked his way into the Baltimore Opera to get his union card and meet girls, both of which he accomplished during a performance of Rigoletto. His transition to television occurred in 1990 when — to settle a bet — he auditioned for the QVC Shopping Channel and was promptly hired after talking about a pencil for nearly eight minutes. There, he worked the graveyard shift for three years, until he was ultimately fired for making fun of products and belittling viewers.  Now, he is a massively successful TV host, writer, narrator, producer, actor, and spokesman.

Why listen to this episode? You will learn:

  • Secrets of the perfect pitch
  • How Mike flew around the world for free (until he got caught)
  • Why to pursue opportunity instead of passion
  • How being different can help you win in business and life
  • The business of Mike Rowe
  • Favorite books, voice-over artists, and much, much more…

If you’re in a rush and just want a fantastic 5-minute story about his selling pencils for the QVC audition, click here.

“We are infected by our own misunderstanding of how our own minds work.” – Kevin Kelly



Filed under Random