For Your Consideration: Letting Kids Fail, Surveillant Anxiety, Stagnovation, Computational Options

It’s been a ridiculous couple of weeks. Hopefully the twice weekly publication schedule will be back on track soon.

1. Why Parents Need to Let Their Children Fail

This is what we teachers see most often: what the authors term “high responsiveness and low demandingness” parents. These parents are highly responsive to the perceived needs and issues of their children, and don’t give their children the chance to solve their own problems. These parents “rush to school at the whim of a phone call from their child to deliver items such as forgotten lunches, forgotten assignments, forgotten uniforms” and “demand better grades on the final semester reports or threaten withdrawal from school.” One study participant described the problem this way:

“I have worked with quite a number of parents who are so overprotective of their children that the children do not learn to take responsibility (and the natural consequences) of their actions. The children may develop a sense of entitlement and the parents then find it difficult to work with the school in a trusting, cooperative and solution focused manner, which would benefit both child and school.”

These are the parents who worry me the most — parents who won’t let their child learn. You see, teachers don’t just teach reading, writing, and arithmetic. We teach responsibility, organization, manners, restraint, and foresight. These skills may not get assessed on standardized testing, but as children plot their journey into adulthood, they are, by far, the most important life skills I teach.

I’m not suggesting that parents place blind trust in their children’s teachers; I would never do such a thing myself. But children make mistakes, and when they do, it’s vital that parents remember that the educational benefits of consequences are a gift, not a dereliction of duty. Year after year, my “best” students — the ones who are happiest and most successful in their lives — are the students who were allowed to fail, held responsible for missteps, and challenged to be the best people they could be in the face of their mistakes.

2. The Anxieties of Big Data

The lived reality of big data is suffused with a kind of surveillant anxiety — the fear that all the data we are shedding every day is too revealing of our intimate selves but may also misrepresent us. Like a fluorescent light in a dark corridor, it can both show too much and not enough. Anxiety, as Sianne Ngai has written, has a temporality that is future oriented: it is an expectation emotion, and the expectation is generally of risk, exposure, and failure. The British group Plan C, in their blistering manifesto “We Are All Very Anxious,” argue that anxiety is the dominant affect of our current phase of capitalism, engendering political hopelessness, insecurity, and social separation.

But the trick of a dominant cultural affect is that it functions as a kind of open secret: Everyone knows it, but nobody talks about it. In order to work against it, we first have to recognize the condition and trace its contours.

Surveillant anxiety is always a conjoined twin: The anxiety of those surveilled is deeply connected to the anxiety of the surveillers. But the anxiety of the surveillers is generally hard to see; it’s hidden in classified documents and delivered in highly coded languages in front of Senate committees. This is part of why Snowden’s revelations are so startling: They make it possible for us to see the often-obscured concerns of the intelligence agencies. And while there is an enormous structural power asymmetry between the surveillers and surveilled, neither are those with the greatest power free from being haunted by a very particular kind of data anxiety: that no matter how much data they have, it is always incomplete, and the sheer volume can overwhelm the critical signals in a fog of possible correlations.

If we take these twinned anxieties — those of the surveillers and the surveilled — and push them to their natural extension, we reach an epistemological end point: on one hand, the fear that there can never be enough data, and on the other, the fear that one is standing out in the data. These fears reinforce each other in a feedback loop, becoming stronger with each turn of the ratchet. As people seek more ways to blend in — be it through normcore dressing or hardcore encryption — more intrusive data collection techniques are developed. And yet, this is in many ways the expected conclusion of big data’s neopositivist worldview. As historians of science Lorraine Daston and Peter Galison once wrote, all epistemology begins in fear — fear that the world cannot be threaded by reason, fear that memory fades, fear that authority will not be enough.

3. The Whirlpool Economy

In previous eras, from the Middle Ages through to the 1970s, stagnation went hand in hand with low innovation: the economy stagnated because there was little underlying dynamism, few new ideas, and limited opportunities for entrepreneurship. Could we now live in an era where the economy is stagnating in part because there is so much innovation? Stagnation and innovation are combining to create a vicious whirlpool in which everything moves very fast and yet stays in the same place. Perhaps that helps to explain the dissonant feelings the always-on rush of modern life creates for so many people.

Technological tools that offer to make us more productive by undertaking tasks for us just end up helping us work longer hours, answering a torrent of emails, bleeps, updates and alerts. We feel busier than ever as digital diaries fill our days with meetings, yet oddly unproductive as achieving anything substantial requires extended periods of focus. One minute we’re over-stimulated by the screens that are our constant companions; the next we’re rendered powerless and listless by signal loss or the system going down.

All of these common feelings reflect a deeper disquiet: many feel richer and poorer at the same time. While wages stagnate, the squeezed middle classes hunt for bargains; rent out their spare rooms on Airbnb; get driven around by someone earning a little extra by working for Uber on his day off; and entertain themselves for free on YouTube.

This would not be the first time our economies have suffered from a toxic mix thought impossible by orthodox economics. The 70s were a time of stagflation: slow growth combined with stubbornly high unemployment and high inflation. Now we live in a time of stagnovation: slow growth combined with incessant innovation and rising inequality. Are stagnation, innovation and inequality becoming locked together?

4. Moore’s Law Is About to Get Weird
A “computer” used to be a job title for a person with paper, a slide rule, and a writing utensil.

In the nearly 70 years since the first modern digital computer was built, the above specs have become all but synonymous with computing. But they need not be. A computer is defined not by a particular set of hardware, but by being able to take information as input; to change, or “process,” the information in some controllable way; and to deliver new information as output. This information and the hardware that processes it can take an almost endless variety of physical forms. Over nearly two centuries, scientists and engineers have experimented with designs that use mechanical gears, chemical reactions, fluid flows, light, DNA, living cells, and synthetic cells.

Such now-unconventional means of computation collectively form the intuitively named realm of, well, unconventional computing. One expert has defined it as the study of “things which are already well forgotten or not discovered yet.” It is thus a field both anachronistic and ahead of its time.

But given the astounding success of conventional computing, which is now supported by a massive manufacturing industry, why study unconventional computing techniques at all? The answer, researchers say, is that one or more of these techniques could become conventional in the not-so-distant future. Moore’s Law, which states that the number of transistors that can be squeezed onto a semiconductor chip of a given size doubles roughly every two years, has held true since the mid-1960s, but past progress is no guarantee of future success: Further attempts at miniaturization will soon run into the hard barrier of quantum physics, as transistors get so small they can no longer be made out of conventional materials. At that point, which could be no more than a decade away, new ideas will be needed.
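The doubling arithmetic behind Moore’s Law is easy to make concrete. Here is a minimal sketch (the function name and starting figures are illustrative, not from the article; 2,300 transistors is the count on Intel’s 1971 4004 chip):

```python
def projected_transistors(start_count, start_year, target_year, doubling_period=2):
    """Project a transistor count forward, assuming it doubles
    every `doubling_period` years (Moore's Law-style growth)."""
    elapsed = target_year - start_year
    return start_count * 2 ** (elapsed / doubling_period)

# Twenty years forward from 1971 is ten doublings: 2300 * 2**10
print(projected_transistors(2300, 1971, 1991))  # 2355200.0
```

Exponential growth like this is exactly why the trend cannot continue indefinitely on silicon alone: each doubling halves the feature budget per transistor, until quantum effects dominate.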

So which unconventional technique will run our computers, phones, cars, and washing machines in the future? Here are a few possibilities. [Read the article for more detail.]

“Whatever good things we build end up building us.” – Jim Rohn

If you were forwarded this newsletter and enjoyed it, please subscribe here:

I hope that you’ll read these articles if they catch your eye and that you’ll learn as much as I did. Please email me with questions, feedback, or issues you’d like to discuss. Better yet, if you know of something on a related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.
