I like the author’s title, but I think something is lost when he uses the phrase “20th Century”. It sounds decidedly too modern. If it were instead “Stop Late-1900s Thinking”, I believe it would better carry the weight of the piece.
The model of education that most of us are products of was designed for a different time and a different purpose. The system was created to benefit industry as much as, if not more than, to create a freethinking society.
Technology, contrary to science fiction writers’ predictions, will not replace teachers. It will, however, change how we teach: from the 19th- and 20th-century model of teacher-controlled, teacher-directed learning to a 21st-century model of learner-directed learning. The teacher becomes more of a mentor and co-learner with students. When it comes to teaching students in the 21st century, I have come to believe it is more important to teach kids how to learn than to teach them what to learn.
A great disconnect occurs when we take 21st-century technology tools for learning and try to fit them into the 19th- and 20th-century model of teaching. I have witnessed English teachers run a composition assignment this way: students wrote a rough draft by hand, revised it, produced a final handwritten copy, and only then typed it into a word processor, without ever touching the spell check or grammar check. Those teachers learned that way and taught that way, and simply bolted the technology onto their 20th-century model of teaching. The tech tool was not used for learning. In their future lives those students will certainly use word processors for any writing they do. Is it not incumbent on their teachers to teach them how to do it correctly? (Yes, as an adult I effectively use a grammar check and a spell check on everything I write. Most people do, even the really smart ones.)
“The media’s coverage of Big Data is often dense, jargon-heavy and difficult to understand,” said Mark Coatney, Senior Vice President, Digital Media, Al Jazeera America. “Keller and Neufeld’s Terms of Service is a fun, graphic way to cut through the noise and boost the signal.”
Between social media profiles, browsing histories, discount programs and new tools controlling our energy use, there’s no escape. As we put ourselves into our technology through text messages and photos, and use technology to record new information about ourselves such as FitBit data, what are the questions a smart consumer should be asking? What is the tradeoff between the convenience of giving up personal data and the ways that data could be used against you? And what are the technologies that might seem invasive today but five years from now will seem quaint? How do we as technology users keep up with the pace while not letting our data determine who we are?
Artificial intelligence researchers have been tinkering with reinforcement learning for decades. But until DeepMind’s Atari demo, no one had built a system capable of learning anything nearly as complex as how to play a computer game, says Hassabis. One reason it was possible was a trick borrowed from his favorite area of the brain. Part of the Atari-playing software’s learning process involved replaying its past experiences over and over to try to extract the most accurate hints on what it should do in the future. “That’s something that we know the brain does,” says Hassabis. “When you go to sleep your hippocampus replays the memory of the day back to your cortex.”
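The “replaying past experiences” trick Hassabis describes is known in the reinforcement learning literature as experience replay. As a rough illustration only (not DeepMind’s actual code), the core mechanism can be sketched as a buffer that stores past transitions and serves random minibatches of them for training:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past (state, action, reward, next_state) transitions
    and hands back random minibatches of them for training."""

    def __init__(self, capacity):
        # Once full, the oldest experiences are evicted first.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Sampling at random breaks the correlation between
        # consecutive frames of play, which stabilizes learning.
        return random.sample(list(self.buffer), batch_size)

# Toy usage: record ten made-up transitions, then replay a batch of four.
buffer = ReplayBuffer(capacity=1000)
for t in range(10):
    buffer.add(state=t, action=t % 2, reward=1.0, next_state=t + 1)

batch = buffer.sample(4)
print(len(batch))  # 4
```

The learner trains on these replayed batches instead of only on the most recent frame, which is the software analogue of the hippocampus replaying the day’s memories back to the cortex.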
A year later, Russell and other researchers are still puzzling over exactly how that trick, and others used by DeepMind, led to such remarkable results, and what else they might be used for. Google didn’t take long to recognize the importance of the effort, announcing a month after the Tahoe demonstration that it had acquired DeepMind.
Interesting to note that they are still trying to figure out how their system accomplished its remarkable results. How do they improve something they don’t understand? How long before the code of more sophisticated systems is beyond us and requires another AI to interpret it?
In 1999, in the Journal of Personality and Social Psychology, my then graduate student Justin Kruger and I published a paper that documented how, in many areas of life, incompetent people do not recognize—scratch that, cannot recognize—just how incompetent they are, a phenomenon that has come to be known as the Dunning-Kruger effect. Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack. To know how skilled or unskilled you are at using the rules of grammar, for instance, you must have a good working knowledge of those rules, an impossibility among the incompetent. Poor performers—and we are all poor performers at some things—fail to see the flaws in their thinking or the answers they lack.
What’s curious is that, in many cases, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent are often blessed with an inappropriate confidence, buoyed by something that feels to them like knowledge.
Because it’s so easy to judge the idiocy of others, it may be sorely tempting to think this doesn’t apply to you. But the problem of unrecognized ignorance is one that visits us all. And over the years, I’ve become convinced of one key, overarching fact about the ignorant mind. One should not think of it as uninformed. Rather, one should think of it as misinformed.
An ignorant mind is precisely not a spotless, empty vessel, but one that’s filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of useful and accurate knowledge. This clutter is an unfortunate by-product of one of our greatest strengths as a species. We are unbridled pattern recognizers and profligate theorizers. Often, our theories are good enough to get us through the day, or at least to an age when we can procreate. But our genius for creative storytelling, combined with our inability to detect our own ignorance, can sometimes lead to situations that are embarrassing, unfortunate, or downright dangerous—especially in a technologically advanced, complex democratic society that occasionally invests mistaken popular beliefs with immense destructive power (See: crisis, financial; war, Iraq). As the humorist Josh Billings once put it, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” (Ironically, one thing many people “know” about this quote is that it was first uttered by Mark Twain or Will Rogers—which just ain’t so.)
“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” – Max Planck