For Your Consideration : How To Think, Computational Kindergarten, Post-Industrial Design, and 1980s Venture Capital

1. How To Think

Thinking is not IQ. When people talk about thinking, they make the mistake of assuming that people with high IQs think better. That’s not what I’m talking about. I hate to break it to you, but unless you’re trying to get into Mensa, IQ tests don’t matter. That’s not the type of knowledge or brainpower that makes you better at life, happier, or more successful. IQ is a measure, sure, but a relatively useless one.

If you want to outsmart people who are smarter than you, temperament and life-long learning are more important than IQ.

Two of the guiding principles that I follow on my path towards seeking wisdom are: (1) Go to bed smarter than when you woke up; and (2) I’m not smart enough to figure everything out myself, so I want to ‘master the best of what other people have already figured out.’

Acquiring wisdom is hard. Learning how to think is hard. It means sifting through information, filtering out the bunk, and connecting it to a framework you can use. A lot of people want to get their opinions from someone else. I know this because whenever anyone blurts out an opinion and I ask why, I get some hastily re-phrased sound bite that doesn’t contextualize the problem, identify the forces at play, demonstrate differences or similarities with previous situations, consider base rates, or … anything else that would demonstrate some level of thinking. (One of my favorite questions for probing thinking is to ask what information would cause someone to change their mind. Immediately stop listening and leave if they say ‘I can’t think of anything.’)

Thinking is hard work. I get it. You don’t have time to think, but that doesn’t mean you get a pass from me. I want to think for myself, thank you.

2. Computational Thinking And Why It’s Important

It’s a grounding in computational thinking—not a facility with the latest feature or product—that fosters future success in the field, whether students go on to become engineers or inventors or entrepreneurs.

That’s a powerful rationale for teaching computational thinking to our young people. But there’s a problem. In conventional computer science instruction, these principles are only accessible to those who learn how to program. This poses a big hurdle, especially for younger students. Enter Computer Science Unplugged, which has been developed at the University of Canterbury in New Zealand over the past two decades.

Professors Tim Bell, Mike Fellows and Ian H. Witten have figured out how to teach the concepts of computer science through games, puzzles and magic tricks. Taking the computer out of the picture—for the time being—allows children as young as five to learn about the basic ideas that undergird computer science. Youngsters can tackle topics as apparently abstruse as algorithms, binary numbers, Boolean circuits, and cryptographic protocols. The activities offered by Computer Science Unplugged are aimed at students in kindergarten through seventh grade, though they have been used by students in high school and even college.

Younger children might learn about “finite state automata”—sequential sets of choices—by following a pirates’ map, dashing around a playground in search of the fastest route to Treasure Island. Older kids can learn how computers compress text to save storage space by taking it upon themselves to compress the text of a book. This is done by marking repetitions of a word within a text, crossing out the word each time it reappears, and drawing an arrow back to its first appearance on the page. (Dr. Seuss books, like Green Eggs and Ham, compress especially efficiently because of their frequent repetitions.)
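
The marking-and-arrows activity is, at heart, dictionary compression. Here is a minimal sketch of the idea in Python, assuming simple word-level tokenization (real compressors such as LZ77 work on character windows, so treat this as the classroom version, not production code):

```python
# Replace each repeated word with an "arrow" (a back-reference) to its
# first appearance, just like crossing out repeats on the page.
def compress(text):
    seen = {}                               # word -> position of first use
    out = []
    for i, word in enumerate(text.split()):
        if word in seen:
            out.append(f"<{seen[word]}>")   # the arrow back
        else:
            seen[word] = i
            out.append(word)
    return " ".join(out)

def decompress(data):
    out = []
    for token in data.split():
        if token.startswith("<") and token.endswith(">"):
            out.append(out[int(token[1:-1])])   # follow the arrow
        else:
            out.append(token)
    return " ".join(out)

sample = "I do not like them Sam I am I do not like green eggs and ham"
packed = compress(sample)                   # repeats shrink to <index> arrows
assert decompress(packed) == sample
```

Repetitive texts like Green Eggs and Ham turn mostly into arrows, which is exactly why they compress so well.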

3. Heat Death: Venture Capital in the 1980s | Reaction Wheel
A fascinating read on the venture funding run of the ’80s.

The history-repeats-itself crowd thinks that there must be a bubble sooner or later. “Now?” they constantly ask, “Is it a bubble now?” as if history has to repeat whatever was most memorable about the last time. History may repeat itself, but there’s an awful lot of history that this particular venture capital cycle could repeat. Below is a short history of venture capital in the 1980s, my interpretation and comparison to the ’90s and today, and some thoughts about what that means. It’s long. If you’re attention-deprived, skip to ‘1980s v. 1990s’, about four-fifths of the way down.

4. Design for the Post-Industrial Era

Design is entering its golden age. Now, like never before, the value of the discipline is recognized. This recognition is both a welcome change and a challenge for designers as they move to designing for networked systems. Jon Follett, editor of Designing for Emerging Technologies, recently sat down with Matt Nish-Lapidus, partner and design director at Normative Design, who contributed to the book. Nish-Lapidus discusses the changing role of design and designers in emerging technology.

As Nish-Lapidus describes, we’re witnessing the evolution of product development from one craftsperson, one customer; to one craftsperson, many customers; to one craftsperson, one product that many people will customize. He explains how the crafted object and the nature of design have changed, beginning with the pre-industrial era:

“We go from having a single pair of glasses made for a single person, handmade usually, to a pair of glasses designed and then mass-manufactured for a countless number of people, to having a pair of glasses that expresses a lot of different things. On one hand, you have something like Google Glass, which is still mass-produced, but the glasses actually contain embedded functionality. Then we also have, with the emergence of 3D printing and small-scale manufacturing, a return to a little bit of that artisan, one-to-one relationship, where you could get something that someone’s made just for you.”

Disclaimer: The selections I use to describe the links are snippets, often edited together to better describe the original piece, each of which is worth reading on its original site.


My interest is in the future because I am going to spend the rest of my life there. – Charles Kettering


For Your Consideration : Emotion Sensing Machines, DIY Drugs, and Spider Tanks

1. We know how you feel

Incredibly interesting piece on teaching computers to sense human emotion. I kept thinking about how you could combine this with thermal imaging and voice stress analysis to get a pretty reliable remote lie detector. When would this be admissible in court? Could emotion sensing systems be used on archival video for historians? To review congressional testimony? To analyze CEOs during quarterly meetings to see if they are being truthful or enthusiastic about their prospects?

Affectiva is the most visible among a host of competing boutique startups: Emotient, Realeyes, Sension. After Kaliouby and I sat down, she told me, “I think that, ten years down the line, we won’t remember what it was like when we couldn’t just frown at our device, and our device would say, ‘Oh, you didn’t like that, did you?’ ” She took out an iPad containing a version of Affdex, her company’s signature software, which was simplified to track just four emotional “classifiers”: happy, confused, surprised, and disgusted. The software scans for a face; if there are multiple faces, it isolates each one. It then identifies the face’s main regions—mouth, nose, eyes, eyebrows—and it ascribes points to each, rendering the features in simple geometries. When I looked at myself in the live feed on her iPad, my face was covered in green dots. “We call them deformable and non-deformable points,” she said. “Your lip corners will move all over the place—you can smile, you can smirk—so these points are not very helpful in stabilizing the face. Whereas these points, like this at the tip of your nose, don’t go anywhere.” Serving as anchors, the non-deformable points help judge how far other points move.

Affdex also scans for the shifting texture of skin—the distribution of wrinkles around an eye, or the furrow of a brow—and combines that information with the deformable points to build detailed models of the face as it reacts. The algorithm identifies an emotional expression by comparing it with countless others that it has previously analyzed. “If you smile, for example, it recognizes that you are smiling in real time,” Kaliouby told me. I smiled, and a green bar at the bottom of the screen shot up, indicating the program’s increasing confidence that it had identified the correct expression. “Try looking confused,” she said, and I did. The bar for confusion spiked. “There you go,” she said.
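
To make the anchoring idea concrete, here is a toy sketch of how non-deformable points might stabilize the deformable ones. The landmark names, coordinates, and the crude smile rule are all hypothetical illustrations, not Affdex’s actual model:

```python
# Express deformable points (lip corners) relative to a non-deformable
# anchor (nose tip), so head movement doesn't register as expression.
# All names, numbers, and the smile rule are made up for illustration.
ANCHOR = "nose_tip"

def relative_to_anchor(landmarks):
    ax, ay = landmarks[ANCHOR]
    return {name: (x - ax, y - ay)
            for name, (x, y) in landmarks.items() if name != ANCHOR}

def smile_confidence(neutral, current):
    """Rising lip corners (smaller y means higher on screen) suggest a smile."""
    n, c = relative_to_anchor(neutral), relative_to_anchor(current)
    lift = sum(n[p][1] - c[p][1]
               for p in ("lip_corner_left", "lip_corner_right"))
    return max(0.0, min(1.0, lift / 10.0))  # clamp to a 0..1 confidence bar

neutral = {"nose_tip": (100, 100),
           "lip_corner_left": (85, 120), "lip_corner_right": (115, 120)}
smiling = {"nose_tip": (102, 101),      # head moved slightly; the anchor absorbs it
           "lip_corner_left": (84, 117), "lip_corner_right": (118, 116)}
print(smile_confidence(neutral, smiling))   # ~0.9: the green bar shoots up
```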

Many companies are moving to take advantage of this shift. “We put together a patent application for a system that could dynamically price advertising depending on how people responded to it,” Kaliouby told me one afternoon. I found more than a hundred other patents for emotion-sensing technology, many of them tied to advertising. Represented: A.O.L., Hitachi, eBay, I.B.M., Yahoo!, and Motorola. Sony had filed several; its researchers anticipated games that build emotional maps of players, combining data from sensors and from social media to create “almost dangerous kinds of interactivity.” There were patents for emotion-sensing vending machines, and for A.T.M.s that would understand if users were “in a relaxed mood,” and receptive to advertising. Anheuser-Busch had designed a responsive beer bottle, because sports fans at games “wishing to use their beverage containers to express emotion are limited to, for example, raising a bottle to express solidarity with a team.”

Not long ago, Verizon drafted plans for a media console packed with sensors, including a thermographic camera (to measure body temperature), an infrared laser (to gauge depth), and a multi-array microphone. By scanning a room, the system could determine the occupants’ age, gender, weight, height, skin color, hair length, facial features, mannerisms, what language they spoke, and whether they had an accent. It could identify pets, furniture, paintings, even a bag of chips. It could track “ambient actions”: eating, exercising, reading, sleeping, cuddling, cleaning, playing a musical instrument. It could probe other devices—to learn what a person might be browsing on the Web, or writing in an e-mail. It could scan for affect, tracking moments of laughter or argument. All this data would then shape the console’s choice of TV ads. A marital fight might prompt an ad for a counsellor. Signs of stress might prompt ads for aromatherapy candles. Upbeat humming might prompt ads “configured to target happy people.” The system could then broadcast the ads to every device in the room.

In 2013, Representative Mike Capuano, of Massachusetts, drafted the We Are Watching You Act, to compel companies to indicate when sensing begins, and to give consumers the right to disable it.

2. DIY Drugs Digital Future

I’ve been asked to do this a dozen times. Every editor said to me, “Can you make us a drug?” and I said, “Yes, but what would the point be?” And they couldn’t give me an answer. [Then] I started talking with Bobby Johnson, an editor at Matter. I said, “What was the point at which, culturally, drugs actually became part of the weave of everyday society?”

My contention would be that that was the birth of LSD and the Beatles and the ’60s. So I thought, What was the first drug experience that the Beatles had? And it was Benzedrine, but I didn’t fancy making Benzedrine [an amphetamine] because it isn’t as unusual.

The legal high story represents a pivotal change in the way that drugs are manufactured, consumed, experienced, and mediated in a society, and I wanted to find a drug that was taken by the man who introduced LSD to the United Kingdom.

There is a picture of the Beatles holding tubes of Preludin. I thought, You know? That’s no different than young kids now posing on Facebook with a pile of mephedrone—except for the way that the whole world is so interconnected.

It just tied together a few strings for me: privacy, publicity, the consequences of drug use. [I wanted] to make a legal version of John Lennon’s favorite drug. It’s a great headline, isn’t it?

What was your scariest moment when you were having the drug made?

When I went to collect it, walking through the streets of London with a bag of five grams of white powder. If the police stopped me, I would have to tell them that it was actually a legal version of Preludin that I had had synthesized in a Shanghai laboratory.

I don’t fancy my chances that the police would have believed me. I think I would have been taken to the cells while they sent it off for testing. That was really scary.

3. Spider Tanks

Because, Spider Tanks… These renderings are just stunning in their detail and design.

Disclaimer: The selections I use to describe the links are snippets, often edited together to better describe the original piece, each of which is worth reading on its original site.

Learning is not compulsory but neither is survival. – W. Edwards Deming


For Your Consideration : 4 Links : 1/13/2015

1. The Tools Of Their Tools

Carr’s topic in the new book is automation. Although the word can ultimately be traced back to the Greek automatos, typically rendered “self-moving” or “self-acting,” Carr notes that our English word “automation” is of surprisingly recent vintage: engineers at the Ford Motor Company reportedly coined the term in 1946 after struggling to refer to the new machinery churning out cars on the assembly lines. A little over a decade later, the word had already become freighted with the hopes and anxieties of the age. Carr tells of a Harvard business professor who wrote in 1958, “It has been used as a technological rallying cry, a manufacturing goal, an engineering challenge, an advertising slogan, a labor campaign banner, and as the symbol of ominous technological progress.” Carr aims to investigate automation in all these variegated senses, and more.

The conventional wisdom about technology — or at least one popular, mainstream view — holds that new technologies almost always better our lives. Carr, however, thinks that the changes we take to be improvements in our lives can obscure more nuanced and ambiguous changes, and that the dominant narrative of inevitable technological progress misconstrues our real relationship with technology. Philosophers of technology, as Albert Borgmann said in a 2003 interview, tend not to celebrate beneficial technological developments, “because they get celebrated all the time. Philosophers point out the liabilities — what happens when technology moves beyond lifting genuine burdens and starts freeing us from burdens that we should not want to be rid of.” A true philosopher of technology, Carr argues that the liabilities associated with automation threaten to impair the conditions required for meaningful work and action, and ultimately for leading meaningful lives.

Carr’s foray into aviation is not an isolated case study. He sees it as a window into a future in which automation becomes increasingly pervasive. “As we begin to live our lives inside glass cockpits,” he warns, “we seem fated to discover what pilots already know: a glass cockpit can also be a glass cage.” The consequences will not always be so catastrophic and the systems will not always be so totalizing. But that does not make the range of technologies any less worthy of the crucial questions that Carr asks: “Am I the master of the machine, or its servant? Am I an actor in the world, or an observer? Am I an agent, or an object?”

2. Sebastian Seung’s Quest to Map the Human Brain

In 2012, Seung started EyeWire, an online game that challenges the public to trace neuronal wiring — now using computers, not pens — in the retina of a mouse’s eye. Seung’s artificial-intelligence algorithms process the raw images, then players earn points as they mark, paint-by-numbers style, the branches of a neuron through a three-dimensional cube. The game has attracted 165,000 players in 164 countries. In effect, Seung is employing artificial intelligence as a force multiplier for a global, all-volunteer army that has included Lorinda, a Missouri grandmother who also paints watercolors, and Iliyan (a.k.a. @crazyman4865), a high-school student in Bulgaria who once played for nearly 24 hours straight. Computers do what they can and then leave the rest to what remains the most potent pattern-recognition technology ever discovered: the human brain.

Ultimately, Seung still hopes that artificial intelligence will be able to handle the entire job. But in the meantime, he is working to recruit more help. In August, South Korea’s largest telecom company announced a partnership with EyeWire, running nationwide ads to bring in more players. In the next few years, Seung hopes to go bigger by enticing a company to turn EyeWire into a game with characters and a story line that people play purely for fun. “Think of what we could do,” Seung said, “if we could capture even a small fraction of the mental effort that goes into Angry Birds.”

3. King of Clickbait | The Virologist

Much of the company’s success online can be attributed to a proprietary algorithm that it has developed for “headline testing”—a practice that has become standard in the virality industry. When a Dose post is created, it initially appears under as many as two dozen different headlines, distributed at random. Whereas one person’s Facebook news feed shows a link to “You Won’t Believe What This Guy Did with an Abandoned Factory,” another person, two feet away, might see “At First It Looks Like an Old Empty Factory. But Go Inside and . . . WHOA.” Spartz’s algorithm measures which headline is attracting clicks most quickly, and after a few hours, when a statistically significant threshold is reached, the “winning” headline automatically supplants all others. “I’m really, really good at writing headlines,” he told me. “But any human’s intuition can only be so good. If you can build a machine that can solve the problem better than you can, then you really understand the problem.” …
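
The mechanics Spartz describes map onto a standard A/B test. A minimal sketch, assuming a simple two-proportion z-test as the significance check (the actual proprietary algorithm is not public):

```python
# Serve headline variants at random, track click-through, and lock in a
# winner once its lead over the runner-up is statistically significant.
import math
import random

class HeadlineTest:
    def __init__(self, headlines, z_threshold=1.96):   # ~95% confidence
        self.stats = {h: {"views": 0, "clicks": 0} for h in headlines}
        self.z_threshold = z_threshold
        self.winner = None              # once set, it supplants all others

    def serve(self):
        if self.winner:
            return self.winner
        headline = random.choice(list(self.stats))
        self.stats[headline]["views"] += 1
        return headline

    def record_click(self, headline):
        self.stats[headline]["clicks"] += 1
        self._check_for_winner()

    def _check_for_winner(self):
        ranked = sorted(self.stats.items(), reverse=True,
                        key=lambda kv: kv[1]["clicks"] / max(kv[1]["views"], 1))
        (top, a), (runner_up, b) = ranked[0], ranked[1]
        if min(a["views"], b["views"]) < 100:          # not enough data yet
            return
        p1, p2 = a["clicks"] / a["views"], b["clicks"] / b["views"]
        pooled = (a["clicks"] + b["clicks"]) / (a["views"] + b["views"])
        se = math.sqrt(pooled * (1 - pooled) * (1 / a["views"] + 1 / b["views"]))
        if se > 0 and (p1 - p2) / se > self.z_threshold:
            self.winner = top
```

One caveat worth noting: with two dozen variants live at once, a naive pairwise test will crown a false winner fairly often, so a production system would correct for multiple comparisons or use a bandit approach.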

Earlier, in Casterly Rock, Spartz and I had spoken about targeted advertising. “The future of media is an ever-increasing degree of personalization,” he said. “My CNN won’t look like your CNN. So we want Dose, eventually, to be tailored to each user. You shouldn’t have to choose what you want, because we will be able to get enough data to know what you want better than you do.”

On a whiteboard behind him were the phrases “old media,” “Tribune,” and “$100 M.” “The lines between advertising and content are blurring,” he said. “Right now, if you go to any Web site, it will know where you live, your shopping history, and it will use that to give you the best ad. I can’t wait to start doing that with content. It could take a few months, a few years—but I am motivated to get started on it right now, because I know I’ll kill it.”

4. The problem isn’t that life is unfair – it’s your broken idea of fairness

We’re all in competition, although we prefer not to realise it. Most achievements are only notable relative to others. You swam more miles, or can dance better, or got more Facebook Likes than the average. Well done.

It’s a painful thing to believe, of course, which is why we’re constantly assuring each other the opposite. “Just do your best”, we hear. “You’re only in competition with yourself”. The funny thing about platitudes like that is they’re designed to make you try harder anyway. If competition really didn’t matter, we’d tell struggling children to just give up.

Fortunately, we don’t live in a world where everyone has to kill each other to prosper. The blessing of modern civilisation is that there are abundant opportunities, and enough for us all to get by, even if we don’t compete directly.

Society judges people by what they can do for others. Can you save children from a burning house, or remove a tumour, or make a room of strangers laugh? You’ve got value right there.

That’s not how we judge ourselves though. We judge ourselves by our thoughts.

“I’m a good person”. “I’m ambitious”. “I’m better than this.” These idle impulses may comfort us at night, but they’re not how the world sees us. They’re not even how we see other people.

People like to invent moral authority. It’s why we have referees in sports games and judges in courtrooms: we have an innate sense of right and wrong, and we expect the world to comply. Our parents tell us this. Our teachers teach us this. Be a good boy, and have some candy.

But reality is indifferent. You studied hard, but you failed the exam. You worked hard, but you didn’t get promoted. You love her, but she won’t return your calls.

The problem isn’t that life is unfair; it’s your broken idea of fairness.

Disclaimer: The selections I use to describe the links are snippets, often edited together to better describe the original piece, each of which is worth reading on its original site.

Miracles sometimes occur, but one has to work terribly hard for them. – Chaim Weizmann


For Your Consideration : 4 Links : 1/6/2015

Subscribe here: https://tinyletter.com/peopleinpassing

1. How Laws Restricting Tech Actually Expose Us to Greater Harm

The general-purpose computer is one of the crowning achievements of industrial society. Prior to its invention, electronic calculating engines were each hardwired to do just one thing, like calculate ballistics tables. John von Neumann’s “von Neumann architecture” and Alan Turing’s “Turing-complete computer” provided the theoretical basis for building a calculating engine that could run any program that could be expressed in symbolic language. That breakthrough still ripples through society, revolutionizing every corner of our world. When everything is made of computers, an improvement in computers makes everything better.

But there’s a terrible corollary to that virtuous cycle: Any law or regulation that undermines computers’ utility or security also ripples through all the systems that have been colonized by the general-purpose computer. And therein lies the potential for untold trouble and mischief.

Because while we’ve spent the past 70 years perfecting the art of building computers that can run every single program, we have no idea how to build a computer that can run every program except the one that infringes copyright or prints out guns or lets a software-based radio be used to confound air-traffic control signals or cranks up the air-conditioning even when the power company sends a peak-load message to it.

The closest approximation we have for “a computer that runs all the programs except the one you don’t like” is “a computer that is infected with spyware out of the box.”

2. Accelerating Drug Development with Organ-on-a-Chip Technology
Something that may help reverse Eroom’s Law (that’s Moore’s Law spelled backward), which observes that the number of new drugs approved per billion dollars spent on R&D has halved roughly every nine years since 1950.
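
“Halves every nine years” is ordinary exponential decay, so the claim is easy to sanity-check. A quick sketch (the 1950 baseline below is purely illustrative):

```python
# Drugs approved per billion (inflation-adjusted) R&D dollars, assuming a
# clean halving every nine years. The baseline value is illustrative only.
def drugs_per_billion(year, baseline=30.0, start=1950, halving_years=9):
    return baseline * 0.5 ** ((year - start) / halving_years)

for year in (1950, 1980, 2010):
    print(year, round(drugs_per_billion(year), 2))
# 1950: 30.0, 1980: ~2.97, 2010: ~0.3 -- roughly a 100x efficiency decline
# over 60 years (2 ** (60 / 9) is about 101), whatever the baseline.
```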

The idea is to authentically replicate, or “bioemulate” in science-speak, the workings of human organs. This way, scientists and even clinicians without high-level expertise can determine the efficacy and safety of potential new drugs, chemicals and cosmetics, with no animal models in the process.

“This advanced technology is the beginning of a revolution in the way we study human biology and disease,” said senior scientist Geraldine Hamilton. She added that Emulate is more predictive of the human situation than animal models, besides being more cost-effective and less time-consuming. Therefore, new pharmaceuticals could get to market, and to those in need of them, more rapidly.

Another aspect of the new technology is that it paves the way for more personalized treatment with stem cells. “Our vision is we can one day put each patient’s cells on chips that mimic the function of organs, and this will open up new ways for us to design truly personalized treatment with stem cells, based on each patient’s unique genetic profile on their own individualized Organs-on-Chips,” added Shlomo Melmed, senior vice president for Academic Affairs and Dean of the medical faculty at Cedars-Sinai, one of the institutional investors in the new company.

Besides the lung-on-a-chip, in the last four years the researchers have also developed more than ten types of organs-on-chips, including some that emulate the liver, gut, kidney, and bone marrow.

3. We Still Don’t Know Who Hacked Sony

The blurring of lines between individual actors and national governments has been happening more and more in cyberspace. What has been called the first cyberwar, Russia vs. Estonia in 2007, was partly the work of a 20-year-old ethnic Russian living in Tallinn, and partly the work of a pro-Kremlin youth group associated with the Russian government. Many of the Chinese hackers targeting Western networks seem to be unaffiliated with the Chinese government. And in 2011, the hacker group Anonymous threatened NATO.

It’s a strange future we live in when we can’t tell the difference between random hackers and major governments, or when those same random hackers can credibly threaten international military organizations.

This is why people around the world should care about the Sony hack. In this future, we’re going to see an even greater blurring of traditional lines between police, military, and private actions as technology broadly distributes attack capabilities across a variety of actors. This attribution difficulty is here to stay, at least for the foreseeable future.

If North Korea is responsible for the cyberattack, how is the situation different than a North Korean agent breaking into Sony’s office, photocopying a lot of papers, and making them available to the public? Is Chinese corporate espionage a problem for governments to solve, or should we let corporations defend themselves? Should the National Security Agency defend U.S. corporate networks, or only U.S. military networks? How much should we allow organizations like the NSA to insist that we trust them without proof when they claim to have classified evidence that they don’t want to disclose? How should we react to one government imposing sanctions on another based on this secret evidence? More importantly, when we don’t know who is launching an attack or why, who is in charge of the response and under what legal system should those in charge operate?

4. Can Genius.com (Minus the Rap) Annotate The World?

Lehman and Zechory have spent much of 2014 trying to scrub their past clean. They’ve shortened the company’s name to Genius and secured $40 million in funding to plunge fully into a Silicon Valley “pivot”: the transition from doing one thing better than anyone else—annotating rap lyrics—to doing something bigger and bolder—“annotating the world,” a capaciously vague ambition that no one, themselves included, is certain they can pull off. Annotation has been a Silicon Valley dream since the invention of the first web browser, but it has yet to produce an elegant solution comparable to what Wikipedia did with the crowdsourced encyclopedia. The Genius founders see their platform as a means for enlightened discussion in contrast to the dark world of the internet comment. Users can upload a text, click on any word, and add whatever context they deem worthwhile. Most annotations must be approved by other members of the Genius community, so that only valuable commentary, grounded in specific parts of a given text, will pass muster and appear on the site. But the Genius founders’ ultimate goal is bigger still. If their plans come to fruition, users will visit genius.com to annotate Shakespeare, Apple earnings reports, and the State of the Union, but the Genius platform will be built into the code of every website in the world, allowing users to mark up any text, anywhere. “It’s gonna take a decade to build or more,” the investor Horowitz said. “But it ought to be as long-lasting as any technology company that’s getting built right now and any that’s in existence.” In grasping for an analogy to encompass their ambition, the Genius founders have variously described the project as a “wall of history” and an “internet Talmud.” Critics, who consider the majority of its annotations sophomoric at best, have called it an “internet decoder ring.”
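
Under the hood, “annotating the world” reduces to a small data problem: pin a comment to a span of text and gate it behind community review. A minimal sketch with hypothetical field names (Genius’s real schema is not public):

```python
# An annotation anchored to a character span of a source text, shown only
# after enough community approvals. Field names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start: int                  # character offsets into the source text
    end: int
    body: str                   # the contributed context
    author: str
    approvals: set = field(default_factory=set)

@dataclass
class Document:
    text: str
    annotations: list = field(default_factory=list)
    quorum: int = 2             # approvals required before it appears

    def annotate(self, start, end, body, author):
        assert 0 <= start < end <= len(self.text)
        self.annotations.append(Annotation(start, end, body, author))

    def visible(self):
        """Only commentary that passed community review shows on the page."""
        return [(self.text[a.start:a.end], a.body)
                for a in self.annotations if len(a.approvals) >= self.quorum]

doc = Document("To be, or not to be, that is the question")
doc.annotate(0, 19, "Hamlet weighs existence against oblivion.", "reader1")
doc.annotations[0].approvals.update({"mod1", "mod2"})
print(doc.visible())    # [('To be, or not to be', 'Hamlet weighs...')]
```

The hard part this glosses over is anchoring: character offsets break the moment the underlying page changes, which is one reason web-wide annotation has resisted an elegant solution for so long.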

“All Rhodes Scholars had a great future in their past.”
― Peter Thiel, Zero to One: Notes on Startups, or How to Build the Future


For Your Consideration : 4 Links : 1/2/2015

Subscribe here: https://tinyletter.com/peopleinpassing

1. You Are Not Late

But, but…here is the thing. In terms of the internet, nothing has happened yet. The internet is still at the beginning of its beginning. If we could climb into a time machine and journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2044 were not invented until after 2014. People in the future will look at their holodecks, and wearable virtual reality contact lenses, and downloadable avatars, and AI interfaces, and say, oh, you didn’t really have the internet (or whatever they’ll call it) back then.

And they’d be right. Because from our perspective now, the greatest online things of the first half of this century are all before us. All these miraculous inventions are waiting for that crazy, no-one-told-me-it-was-impossible visionary to start grabbing the low-hanging fruit — the equivalent of the dot com names of 1984.

Because here is the other thing the greybeards in 2044 will tell you: Can you imagine how awesome it would have been to be an entrepreneur in 2014? It was a wide-open frontier! You could pick almost any category X and add some AI to it, put it on the cloud. Few devices had more than one or two sensors in them, unlike the hundreds now. Expectations and barriers were low. It was easy to be the first. And then they would sigh, “Oh, if only we realized how possible everything was back then!”

So, the truth: Right now, today, in 2014 is the best time to start something on the internet. There has never been a better time in the whole history of the world to invent something. There has never been a better time with more opportunities, more openings, lower barriers, higher benefit/risk ratios, better returns, greater upside, than now. Right now, this minute.

2. The Most Futuristic Predictions That Came True In 2014
A really, truly amazing list: proof that we live in a time when science fiction is becoming reality. Depending on your personal filter bubble, though, 2014 may have felt like it was full of chaos and terror instead of broadly good. Links within the article for details.

1. Technologically-assisted telepathy was successfully demonstrated in humans
2. NASA emailed a wrench to the space station
3. Surgeons began using suspended animation
4. The U.S. Navy deployed a functional laser weapon
5. Scientists “uploaded” a worm’s mind into a robot
6. A computer solved a math problem that we can’t check
7. An artificial chromosome was built from scratch
8. A venture capital firm appointed an AI to the board
9. A double amputee received two mind-controlled arms
10. A cloaking device that hides objects in the visible spectrum
11. An orangutan became a legally recognized person
12. Self-guiding sniper bullets became a reality
13. A proto-cyber war erupted between the U.S. and N. Korea (based on some pretty crappy intel)
14. Humanity landed a robot on a comet

3. The Oil Crisis Explained In 3 Minutes
Crisis? Gas is the cheapest it’s been in years! Well… that’s true, but let’s take a few geopolitical steps back.

Shale. Technological improvements in drilling have enabled access to previously difficult-to-reach reserves in many parts of the US and abroad. In particular, drillers can now extract oil from shale. When supply goes up, prices come down. And that’s what prices did.

The Sucker Punch: OPEC does not come to the rescue. Then came the curveball. What usually happens when oil prices drop fast is that OPEC steps in and saves the day. (OPEC, the Organization of the Petroleum Exporting Countries, is a cartel that works to control oil prices to its members’ benefit.) The logic goes like this: OPEC wants higher oil prices so its members can sell at a higher price. So when prices fall, they usually come together to cut supply in unison. But this time, they refused.

As oil continued to plummet, OPEC made a public statement that basically went like this:

“Too bad. Deal with it.” This caught a lot of traders and investors flat-footed, as many expected OPEC to intervene. Reflecting this new reality, oil continued its plunge, breaking below $70 per barrel.

By refusing to intervene, OPEC exacerbated the oil price collapse. And many drillers that were counting on high oil prices would now be losing money.

4. By 2025, the Definition of ‘Privacy’ Will Have Changed
A new divide of haves and have-nots will emerge between those who have encryption and know how to use it and those who do not.

Experts agreed, though, that our expectations about personal privacy are changing dramatically. While privacy once generally meant, “I assume no one is looking,” as one respondent put it, the public is beginning to accept the opposite: that someone usually is. And whether or not people accept it, that new normal—public life and mass surveillance as a default—will become a component of the ever-widening socioeconomic divide. Privacy as we know it today will become a luxury commodity. Opting out will be for the rich. To some extent that’s already true. Consider the supermarkets that require you to fill out an application—including your name, address, phone number, and so on—in order to get a rewards card that unlocks coupons. Here’s what Kate Crawford, a researcher who focuses on ethics in the age of big data, told Pew:

“In the next 10 years, I would expect to see the development of more encryption technologies and boutique services for people prepared to pay a premium for greater control over their data. This is the creation of privacy as a luxury good. It also has the unfortunate effect of establishing a new divide: the privacy rich and the privacy poor. Whether genuine control over your information will be extended to the majority of people—and for free—seems very unlikely, without a much stronger policy commitment.”

And there’s little incentive for the entities that benefit from a breakdown in privacy to change the way they operate. In order to get more robust privacy protections—like terms of service agreements that are actually readable to non-lawyers, or rules that let people review the personal information that data brokers collect about them—many experts agree that individuals will have to demand them. But even that may not work.

Where there’s tension between convenience and privacy, individuals are already primed to give up their right to be left alone. For instance, consider the Facebook user who feels uneasy about the site’s interest in her personal data but determines quitting isn’t an option because she’d be giving up the easiest way to stay in touch with friends and family.

“. . . [T]hou wilt not trust the air with secrets.”
— Shakespeare, Titus Andronicus 


For Your Consideration : 4 Links : 12/26/2014

Subscribe here: https://tinyletter.com/peopleinpassing

1. Hacking The President’s DNA

More to the point, consider that the DNA of world leaders is already a subject of intrigue. According to Ronald Kessler, the author of the 2009 book In the President’s Secret Service, Navy stewards gather bedsheets, drinking glasses, and other objects the president has touched—they are later sanitized or destroyed—in an effort to keep would‑be malefactors from obtaining his genetic material. (The Secret Service would neither confirm nor deny this practice, nor would it comment on any other aspect of this article.) And according to a 2010 release of secret cables by WikiLeaks, Secretary of State Hillary Clinton directed our embassies to surreptitiously collect DNA samples from foreign heads of state and senior United Nations officials. Clearly, the U.S. sees strategic advantage in knowing the specific biology of world leaders; it would be surprising if other nations didn’t feel the same.

While no use of an advanced, genetically targeted bio-weapon has been reported, the authors of this piece—including an expert in genetics and microbiology (Andrew Hessel) and one in global security and law enforcement (Marc Goodman)—are convinced we are drawing close to this possibility. Most of the enabling technologies are in place, already serving the needs of academic R&D groups and commercial biotech organizations. And these technologies are becoming exponentially more powerful, particularly those that allow for the easy manipulation of DNA.

2. How To Teach All Students To Think Critically

The problem is that critical thinking is the Cheshire Cat of educational curricula – it is hinted at in all disciplines but appears fully formed in none. As soon as you push to see it in focus, it slips away.

If you ask curriculum designers exactly how critical thinking skills are developed, the answers are often vague and unhelpful for those wanting to teach it.

This is partly because of a lack of clarity about the term itself and partly because some believe that critical thinking cannot be taught in isolation, that it can only be developed in a disciplinary context – after all, you have to think critically about something.

So what should any mandatory first year course in critical thinking look like? There is no single answer to that, but let me suggest a structure with four key areas:

1. Argumentation
Arguing, as opposed to simply disagreeing, is the process of intellectual engagement with an issue and an opponent with the intention of developing a position justified by rational analysis and inference.

2. Logic
People generally speak of formal logic – basically the logic of deduction – and informal logic – also called induction. Deduction is most of what goes on in mathematics or Sudoku puzzles, and induction is usually about generalising or analogising and is integral to the processes of science.

3. Psychology
We are masses of cognitive biases as much as we are rational beings. This does not mean we are flawed; it just means we don’t think in the nice, linear way that educators often like to think we do.

4. The Nature of Science
Learning about what the differences are between hypotheses, theories and laws, for example, can help people understand why science has credibility without having to teach them what a molecule is, or about Newton’s laws of motion.

3. David Foster Wallace And The Nature Of Fact

Before he sat down with the best tennis player on the planet for a noonday interview in the middle of the 2006 Wimbledon fortnight, David Foster Wallace prepared a script. Atop a notebook page he wrote, “R.Federer Interview Qs.” and below he jotted in very fine print 13 questions. After three innocuous ice breakers, Wallace turned his attention to perhaps the most prominent theme in all his writing: consciousness. Acknowledging the abnormal interview approach, Wallace prefaced these next nine inquiries with a printed subhead: “Non-Journalist Questions.” Each interrogation is a paragraph long, filled with digressions, asides, and qualifications; several contain superscripted addendums. In short, they read like they’re written by David Foster Wallace. He asks Roger Federer if he’s aware of his own greatness, aware of the unceasing media microscope he operates under, aware of his uncommon elevation of athletics to the level of aesthetics, aware of how great his great shots really are. Wallace even wrote, “How aware are you of the ballboys?” before crossing the question out.

“I’m not a journalist—I’m more like a novelist with a tennis background.” Wallace had a history of anti-credentialing himself both in person and in print, and while this reportorial and rhetorical maneuver may have disarmed sources it also created a calculus for Wallace to write under.[i] He saw clear lines between journalists and novelists who write nonfiction, and he wrestled throughout his career with whether a different set of rules applied to the latter category.[ii]

4. Dear Kids

The idea that there is anything especially bad about 2014 is temporal narcissism. We just live in an age of countless opinions. We are just starting to get used to it, this idea that we can document everything. We can document it but we can’t begin to interpret or understand it.

You are two sweet, small people with oval faces. How do I prepare you for what’s coming? This week: An angry, mentally unstable man shot two policemen in their cars in a kind of retaliation for the strangulation of a man by police many months before. Some people blame the Mayor, who worries that his black son will be injured by policemen. We’ll put cameras on cops now. That feels like it will fix everything but it will probably just introduce a new class of ambiguities.

And next week: something else.

I’m worried about those things but more worried about getting you out of bed and dressed in the morning. I’m worried about looking out the window one day and seeing a column of fire but more worried about teaching you to be sad when I could be teaching you to be happy. I’m worried about the college teacher writing for the New York Times who also works as a waiter. I want you to have careers and cats; I want you to have apartments without roommates in your thirties.

A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing.
– George Bernard Shaw
