Time to re-boot the newsletter. I’ve been meaning to start organizing it again for months but haven’t made the time. Not sure what interrupted the flow, especially when I had a lot of encouragement from friends and family. One friend in particular used to encourage me on this: to publish (or re-publish) the things that interested me, and to write even when it felt like an echo into the void. Miss you dude…
In town to give a lecture, the Harvard grad and Rhodes scholar speaks quickly, his voice rising a few pitches in tone, his long-fingered hands making sudden jerks when he’s excited. He’s skinny, with a long face, scraggly beard and carelessly groomed mop of sandy brown hair: what you might expect from a theoretical physicist. But then there’s the street-style Adidas on his feet and the kippah atop his head. And the fact that this scientist also talks a lot about God.
The 101 version of his big idea is this: Under the right conditions, a random group of atoms will self-organize, unbidden, to more effectively use energy. Over time and with just the right amount of, say, sunlight, a cluster of atoms could come remarkably close to what we call life. In fact, here’s a thought: Some things we consider inanimate actually may already be “alive.” It all depends on how we define life, something England’s work might prompt us to reconsider. “People think of the origin of life as being a rare process,” says Vijay Pande, a Stanford chemistry professor. “Jeremy’s proposal makes life a consequence of physical laws, not something random.”
England’s idea may sound strange, even incredible, but it’s drawn the attention of an impressive posse of high-level academics. After all, while Darwinism may explain evolution and the complex world we live in today, it doesn’t account for the onset of intelligent beings. England’s insistence on probing for the step that preceded all of our current assumptions about life is what makes him stand out, says Carl Franck, a Cornell physics professor, who’s been following England’s work closely. “Every 30 years or so we experience these gigantic steps forward,” Franck says. “We’re due for one. And this might be it.”
And all from a modern Orthodox Jew with fancy sneakers.
While there is no international law that directly refers to the ultra-modern concept of cyber warfare, there is plenty that applies. So CDCOE assembled a panel of international legal experts to go through this existing law and show how it applies to cyber warfare. This formed the basis of the Tallinn Manual and the 95 so-called ‘black letter rules’ it contains (so named because that’s how they appear in the text).
Through these rules the manual attempts to define some of the basics of cyber warfare. At the most fundamental level, the rules state that an online attack on a state can, in certain circumstances, be the equivalent of an armed attack. It also lays out that such an attack is against international law, and that a state attacked in such a way has the right to hit back.
Other rules the manual spells out: don’t target civilians or launch indiscriminate attacks that could cripple civilian infrastructure. While many of these sorts of rules are well understood when it comes to standard warfare, setting them out in the context of digital warfare was groundbreaking.
While the manual argues that a cyber attack can be considered the equivalent of an armed attack if it causes physical harm to people or property, other attacks can also be considered a use of force depending on their severity or impact. For example, breaking into a military system would be more likely to be seen as a serious attack than hacking into a small business would. In contrast, cyber attacks that generate “mere inconvenience or irritation” would never be considered a use of force.
The manual also delves into some of the trickier questions of cyber war: would Country A be justified in launching a pre-emptive military strike against Country B if it knew Country B planned to blow up Country A’s main oil pipeline by hacking the microcontrollers managing its pipeline pressure? (Answer: probably yes.)
The manual even considers the legality of some scenarios verging on the science-fictional.
If an army hacked into and took control of enemy drones, would those drones have to be grounded and marked with the capturers’ insignia before being allowed to carry out reconnaissance flights? (Answer: maybe.)
But what’s striking is that the Tallinn Manual sets the rules for a war that hasn’t been fought yet.
Minecraft is an incredibly complex game, but it’s also, at first, inscrutable. When you begin, no pop-ups explain what to do; there isn’t even a “help” section. You just have to figure things out yourself. (The exceptions are the Xbox and PlayStation versions, which in December added tutorials.) This unwelcoming air contrasts with most large games these days, which tend to come with elaborate training sessions on how to move, how to aim, how to shoot. In Minecraft, nothing explains that skeletons will kill you, or that if you dig deep enough you might hit lava (which will also kill you), or even that you can craft a pickax.
This “you’re on your own” ethos resulted from early financial limitations: Working alone, Persson had no budget to design tutorials. That omission turned out to be an inadvertent stroke of genius, however, because it engendered a significant feature of Minecraft culture, which is that new players have to learn how to play. Minecraft, as the novelist and technology writer Robin Sloan has observed, is “a game about secret knowledge.” So like many modern mysteries, it has inspired extensive information-sharing. Players excitedly pass along tips or strategies at school. They post their discoveries in forums and detail them on wikis. (The biggest one, hosted at the site Gamepedia, has nearly 5,000 articles; its entry on Minecraft’s “horses,” for instance, is about 3,600 words long.) Around 2011, publishers began issuing handbooks and strategy guides for the game, which became runaway best sellers; one book on redstone has outsold literary hits like “The Goldfinch,” by Donna Tartt.
“In Minecraft, knowledge becomes social currency,” says Michael Dezuanni, an associate professor of digital media at Queensland University of Technology in Australia. Dezuanni has studied how middle-school girls play the game, watching as they engaged in nuanced, Talmudic breakdowns of a particular creation. This is, he realized, a significant part of the game’s draw: It offers many opportunities to display expertise, when you uncover a new technique or strategy and share it with peers.
The single biggest tool for learning Minecraft lore is YouTube. The site now has more than 70 million Minecraft videos, many of which are explicitly tutorial. To make a video, players use “screencasting” software (some of which is free, some not) that records what’s happening on-screen while they play; they usually narrate their activity in voice-over. The problems and challenges you face in Minecraft are, as they tend to be in construction or architecture, visual and three-dimensional. This means, as many players told me, that video demonstrations have a particularly powerful explanatory force: It’s easiest to learn something by seeing someone else do it. In this sense, the game points to the increasing role of video as a rhetorical tool. (“Minecraft” is the second-most-searched-for term on YouTube, after “music.”)
Consider Erica, a full-time college student. The first thing she does when she wakes up in the morning is reach for her smartphone. She checks texts that came in while she slept. Then she scans Facebook, Snapchat, Tumblr, Instagram, and Twitter to see “what everybody else is doing.” At breakfast, she opens her laptop and goes to Spotify and her various email accounts. Once she gets to campus, Erica confronts more screen time: PowerPoints and online assignments, academic content to which she dutifully attends (she’s an A student). Throughout the day, she checks in with social media roughly every 10 minutes, even during class. “It’s a little overwhelming,” she says, “but you don’t want to feel left out.”
We’ve been worried about this type of situation for thousands of years. Socrates, for one, fretted that the written word would compromise our ability to retell stories. Such a radical shift in communication, he argued in Phaedrus, would favor cheap symbols over actual memories, ease of conveyance over inner depth. Philosophers have pondered the effect of information technology on human identity ever since. But perhaps the most trenchant modern expression of Socrates’ nascent technophobia comes from the 20th-century German philosopher Martin Heidegger, whose essays on the subject, notably “The Question Concerning Technology” (1954), established a framework for scrutinizing our present situation.
Heidegger’s take on technology was dire. He believed that it constricted our view of the world by reducing all experience to the raw material of its operation. To prevent “an oblivion of being,” Heidegger urged us to seek solace in nontechnological space. He never offered prescriptive examples of exactly how to do this, but as the scholar Howard Eiland explains, it required seeing the commonplace as alien, or finding “an essential strangeness in … familiarity.” Easier said than done. Hindering the effort in Heidegger’s time was the fact that technology was already, as the contemporary political philosopher Mark Blitz puts it, “an event to which we belong.” In this view, one that certainly befits today’s digital communication, technology infuses real-world experience the way water mixes with water, making it nearly impossible to separate the human and technological perspectives, to find weirdness in the familiar. Such a blending means that, according to Blitz, technology’s domination “makes us forget our understanding of ourselves.”
The only hope for preserving a non-technological haven (and it was and remains a distant hope) was to cultivate what Heidegger called “nearness.” Nearness is a mental island on which we can stand and affirm that the phenomena we experience both embody and transcend technology. Consider it a privileged ontological stance, a way of knowing the world through a special kind of wisdom or point of view. Heidegger’s implicit hope was that the human ability to draw a distinction between technological and nontechnological perception would release us from “the stultified compulsion to push on blindly with technology.”
If you were forwarded this newsletter and enjoyed it, please subscribe here: https://tinyletter.com/peopleinpassing
I hope that you’ll readÂ these articles if they catch your eye and that you’ll learn as much as I did. PleaseÂ email me questions, feedback or raise issues for discussion. Better yet, if you know of something on aÂ related topic, or of interest, please pass it along. And as always, if one of these links comes to mean something to you, recommend it to someone else.