Wednesday, January 17, 2018

Bundles of Joy

On December’s survey, I asked readers who had children whether they were happy with that decision. Here are the results, from 1 (very unhappy) to 5 (very happy):


The mean was 4.43, and the median 5. People are really happy to have kids!

This was equally true regardless of gender. The male average (4.43, n = 1768) and female average (4.49, n = 177) were indistinguishable.

To double-check this, I compared the self-reported life satisfaction of people with and without kids. People with kids were much more satisfied – but also did much better on lots of other variables like financial situation, romantic satisfaction, etc. So probably at least some of the effect was because people with kids tend to be older people in stable relationships who have their life more figured out, and maybe also more religious.

In order to compare apples to apples, I limited the comparison to married atheist men 25 or older. There was no longer a consistent trend for people with at least one child to be more satisfied. But there was a trend for increasing satisfaction with increasing number of children:

NUMBER OF CHILDREN : AVERAGE LIFE SATISFACTION ON 1-10 SCALE (total n = 1491):
0: 7.06
1: 7.09
2: 7.24
3: 7.31
4+: 7.43

This doesn’t make a lot of sense to me, since I would expect the biggest life change to be going from zero children to one child. Probably some residual confounders remain in the analysis – and commenter “meh” points out that people who are happiest with their existing children will be most likely to have more. But at the very least, people with children don’t seem to be less happy.
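As a sketch, the apples-to-apples comparison above (filter to married atheist men 25 or older, then average life satisfaction by number of children) might look like this in code. The rows and field names here are invented for illustration; this is not the actual survey data or analysis script:

```python
from statistics import mean

# Invented example rows; field names and values are illustrative only.
rows = [
    {"children": 0, "satisfaction": 7, "married": True, "atheist": True, "male": True, "age": 30},
    {"children": 2, "satisfaction": 8, "married": True, "atheist": True, "male": True, "age": 35},
    {"children": 0, "satisfaction": 6, "married": False, "atheist": True, "male": True, "age": 28},
    {"children": 1, "satisfaction": 7, "married": True, "atheist": True, "male": True, "age": 40},
]

def satisfaction_by_children(rows):
    """Limit to married atheist men 25+, then average satisfaction per child count."""
    subset = [r for r in rows
              if r["married"] and r["atheist"] and r["male"] and r["age"] >= 25]
    groups = {}
    for r in subset:
        groups.setdefault(r["children"], []).append(r["satisfaction"])
    return {k: mean(v) for k, v in sorted(groups.items())}

print(satisfaction_by_children(rows))  # {0: 7, 1: 7, 2: 8}
```

Note that the same confound "meh" identifies would survive this filtering: subsetting controls for marriage, religion, sex, and age, but not for selection into having more children.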

These results broadly match existing research, which usually finds that parents report being very happy to have children, but that this is not reflected in life satisfaction numbers. The main difference is that existing research usually claims parents have lower life satisfaction than non-parents. But this is different in different countries, either for cultural or for policy reasons. The survey respondents form a culturally unusual group and are of a higher socioeconomic status; they may be more similar to countries like Norway (where parents are happier) than to countries like the United States (where they are less happy).

(also, we should at least consider the Caplanian perspective that people more informed about genetics will be happier parents, since they’ll be less neurotic about the effect of their parenting styles.)

The View From Hell blog argues that the discrepancy between the direct question (“Are you happy to have kids?”) and the indirect one (“How happy are you?”, compared across parents vs. childless people) is pure self-deception; children suck, but parents refuse to admit it. I haven’t looked in depth at the study they cite, which purports to show that the more you prime parents with descriptions of the burdens of parenthood, the more great they insist everything is. But I wonder about the philosophical foundations we should be using here. There’s happiness, and there’s happiness: I am happy to be giving money to charity and making the world a better place, but I don’t think my self-reported life satisfaction would be noticeably higher after a big donation. It might even be lower if it cut into my luxury consumption. The wanting/liking/approving trichotomy may also be relevant.

People were happier with their decision to have children if they were (all results are binomial correlations and highly significant even after correction): more gender-conforming (0.14), had fewer thoughts about maybe being transgender (0.20), were more right-wing (0.10), considered themselves more moral people (0.15), were less autistic (0.12), were less extraverted (0.10), were more emotionally stable (0.15), and were more agreeable (0.13). All of these effects were very small compared to the generally high level of happiness at having children, no matter who you were and what your personality was like.
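"Binomial correlation" here presumably means a point-biserial correlation, which is numerically just a Pearson correlation where one of the two variables is binary. A minimal sketch, with invented data standing in for the survey responses:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson r; with one binary variable this is the point-biserial correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: a binary trait flag vs. a 1-5 happiness-with-kids rating.
trait = [1, 1, 0, 0, 1, 0]
happy = [5, 4, 4, 3, 5, 4]
print(round(pearson(trait, happy), 2))  # 0.73
```

The reported effects (0.10 to 0.20) are far smaller than this toy example, which is consistent with the point that personality shifts the already-high baseline only slightly.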

I included this survey question because I’m considering whether or not to have kids. Even though the survey only reinforced the (confusing) results of past research, I still find it helpful. After all, a lot of the survey-takers here are pretty skeptical of other aspects of traditional lifestyles: monogamy, gender norms, religion, etc. It’s impressive how strongly approval of parenting survives even in this weird a population; I consider this a new and exciting fact beyond the ones established by previous studies.

by Scott Alexander, Slate Star Codex |  Read more:
Image: SSC
[ed. I asked a friend: if you could do it over again, would you still marry the same person? The answer (ambiguously enough) was: yes, if it meant having the children they have now.]

How to Fix Facebook—Before It Fixes Us

Facebook and Google are the most powerful companies in the global economy. Part of their appeal to shareholders is that their gigantic advertising businesses operate with almost no human intervention. Algorithms can be beautiful in mathematical terms, but they are only as good as the people who create them. In the case of Facebook and Google, the algorithms have flaws that are increasingly obvious and dangerous.

Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions. No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil. No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.


Facebook and Google are now so large that traditional tools of regulation may no longer be effective. The European Union challenged Google’s shopping price comparison engine on antitrust grounds, citing unfair use of Google’s search and AdWords data. The harm was clear: most of Google’s European competitors in the category suffered crippling losses. The most successful survivor lost 80 percent of its market share in one year. The EU won a record $2.7 billion judgment—which Google is appealing. Google investors shrugged at the judgment, and, as far as I can tell, the company has not altered its behavior. The largest antitrust fine in EU history bounced off Google like a spitball off a battleship.

It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. This is precisely what happened in the United States during the 2016 election. We had constructed a modern Maginot Line—half the world’s defense spending and cyber-hardened financial centers, all built to ward off attacks from abroad—never imagining that an enemy could infect the minds of our citizens through inventions of our own making, at minimal cost. Not only was the attack an overwhelming success, but it was also a persistent one, as the political party that benefited refuses to acknowledge reality. The attacks continue every day, posing an existential threat to our democratic processes and independence.

We still don’t know the exact degree of collusion between the Russians and the Trump campaign. But the debate over collusion, while important, risks missing what should be an obvious point: Facebook, Google, Twitter, and other platforms were manipulated by the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless major changes are made, they will be manipulated again. Next time, there is no telling who the manipulators will be.

Awareness of the role of Facebook, Google, and others in Russia’s interference in the 2016 election has increased dramatically in recent months, thanks in large part to congressional hearings on October 31 and November 1. This has led to calls for regulation, starting with the introduction of the Honest Ads Act, sponsored by Senators Mark Warner, Amy Klobuchar, and John McCain, which attempts to extend current regulation of political ads on networks to online platforms. Facebook and Google responded by reiterating their opposition to government regulation, insisting that it would kill innovation and hurt the country’s global competitiveness, and that self-regulation would produce better results.

But we’ve seen where self-regulation leads, and it isn’t pretty. Unfortunately, there is no regulatory silver bullet. The scope of the problem requires a multi-pronged approach.

First, we must address the resistance to facts created by filter bubbles. Polls suggest that about a third of Americans believe that Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping those people accept the truth is a priority. I recommend that Facebook, Google, Twitter, and others be required to contact each person touched by Russian content with a personal message that says, “You, and we, were manipulated by the Russians. This really happened, and here is the evidence.” The message would include every Russian message the user received.

This idea, which originated with my colleague Tristan Harris, is based on experience with cults. When you want to deprogram a cult member, it is really important that the call to action come from another member of the cult, ideally the leader. The platforms will claim this is too onerous. Facebook has indicated that up to 126 million Americans were touched by the Russian manipulation on its core platform and another twenty million on Instagram, which it owns. Together those numbers exceed the 137 million Americans who voted in 2016. What Facebook has offered is a portal buried within its Help Center where curious users will be able to find out if they were touched by Russian manipulation through a handful of Facebook groups created by a single troll farm. This falls far short of what is necessary to prevent manipulation in 2018 and beyond. There’s no doubt that the platforms have the technological capacity to reach out to every affected person. No matter the cost, platform companies must absorb it as the price for their carelessness in allowing the manipulation.

Second, the chief executive officers of Facebook, Google, Twitter, and others—not just their lawyers—must testify before congressional committees in open session. As Senator John Kennedy, a Louisiana Republican, demonstrated in the October 31 Senate Judiciary hearing, the general counsel of Facebook in particular did not provide satisfactory answers. This is important not just for the public, but also for another crucial constituency: the employees who keep the tech giants running. While many of the folks who run Silicon Valley are extreme libertarians, the people who work there tend to be idealists. They want to believe what they’re doing is good. Forcing tech CEOs like Mark Zuckerberg to justify the unjustifiable, in public—without the shield of spokespeople or PR spin—would go a long way to puncturing their carefully preserved cults of personality in the eyes of their employees.

These two remedies would only be a first step, of course. We also need regulatory fixes. Here are a few ideas.

First, it’s essential to ban digital bots that impersonate humans. They distort the “public square” in a way that was never possible in history, no matter how many anonymous leaflets you printed. At a minimum, the law could require explicit labeling of all bots, the ability for users to block them, and liability on the part of platform vendors for the harm bots cause.

Second, the platforms should not be allowed to make any acquisitions until they have addressed the damage caused to date, taken steps to prevent harm in the future, and demonstrated that such acquisitions will not result in diminished competition. An underappreciated aspect of the platforms’ growth is their pattern of gobbling up smaller firms—in Facebook’s case, that includes Instagram and WhatsApp; in Google’s, it includes YouTube, Google Maps, AdSense, and many others—and using them to extend their monopoly power.

This is important, because the internet has lost something very valuable. The early internet was designed to be decentralized. It treated all content and all content owners equally. That equality had value in society, as it kept the playing field level and encouraged new entrants. But decentralization had a cost: no one had an incentive to make internet tools easy to use. Frustrated by those tools, users embraced easy-to-use alternatives from Facebook and Google. This allowed the platforms to centralize the internet, inserting themselves between users and content, effectively imposing a tax on both sides. This is a great business model for Facebook and Google—and convenient in the short term for customers—but we are drowning in evidence that there are costs that society may not be able to afford.

by Roger McNamee, Washington Monthly | Read more:
Image: Chris Matthews 

[ed. If ever there were a flashing red signal... See also: When Speculation Has No Limits and Beware the $500 Billion Bond Exodus.]

Tuesday, January 16, 2018

Mother Gaia


Repost

Google's Memory Loss

I think Google has stopped indexing the older parts of the Web. I think I can prove it. Google’s competition is doing better.

Evidence · This isn’t just a proof, it’s a rock-n-roll proof. Back in 2006, I published a review of Lou Reed’s Rock n Roll Animal album. Back in 2008, Brent Simmons published That New Sound, about The Clash’s London Calling. Here’s a challenge: Can you find either of these with Google? Even if you read them first and can carefully conjure up exact-match strings, and then use the “site:” prefix? I can’t.

[Update: Now you can, because this piece went a little viral. But you sure couldn’t earlier in the day.]

Why? · Obviously, indexing the whole Web is crushingly expensive, and getting more so every day. Things like 10+-year-old music reviews that are never updated, no longer accept comments, are lightly if at all linked-to outside their own site, and rarely if ever visited… well, let’s face it, Google’s not going to be selling many ads next to search results that turn them up. So from a business point of view, it’s hard to make a case for Google indexing everything, no matter how old and how obscure.

My pain here is purely personal; I freely confess that I’d been using Google’s global infrastructure as my own personal search index for my own personal publications. But the pain is real; I frequently mine my own history to re-use, for example in constructing the current #SongOfTheDay series.

Competition · Bing can find it! DuckDuckGo can too! Both of them can find Brent’s London Calling piece, too.

What Google cares about · It cares about giving you great answers to the questions that matter to you right now. And I find that if I type in a question, even something complicated and obscure, Google often surprises me with a timely, accurate answer. They’ve never claimed to index every word on every page.

My mental model of the Web is as a permanent, long-lived store of humanity’s intellectual heritage. For this to be useful, it needs to be indexed, just like a library. Google apparently doesn’t share that view.

What I’m going to do · When I have a question I want answered, I’ll probably still go to Google. When I want to find a specific Web page and I think I know some of the words it contains, I won’t any more, I’ll pick Bing or DuckDuckGo.

by Tim Bray, Ongoing |  Read more:
Image: Home Depot

The Star Chef Hiding in Plain Sight on Lānaʻi

Lānaʻi City is not a city at all but a tiny village on the least populated of Hawai‘i’s publicly accessible islands. Home to nearly all of Lānaʻi’s 3,200 inhabitants, it’s centered around Dole Park, a town square lined with low-slung, plantation-style buildings that date to the early 1900s, when the island was transformed into the pineapple capital of the world.

There are no stoplights and just a handful of businesses, which include a market that sells great poke, a Korean café with a good burger, an art gallery, and a movie theater that was recently modernized by Larry Ellison, the tech billionaire who bought 97 percent of the island in 2012 for a reported $300 million. Ellison also owns roughly a third of the buildings in Lānaʻi City, including the Hotel Lānaʻi, a two-story white clapboard structure. The 11-room hotel, which has Wi-Fi but no TVs, also houses the Lānaʻi City Bar & Grille.

The restaurant, which takes up most of the first floor, was remodeled last year in a cool, contemporary color palette with modernist black booths — a radical change from the old-timey charm of its original decor. In November, Jimi Lasquete, a chef who trained with Alice Waters early in his career and under Michael Mina at San Francisco’s Aqua, took over; the word around town is that he has transformed a restaurant serving passably good fare, which had consisted of simple dishes like roasted chicken, to one serving top-flight cuisine.

On a summer evening, I settle in and order the pohole fern namasu, a twist on the traditional cucumber and carrot salad that uses local fiddleheads and is dressed with a sesame vinaigrette. The next dish, blistered green beans from a farm on Maui, is served with an equally memorable Hong Kong-style black bean sauce. By the time I sample the vibrant gochujang dipping sauce for the popcorn shrimp and house-made chicharrones, I’m gobsmacked — how often does one experience such artful cooking in a relatively remote setting?

In the course of the meal, Lasquete emerges from the kitchen for his nightly round of chatting up the diners, a practice he took up while cooking at the 12-table Evans American Gourmet Café, a fine-dining restaurant in Lake Tahoe. He points to the sauces I’m oohing over and credits his Filipino father, a former Navy chef, for helping develop his palate; he goes on to praise the Korean home cooks in his hometown of Newark, California, from whom he learned the painstaking process of making kimchi, an essential element in the pan-Asian cuisine he’s undertaking here.

The day that Lasquete first set foot on Lānaʻi, during a visit with his girlfriend in the summer of 2016, he instantly fell in love with the place. “Here was this amazing peace and quiet,” he says. “And the people I met on the ferry, who were coming home after shopping on Maui, they were so gracious, so reflective of the small-town atmosphere. It reminded me of my upbringing, of my cousins and my family.” (...)

Lānaʻi, like the island of Ni‘ihau, has a long history of private ownership. For centuries, Hawaiians, including those on Lānaʻi, had divided their land into tracts that could be used by an individual but not owned. In 1862, Walter Murray Gibson, an erstwhile entrepreneur and newly anointed Mormon leader, arrived on Lānaʻi and began buying plots of land with church money. (He was eventually excommunicated for embezzling church funds, among other crimes.) By the time he died in 1888, Gibson had purchased most of the island, which he passed on to his heirs.

After attempts at sugarcane cultivation and sheep ranching, Lānaʻi was sold in 1922 to James Drummond Dole, who was two decades into building his fruit empire. He covered the island with pineapples and built Lānaʻi City for his employees, along with the two-story building as a guesthouse for his friends. The island was owned by the Doles until 1985, when billionaire David Murdock acquired Dole parent company Castle & Cooke and, with it, the island of Lānaʻi. Out went the pineapples, which still irks some islanders, and in went two resorts, one near Lānaʻi City at Kō‘ele and the other at Mānele Bay.

At a public meeting a year after Ellison purchased the island, the billionaire’s representatives explained that his vision was for Lānaʻi to be the world’s “first economically viable, 100 percent green community.” There are plans for a desalination plant, still on the drawing board, and for restoration of the ancient fishponds, which is in the works. The Four Seasons Resort Lānaʻi at Manele Bay recently underwent a $450 million upgrade, and the Four Seasons Resort, The Lodge at Koele has been in the midst of a massive remodel for the last two years. Pūlama Lānaʻi, Ellison’s island management company, also purchased a state-of-the-art, USDA-approved butchering trailer so that meat from the roughly 30,000 axis deer living on the island — an invasive species that first arrived in the 1920s — could be used in the restaurants. Each week, Pūlama Lānaʻi’s game management division delivers a pair of deer to Lasquete, and he uses every part of the animal, including in his bolognese sauce, while venison loin with a black cherry cabernet demi-glace has become one of his signature entrees.

by Ann Herold, Eater |  Read more:
Image: Lanai City Bar & Grille
[ed. See also: The Eater Guide to Hawaii]

Public Transportation is Cool


New Adidas Trainers Double as Berlin Transit Passes

The shoes, which feature the same camouflage pattern used on the city’s train seats, double up as an annual transit pass. It’s embedded in the tongues of the trainers, which are styled as a fabric version of the BVG annual ticket, and can be used just like a regular ticket covering the bus, tram and underground in zones A and B. While the cheapest annual ticket available from the BVG is currently €728, the shoes cost just €180.

Image: uncredited
[ed. Pretty smart, and a great travel discount too.]

Monday, January 15, 2018

The TL;DR Guide to Michael Wolff's 'Fire and Fury'

A quick note about Michael Wolff's Fire and Fury, which upon a second pass still has, to put it mildly, some serious issues: As any art historian can pick out a forgery, veteran journalists reading this book will quickly spot an oversold narrative and perhaps unprecedented sourcing issues.

The tortured "Author's Note" preceding the prologue almost reads like a novel in itself. In fact, trying to follow Wolff's idea of what "off the record" means or does not mean is like trying to follow the hands of a three-card monte dealer. It just can't be done.

As a White House source put it, Wolff's narrative personality is almost like a comedy act in itself:

"He's like the old Jon Lovitz character from Saturday Night Live," the source said. "You know – 'Yeah, I went to Harvard, that's the ticket. And, yeah, I was on the couch in the West Wing for months, that's the ticket.'"

Fire and Fury is really two books rolled into one. The first is a compelling nonfiction book about the intellectual divide in the modern right, as candidly hashed out to Wolff by influential figures like Steve Bannon and Roger Ailes and (seemingly?) Rupert Murdoch.

The second is a Primary Colors-style novel about what goes on behind various closed doors in the Trump White House, based on a few bits and pieces of fact, which are offset by mountains of eye-rollingly insupportable supposition, spiced with occasional stretches of believable analysis.

There is considerable debate in the media world, on both the left and the right, about the value of this book (even I've gone back and forth on it). In the end, I think it's like a piece of moldy rye bread – you have to cut around the hairily sourced parts to keep from getting poisoned. But on a broad level, there is something to dig into.

Reading the book, there are at least a few real points about Trump that shine through:

1) Trump has almost no ideological convictions and is motivated almost entirely by the classic narcissistic value equation, i.e. how much praise or scorn he gets on a second-to-second basis, from whom, and why. Had he not run as a Republican – and in particular won on a platform scripted by a nationalist true believer like Bannon – he might very well by now have been pushed into a completely different kind of presidency. Trump wants so badly to be liked that, especially with the influence of Kushner and Ivanka, he might easily have allowed his White House to drift back toward his original politics, which (as New Yorkers and furious conservatives alike will clearly remember) was once squarely in the Bob Rubin rich-guy sort-of Democrat mold.

2) However, as Bannon points out in the book – correctly – Trump by now is so firmly entrenched in the consciousness of America's intellectual elite as a villain that he will never be accepted by that crowd. The constant battering Trump gets from the press, especially, ensures that he will continue to lash out at them, forcing him continually to tack back to the only people who still like him – Bannon's angry-man followers. This despite the fact that what Trump clearly craves is, instead, the approval of members of his own class.

3) The result is an insane paradox of an America led by a doomed and trapped psyche. This is a president who in another era might have been confined to the impact of an ordinary bad commander-in-chief (we've had many), i.e., sedated and/or scripted in public, and kept on the golf course the rest of the time while the empire runs on the dreary autopilot of donors, P.R. flacks and military advisers.

Instead, we get a leader whose most dangerous moments come during his ever-expanding calendar of hyper-tweeting downtime (incidentally, is anything more certain than the term "executive time" replacing "taking my talents to South Beach" as this generation's euphemism for masturbation?). All those crazed Trump tweets guarantee an endless cycle of paranoia and rebuke – and a permanently paralyzed White House.

Anyway, it's a fascinating book. But too long for most people in the Internet age to actually read. So without further ado, here's shorter Michael Wolff, in chapter form:

a) The Author's Note:

See if you can make sense of this passage:

"Many of the accounts of what has happened in the Trump White House are in conflict with one another; many, in Trumpian fashion, are baldly untrue. Those conflicts, and that looseness with the truth, if not with reality itself, are an elemental thread of the book. Sometimes I have let the players offer their versions, in turn allowing the reader to judge them. In other instances I have, through a consistency in accounts and through sources I have come to trust, settled on a version of events I believe to be true."

In other words: The unattributed facts you're about to read are sometimes my best guess as to the truth, and sometimes someone else's more dubious version, and you won't know which is which, but – whatever, enjoy!

b) Prologue: Ailes and Bannon

This is the most interesting part of the book, and not just because Wolff has the stones to use the word "louche" in a sentence early on (there's an "I went to college, honest" word choice about once every four pages in Fire and Fury). This passage alone sums up 30 years of the history of right-wing thinking:

"Ailes was convinced that Trump had no political beliefs or backbone. The fact that Trump had become the ultimate avatar of Fox's angry common man was another sign that we were living in an upside-down world. The joke was on somebody – and Ailes thought it might be on him."

This is the main theme of the book: That both the Republican establishment (as represented by the likes of Ailes and Murdoch) and the alt-right revolution (as represented by Bannon) think Trump is a fumbled football they can pick up and run into the end zone of power.

In the end, of course, the joke is on everyone, as Trump's brain fumbles hopelessly out of bounds and neither side successfully appropriates his presidency, which becomes an endlessly circular, purposeless, narcissistic tweet-storm.

1. ELECTION DAY


Wolff becomes roughly the 40,000th writer to compare Trump's campaign to The Producers. In classic Hollywood formula-script fashion, the Trump campaign is presented as composed of characters that each have their own desperate motivation to lose, only to each be crushed in their own way by the shocker result.

This chapter reads a lot like Shattered, the acid catalogue of finger-pointing that took place among high-ranking Clinton campaign figures after Hillary's loss, except here it's backwards. In this case, the characters start to blame each other for somehow transforming what Steve Bannon called a surefire "broke dick" loser campaign into a winner.

The only person who truly believed from the start that Trump would win is Melania, who had learned to expect, with religious certainty, that her husband would deliver upon the worst-case scenario in every situation. She was right.

by Matt Taibbi, Rolling Stone |  Read more:
Image: Carolyn Kaster/AP

Beware the Lessons of Growing Up Galapagos

I'm wary of all conclusions drawn about media in the scarcity age, including the idea that people went to see movies because of movie stars. It's not that Will Smith isn't charismatic. He is. But I suspect Will Smith was in a lot of hits in the age of scarcity in large part because there weren't a lot of other entertainment options vying for people's attention when Independence Day or something of its ilk came out, like clockwork, to launch the summer blockbuster season.

The same goes for the general idea that any one star was ever the chief engine for a film's box office. If the idea that people go see a movie just to see any one star was never actually true, we can stop holding the modern generation of movie stars to an impossible standard.

The same mistake, I think, is being made about declining NFL ratings. Owners blame players kneeling for the national anthem, but here's my theory: in an age of infinite content, NFL games measure up poorly as entertainment, especially for a generation that grew up with smartphones and no cable TV and thus little exposure to American football. If I weren't in two fantasy football leagues with friends and coworkers, I would not have watched a single game this season, and that's a Leftovers-scale flash-forward twist for a kid who once recorded the Superbowl Shuffle to cassette tape off a local radio broadcast just to practice the lyrics.

If you disregard any historical romantic notions and examine the typical NFL football game, it is mostly dead time (if you watch a cut-down version of a game using Sunday Ticket, only about 30 minutes of a 3 to 3.5 hr game involves actual game action), with the majority of plays involving action of only incremental consequence, whose skill and strategy on display are opaque to most viewers and which are explained poorly by a bunch of middle-aged white men who know little about how to sell the romance of the game to a football neophyte. Several times each week, you might see a player hit so hard that they lie on the ground motionless, or with their hands quivering, foreshadowing a lifetime of pain, memory loss, and depression brought on by irreversible brain damage. If you tried to pitch that show concept just on its structural merits you'd be laughed out of the room in Hollywood.

Cultural products must regenerate themselves for each successive age and generation or risk becoming like opera or the symphony is today. I had season tickets to the LA Phil when I lived in Los Angeles, and I brought a friend to the season opener one year. A reporter actually stopped us as we walked out to interview us about why we were there, so mysterious it was to see two attendees who weren't old enough to have been contemporaries of the composer of the music that night (Mahler).

Yes, football has been around for decades, but most of those were in an age of entertainment scarcity. During that time the NFL capitalized on being the only game in town on Sundays, capturing an audience that passed on the game and its liturgies to their children. Football resembles a religion or any other cultural social network; humans being a tribal creature, we find products that satisfy that need, and what are professional sports leagues but an alliance of clans who band together for the network effects of ritual tribal warfare?

Because of its long incubation in an era of low entertainment competition, the NFL built up massive distribution power and enormous financial coffers. That it is a cultural product transmitted by one generation to the next through multiple channels means it's not entirely fair to analyze it independent of its history; cultural products have some path dependence.

Nevertheless, even if you grant it all its tailwinds, I don't trust a bunch of rich old white male owners who grew up in such favorable monopolistic conditions to both understand and adapt in time to rescue the NFL from continued decline in cultural relevance. They are like tortoises who grew up in the Galapagos Islands, shielded on all sides from predators by the ocean, who one day see the moat dry up, connecting them all of a sudden to other continents where an infinite variety of fast-moving predators dwell. I'm not sure the average NFL owner could unlock an iPhone X, let alone understand the way its product moves through modern cultural highways.

Other major sports leagues are in the same boat, though most aren't as oblivious as the NFL. The NBA has an open-minded commissioner in Adam Silver and some younger owners who made their money in technology and at least have one foot in modernity. As a sport, the NBA has some structural advantages over other sports (for example, it has fewer players, their faces are visible during games, and many are active on social media in an authentic way), and the league helps itself by allowing highlights of games to be clipped and shared on social media and by encouraging its players to cultivate public personas that act as additional narrative fodder for audiences.

I remember sitting in a meeting with some NFL representatives as they outlined a long list of their restrictions for how their televised games could be remixed and shared by fans on social media. Basically, they wanted almost none of it and would pursue take-downs through all the major social media companies.

Make no mistake, one possible successful strategy in this age of abundant media is to double down on scarcity. It's often the optimal strategy for extracting the maximum revenue from a motivated customer segment. Taylor Swift and other such unicorns can release their albums only on CD for a window, maximizing financial return from superfans before releasing them on streaming services, straight from the old media windowing playbook.

However, you'd better be damn sure your product is unique and compelling to dial up that tactic because the far greater risk in the age of abundance is that you put up walls around your content and set up a bouncer at the door and no one shows up because there are dozens of free clubs all over town with no cover charge. (...)

My other test of narrative value is a variant of the previous compression test. Can you enjoy something just as much by watching only a tiny fraction of the best moments? If so, the narrative is brittle. If you can watch just the last scene of a movie and get most or all of the pleasure of watching the whole thing, the narrative didn't earn your company for the journey.

Much more of sports fails this second test than many sports fans realize. I can watch highlights of most games on ESPN or HouseofHighlights on Instagram and extract most of the entertainment marrow and cultural capital of knowing what happened without having to sit through three hours of mostly commercials and dead time. That a game can be unbundled so easily into individual plays and retain most of its value to me might be seen as a good thing in the age of social media, but it's not ideal for the sports leagues if those excerpts are mostly viewed outside paywalls.

This is the bind for major sports leagues. On the one hand, you can try to keep all your content inside the paywall. On the other hand, doing so probably means you continue hemorrhaging cultural share. This is the eternal dilemma for all media companies in the age of infinite content.

by Eugene Wei, Remains of the Day |  Read more:
Image:Curtis Compton/Atlanta Journal-Constitution via AP

Hawaii and Human Error

The Cold War came to an end, somehow, without any of the world’s tens of thousands of nuclear warheads being fired. But there were decades-worth of close calls, high alerts, and simple mistakes that inched world leaders shockingly close to catastrophe.

Saturday’s terrifying, 38-minute episode in Hawaii will not go down as one of those close calls: Residents of the state waited for the bombs to fall after receiving text messages that a ballistic missile was on its way. FCC Chairman Ajit Pai on Sunday said “the government of Hawaii did not have reasonable safeguards or process controls in place to prevent the transmission of a false alert”—a case of human error, in other words.

But the episode did reveal the glaring deficiencies of an early-warning system that can easily misfire, along with some frightening truths about the speed at which policymakers and presidents must make decisions in the event that missiles really do fly. “Mistakes have happened and they will continue to happen,” the Arms Control Association’s Daryl Kimball told me. “But there is no fail safe against errors in judgment by human beings or the systems that provide early warning.”

As such, worries about miscalculation remain vivid. Vipin Narang, a political science professor at MIT focused on nuclear issues, tweeted one scenario on Saturday. “POTUS sees alert on his phone about an incoming toward Hawaii, pulls out the biscuit, turns to his military aide with the football and issues a valid and authentic order to launch nuclear weapons at North Korea. Think it can’t happen?”

The United States operates a series of radar and missile-defense systems across the Pacific. It includes satellites monitoring the Korean peninsula and fleets of American and Japanese warships equipped with the Aegis system, a powerful computing network that detects and tracks missile launches and aircraft. Those systems are tied to the U.S. Strategic Command’s Global Operations Center, buried deep underground in Nebraska, which monitors events around the world in real time and pumps that information to the Pentagon and the White House.

In the Hawaii incident, there was little danger of the United States firing off a nuclear response. Military officials knew within minutes of receiving the alert that there was no threat to U.S. territory; none of the Pentagon's or U.S. intelligence agencies' satellites, or the ground- and sea-based radars, detected any sign of missile launches from North Korea, government officials told me.

But with a president obsessed with cable news and Twitter, the erroneous alert could have easily triggered an angry or provocative tweet, which could have been interpreted by the North Koreans or Russians as an imminent threat. According to pool reports, Trump was briefed on the false alarm while at his private golf course in Florida. Hours later, he tweeted about Hillary Clinton’s “missing” emails and the performance of the stock market. He has yet to comment on the incident despite knowing within minutes that all was safe, even as horrified Hawaiians continued to expect the worst. (...)

While the United States has a series of sophisticated early-warning systems, potential adversaries do not, making initial statements from American officials critical in tense situations. "We have to be concerned about our adversaries' early-warning systems and their interpretation of these signals and messages," Kimball said.

Entering this complex array of political signaling, high-tech surveillance, and careless tweeting is the Pentagon's new Nuclear Posture Review, the first since 2010. The document was originally slated for release next month, but a draft that leaked this past week shows the Trump administration is lowering the bar for what would trigger an American nuclear response. It includes an entire section about non-nuclear strategic attacks that could spur an American nuclear response: cyber warfare, massive blows to critical infrastructure, and certain catastrophic attacks on civilians.

That is a "major expansion over Clinton, Bush and Obama," all of whom attempted to reduce the role of nuclear weapons, Jon Wolfsthal, a former Obama official who worked on nuclear issues, told me. The new strategy views nuclear weapons as "a Swiss Army knife that can be pulled out to solve a range of issues," he added. Among several new weapons the document proposes are so-called "low-yield nukes," which could be placed on existing Trident ballistic missiles launched from submarines, lowering the threshold for use by causing less fallout, limiting the impact zone, and causing fewer civilian casualties.

As one defense official involved in nuclear issues put it: “We are self-deterred because our nuclear weapons are too big, and would cause too much damage if used.” The new strategy paper, then, expands the types of scenarios under which the United States would choose the nuclear option, which in turn “could lead to a new round of testing of nuclear weapons,” the official said.

by Paul McCleary, The Atlantic |  Read more:
Image: Ben Jennings via The Guardian

Sunday, January 14, 2018

False Ballistic Missile Alert


[ed. I was in Honolulu when the alert went out. My initial reaction? Must be a virus. Nothing on the news, no jets screaming overhead, no sirens blaring, nothing. So I figured, just a spoof and clicked the phone off. Unfortunately, my brother was in Kona at the airport when the warning appeared. All flights were immediately cancelled and TSA operations shut down. We could only laugh afterward. When you think about it, where would you rather be in a real situation - standing in a line several hundred people deep waiting to get through TSA - or taking off in a jet? I'm actually surprised people responded as rationally as they did. No massive car pile-ups. No screaming in the streets. No looting or anything else (no strangling airport security). Just everyone seemingly taking it in stride, like... what can you do?

See also: Missile-Alert Error Reveals Uncertainty About How to React, and What It Felt Like in Hawaii When Warning of an In-Bound Missile Arrived.]

Saturday, January 13, 2018

Audiophilia Forever: An Expensive New Year’s Shopping Guide

Here are some of the most beautiful recorded musical sounds that I have heard in the past few weeks: the matched horns and clarinet, very soft, in Duke Ellington’s “Mood Indigo,” recorded in 1950; Buddy Holly, in his just-hatched-this-morning voice, singing “Everyday,” recorded in 1957; the London Symphony Orchestra in full cry under André Previn, playing Shostakovich’s tragic wartime Symphony No. 8, recorded in 1973; and Willie Watson’s rich-sounding guitar, accompanying him singing “Samson and Delilah,” recorded last year. The source of all these sounds was a vinyl long-playing record.

I tried to quit. I tried to give up audiophilia. You might even say I stopped my ears. That is, I listened to my O.K. high-end audio rig when I could find a few hours, ignoring its inadequacies. But, most of the time, I listened to CDs ripped into iTunes and then played on an iPod with a decent set of headphones. Hundreds of hours of music were inscribed there: Wagner’s “Parsifal” and John Coltrane’s “Blue Train” and the Beatles’ “Rubber Soul”—soul music, indeed! The glories of Western music, if you want to be grand about it, were at my fingertips, and I was mostly content. For years, I relinquished the enthralling, debilitating, purse-emptying habit of high-end audio, that feverish discontent, that adolescent ecstatic longing for more—a better record player, speakers with more bottom weight, a CD player that completely filtered out such digital artifacts as ringing tones, brittleness, and hardness.

Most people listen to music in the way that’s convenient for them; they ignore the high-end stuff, if they’ve even heard of it, as an expensive fetish. But audiophiles are restless; they always have some sort of dream system in their heads. They are ready, if they can afford it, to swap, trade, buy. It’s not enough, for some listeners, to have a good turntable, CD player, streaming box, pre-amplifier, amplifier, phono stage, speakers, and top-shelf wires connecting them all together. No, they also need a power conditioner—to purify the A.C. current. Does it matter, each separate thing? The cables, too? Is it all nonsense? The debates rage on, for those who are interested. At the moment, the hottest thing in audio is “high-resolution streaming”—the hope, half-realized, of getting extraordinary sound through the Internet.

We audiophiles want timbral accuracy. We want the complex strands of an orchestral piece disentangled, voice recordings that reveal chest tones and a clear top, pianos that sound neither tinkly nor dull, with the decay of each note sustained (not cut off, as it is in most digital recordings). We want all that, yet the sound of live music is ineffable. The goal can never be reached. The quest itself is the point. (...)

Yet there’s a serious problem with most of the streaming services: the sound is no more than adequate (exceptions to follow). And therein lies a tale—a tale, from the high-end audiophile’s point of view, of commercial opportunism, betrayal, and, well, audiophile-led redemption. A little potted audio history is now in order.

The first betrayal: in the sixties, Japanese solid-state equipment (Sony, Panasonic, Yamaha, etc.) emerged as a low-cost mass-market phenomenon, driving American quality audio, which had made analog, vacuum-tube equipment, deep underground. The big American names (like Marantz and McIntosh) stayed quietly in business while a variety of engineers and entrepreneurs who loved music started small companies in garages and toolsheds. It was (and is) a story of romantic capitalism—entrepreneurship at its most creative. Skip forward twenty years, to the second betrayal: in 1982, digital sound and the compact disk were proclaimed by publicists and a gullible press as “perfect sound forever.” But any music lover could have told you that early digital was often dreadful—hard, congealed, harsh, even razory, the strings sounding like plastic, the trumpets like sharp instruments going under your scalp. The early transfer of “Rubber Soul,” just to take one example, was unlistenable.

The small but flourishing high-end industry responded to digital in three different ways: it produced blistering critiques of digital sound in the musically and technically literate audiophile magazines The Absolute Sound and Stereophile; it developed CD players that worked to filter out some of the digital artifacts; and it produced dozens of turntables, in every price range, which kept good sound and the long-playing record alive. Years ago, many refused to believe in the LP, but, really, anyone with a decent setup could have proved this to you: a well-recorded LP was warmer, more natural, more musical than a compact disk.

The recording industry woke up, as well: Sony and Philips, which had developed the compact disc together, released, in 1999, a technology called D.S.D. (Direct Stream Digital) and embedded the results in Super Audio CDs—S.A.C.D. disks. Remember them? Some six thousand titles were produced, and the sound was definitely better than that of a standard CD. But the Super Audio CD was swamped by another marketing phenomenon—the creation of the iPod and similar devices, in 2001, which made vast libraries of music portable. So much for S.A.C.D.s—your music library was now in your hand! For me, the iPod was, for long periods, the default way of listening to music. God knows I have sinned. I knew that I wasn’t hearing anything like the best.

Which brings us to betrayal No. 3: music was streamed to iPods and laptops by squeezing data so that it would fit through the Internet pipes—the sound, in the jargon, was “lossy.” And that’s the sound—MP3 sound—that a generation of young people grew up with. The essentials of any kind of music came through, but nuance, the subtleties of shading and color, got slighted or lost. High-end types, both manufacturers and retailers, still lament this development with rage and tears. Availability was everything for the iPod generation. Well, yes, of course, says the high end, availability is a great boon. But most of the kids didn’t know that they were missing anything in the music.

Except for the few who did. A growing cohort of young music lovers has, in recent years, become attached to vinyl—demanding vinyl from their favorite groups as they issue new albums, flocking to new vinyl stores. For some, it may be about the sound. Or maybe it’s about backing away from corporate culture and salesmanship. Vinyl offers the joys of possessorship: if you go to a store, talk to other music lovers, and buy a record, you are committing to your taste, to your favorite group, to your friends. In New York, the independent-music scene, and the kinds of loyalties it creates, are central to vinyl. In any case, the young people buying vinyl have joined up with two sets of people who never really gave up on it: the scratchmaster d.j.s deploying vinyl on twin turntables, making music with their hands, and the audiophiles hoarding their LPs from decades ago. The audiophile reissue market has come blazingly to life:

by David Denby, New Yorker |  Read more:
Image: Janne Iivonen

Friday, January 12, 2018

How, and Why, the Spectre and Meltdown Patches Will Hurt Performance

As the industry continues to grapple with the Meltdown and Spectre attacks, operating system and browser developers in particular are continuing to develop and test schemes to protect against the problems. Simultaneously, microcode updates to alter processor behavior are also starting to ship.

Since news of these attacks first broke, it has been clear that resolving them is going to have some performance impact. Meltdown was presumed to have a substantial impact, at least for some workloads, but Spectre was more of an unknown due to its greater complexity. With patches and microcode now available (at least for some systems), that impact is now starting to become clearer. The situation is, as we should expect with these twin attacks, complex.

To recap: modern high-performance processors perform what is called speculative execution. They will make assumptions about which way branches in the code are taken and speculatively compute results accordingly. If they guess correctly, they win some extra performance; if they guess wrong, they throw away their speculatively calculated results. This is meant to be transparent to programs, but it turns out that this speculation slightly changes the state of the processor. These small changes can be measured, disclosing information about the data and instructions that were used speculatively.

With the Spectre attack, this information can be used to, for example, leak information within a browser (such as saved passwords or cookies) to a malicious JavaScript. With Meltdown, an attack that builds on the same principles, a malicious program can leak data from kernel memory.

Meltdown applies to Intel's x86 and Apple's ARM processors; it will also apply to ARM processors built on the new A75 design. Meltdown is fixed by changing how operating systems handle memory. Operating systems use structures called page tables to map between process or kernel memory and the underlying physical memory. Traditionally, the accessible memory given to each process is split in half; the bottom half, with a per-process page table, belongs to the process. The top half belongs to the kernel. This kernel half is shared between every process, using just one set of page table entries for every process. This design is both efficient—the processor has a special cache for page table entries—and convenient, as it makes communication between the kernel and process straightforward.

The fix for Meltdown is to split this shared address space. That way when user programs are running, the kernel half has an empty page table rather than the regular kernel page table. This makes it impossible for programs to speculatively use kernel addresses.

Spectre is believed to apply to every high-performance processor that has been sold for the last decade. Two versions have been shown. One version allows an attacker to "train" the processor's branch prediction machinery so that a victim process mispredicts and speculatively executes code of an attacker's choosing (with measurable side-effects); the other tricks the processor into making speculative accesses outside the bounds of an array. The array version operates within a single process; the branch prediction version allows a user process to "steer" the kernel's predicted branches, or one hyperthread to steer its sibling hyperthread, or a guest operating system to steer its hypervisor.

We have written previously about the responses from the industry. By now, Meltdown has been patched in Windows, Linux, macOS, and at least some BSD variants. Spectre is more complicated; at-risk applications (notably, browsers) are being updated with techniques that mitigate the array-bounds variant, while the branch-prediction version requires both operating system and processor microcode updates. While AMD initially downplayed the significance of this attack, the company has since published a microcode update to give operating systems the control they need.

These different mitigation techniques all come with a performance cost. Speculative execution is used to make the processor run our programs faster, and branch predictors are used to make that speculation adaptive to the specific programs and data that we're using. The countermeasures all make that speculation somewhat less powerful. The big question is, how much?

by Peter Bright, ARS Technica |  Read more:
Image: Aurich/Getty
[ed. A graduate seminar in micro-processor technology.]