What Happened Today: June 12, 2023
JP Morgan Chase to pay Epstein victims; "Lab leak theory" gets more support; Novak is King
The Big Story
JPMorgan Chase announced on Monday that it is prepared to pay a $290 million settlement to victims of Jeffrey Epstein’s sex-trafficking scheme. While the bank claims no knowledge of wrongdoing and says “any association with him was a mistake and we regret it,” Epstein was a client from 2000 to 2013—during which time he was convicted of soliciting prostitution from a minor. The lawsuit in question was brought by a “Jane Doe” on behalf of herself and as many as 100 other women who accused the bank of “knowingly” profiting from Epstein’s sex-trafficking business.
The announcement of a potential settlement comes just a few weeks after JPMorgan Chase CEO Jamie Dimon was questioned by lawyers for the plaintiff. Dimon claimed in his deposition, “I don’t recall knowing anything about Jeffrey Epstein until the stories broke sometime in 2019.” However, former Chase corporate and investment banking head Jes Staley, who was very close with Epstein, said in legal documents that Dimon knew who Epstein was and that Epstein had been a client at Chase as far back as 2006. Staley is a key figure in the web of Epstein’s sexual and financial crimes, as it’s been shown that he exchanged more than 1,200 emails with the convicted pedophile while employed at JPMorgan—and, according to a suit filed by Chase against Staley, he watched Epstein sexually assault one of his victims. The bank is seeking damages from Staley for “failing to disclose pertinent information and abandoning (JPMorgan’s) interests in favor of his own and Epstein’s personal interests.”
In the Back Pages: The Cyborgs Go to Washington
The Rest
→ Former Italian prime minister Silvio Berlusconi died on Monday at age 86 in Milan. Sometimes called the Teflon Don, the media magnate turned scandal-prone politician had a career arc from billionaire to world leader that was cited as a precursor to the rise of Donald Trump. Berlusconi, who spent his younger years singing on cruise ships, began accumulating his fortune in his late twenties, when he started a real estate business in Milan. He poured his real estate money into the creation of a new TV network that showed American fare, and later into publishing, retail, and the AC Milan football club. In the 1990s, Berlusconi entered politics as the head of the conservative Forza Italia Party and ultimately served as prime minister three times between 1994 and 2011. He was forced to resign in 2011 amid allegations that he hosted “bunga bunga” parties with underage girls and had committed tax fraud (he was convicted of the fraud in 2012). Down but never out, Berlusconi revived his political fortunes in 2019, when he was elected to the European Parliament, and his Forza Italia Party returned to power in the 2022 elections as part of Prime Minister Giorgia Meloni’s coalition.
→ A new report from the United Kingdom’s Sunday Times digs deep into the origins of the COVID-19 pandemic and makes public connections between the Wuhan Institute of Virology, the New York-based EcoHealth Alliance, and the lab of American virologist Ralph Baric at the University of North Carolina. The article asserts that EcoHealth leader Peter Daszak lied to the National Institutes of Health about the danger of the experiments he was funding at Wuhan while receiving millions of dollars to help his Chinese counterparts test the limits of coronavirus biology. To create more highly transmissible and pathogenic versions of coronavirus, Wuhan researcher Shi Zhengli turned to Baric at North Carolina for his expertise in producing viral chimeras (combinations of viral genes) so that she could then test them on humanized mice in Wuhan. All this in the name of stopping a future pandemic. The article further asserts that the Chinese military ultimately directed this research toward the creation of bioweapons, of which the novel coronavirus may very well be one. However, the timeline provided by The Sunday Times puts the accidental release of the COVID-19 virus in November 2019, with the documented illness of several researchers from the lab. Serological studies from Italy show the presence of COVID-19 antibodies in patients as early as September 2019, and wastewater samples from Barcelona show evidence of the virus as early as March 2019, so we’ll leave the Sunday Times report under Rumor Radar status, for now.
→ The creator of the Pfizer vaccine, German biotech company BioNTech, was sued on Monday in a German court over claims that the COVID-19 vaccine it developed harmed one woman, causing upper-body pain, swollen extremities, fatigue, and a sleeping disorder. The woman’s lawyer, Tobias Ulbrich, says his strategy is to challenge the government’s prior assertion that the vaccine’s benefits to his client outweighed its risks. His firm, Rogers & Ulbrich, is pursuing 250 cases of vaccine injury, and another firm, Caesar-Preller, is pursuing 100. Should they win the suits, the compensation may have to come directly from the European Union or the German government, as their contracts with BioNTech and Pfizer largely protect the companies from liability. Earlier this year, German Health Minister Karl Lauterbach told German television, “I honestly feel very sorry for these people. There are severe disabilities, and some of them will be permanent. So it’s hard. What we do as a state is that the health insurance companies pay the treatment costs, and, well, the federal states bear the support costs, if support is necessary.”
→ Tweet of the Day:
https://twitter.com/amylutz4/status/1668228615048163328
In reference to the destruction of the critical Kakhovka dam last week, Time Magazine published an article titled “How Ukraine’s Dam Collapse Could Become the Country’s ‘Chernobyl.’” Another demonstration of the caliber of hire at our premier media institutions. Gently correcting the national magazine’s unforced error, tweeter Amy Lutz responded to Time, “Um. I think Chernobyl was Ukraine’s Chernobyl.”
→ The American Theatre Wing’s Tony Awards took place on Sunday, after some concern they would be canceled due to the current Writers Guild of America strike. The WGA gave the Tonys the go-ahead to put on the show, as long as it didn’t use scripted material that would normally be provided by union members, so the show went on (as it must) totally unscripted. Jewish stories were well represented in the winners’ circle: Tom Stoppard won Best Play for his Holocaust opus Leopoldstadt (based on his own family history), and the revival of Jason Robert Brown’s Parade, based on the story of Leo Frank, won Best Revival of a Musical. But as is so often the case with awards shows now, the biggest moments of the night came from attacks on conservatives and paeans to identity. Actress Denée Benton called Florida Gov. Ron DeSantis a “Grand Wizard” (as in a leader of the Ku Klux Klan), and two other Tony winners, J. Harrison Ghee and Alex Newell, made sure to inform the audience that it was historic that they both won as “nonbinary” performers. Last year’s Tonys drew the second-lowest viewership in the show’s history.
→ Number of the Day: 6.83 million
That’s how many couples filed a marriage registration last year in China, out of a population of more than 1 billion. It’s a drop of 800,000 from 2021 and the lowest number of marriages in China since the country started keeping records. The birth rate also fell, to 6.77 births per 1,000 people, likewise the lowest on record. But the Chinese Communist Party isn’t going to let the decline go gently into that good night. In May, the government instituted a pilot project to encourage women to marry and have babies, and many provinces have already instituted more practical incentives, such as tax credits and housing subsidies.
→ Former Scottish National Party leader Nicola Sturgeon was arrested and questioned on Sunday in connection with an ongoing investigation into the misuse of donations. Hundreds of thousands of pounds, collected starting in 2017 from supporters eager for a national referendum on independence from the United Kingdom, were supposed to be “ring-fenced” for future use toward that goal, but a 2019-2020 review of the SNP’s bank accounts showed only $121,140 in the bank. In April, police seized a luxury camper van belonging to Sturgeon’s mother-in-law. Sturgeon was not charged after Sunday’s questioning, and she maintains, “Innocence is not just a presumption I am entitled to in law. I know beyond doubt that I am in fact innocent of any wrongdoing.”
→ Tennis has a new king, and his name is Novak. Novak Djokovic won his record-setting 23rd major title at the French Open on Sunday in a commanding victory over young upstart Casper Ruud. Ruud said of Djokovic’s tenacity and endurance, “It’s just annoying for me, but it’s very, very impressive.” The road to 23 majors has indeed required a great degree of tenacity and endurance from Djokovic, who refused to get the COVID-19 vaccine and was consequently not allowed to play at the 2022 Australian or U.S. Open. But for those absences, he might well be sitting on 25 majors by now. The Serbian fighter still has a chance to make more history this year: Wins at Wimbledon and the U.S. Open would make him only the third man in history to complete a calendar-year Grand Slam. With Rafa Nadal out with an injury until next season, it seems more possible than ever.
TODAY IN TABLET:
The Unsexiness of Sex Positivity by Ginevra Davis
As sex has become public, it has also become boring. A revival of discretion might be the only way forward.
A Place to Nurture Plants—and People by Ani Wilcenski
At an organic farm in northern Israel, troubled teenagers find a refuge where they can change the course of their lives.
SCROLL TIP LINE: Have a lead on a story or something going on in your workplace, school, congregation, or social scene that you want to tell us about? Send your tips, comments, questions, and suggestions to scroll@tabletmag.com.
The Cyborgs Go to Washington
Who really benefits from the fearmongering calls for AI regulation?
Sam Altman is not exactly what you’d call “a humanist.” The CEO of OpenAI (the company behind ChatGPT) believes we are designing our own evolutionary superiors through artificial intelligence. Humans, he says, can either embrace a cyborgian future or resign themselves to extinction. To that end, Altman plans on uploading his brain to the cloud. He has even put a $10,000 deposit on an embalming procedure that, while “100 percent fatal,” will also supposedly preserve his brain so that he can be resurrected via computer simulation. Only artificial intelligence will allow us to “fully understand the universe,” Altman wrote last year. “Our unaided faculties are not good enough.”
This is all to say that Altman approaches AI regulation from a rather unsavory vantage point, assuming you still have a soft spot for those bumbling, pre-cyborgian apes we call humans. And yet, appearing before the Senate Judiciary Committee in May, Altman played the role of benevolent technologist, concerned above all with helping the government promote human flourishing in the midst of technological upheaval. He reminded the senators on multiple occasions that OpenAI is a nonprofit, leaving out the part where investors can still receive a 100x return before the money makes its way back to the nonprofit arm. He also called for proactive regulations: It would be a “great idea” to put scorecards on AI systems, “great for the world” to set up international AI safety standards, and “very important” for governments to align AI systems with social values. The senators either all fell for Altman’s schtick or, more likely, recognized his hustle as mutually beneficial. It was a chummy affair, the cherry on top of an enormously successful lobbying campaign that saw Altman meet with more than 100 members of Congress.
OpenAI achieved something undeniably remarkable in training silicon wafers to mimic human writing and thought. At the same time, nobody knows what comes next. Will it eliminate white-collar jobs? Upend journalism? Enchant incels with girlfriend simulators? Or, as Nassim Nicholas Taleb puts it, is ChatGPT just “a powerful cliché parroting engine”? Perhaps it can be all these things at once. After all, most desk jobs are tedious, most writing on the internet is cliché, and, to the lonely, even some small talk can be sustaining. The New Yorker may never use AI to write articles, but Axios and ESPN might, and BuzzFeed and Insider have already started. As for white-collar jobs, guesses range from labor market boom to fully automated corporate capitalism. IBM paused hiring for around 7,800 roles that the company said could be replaced by AI, but the publicity around the move suggests IBM was also trying to advertise its own AI automation services. Meanwhile, LinkedIn abounds with self-styled AI influencers preying on fears of professional obsolescence.
While it isn’t clear what Congress should do, everyone seems to agree it needs to do something. Yet our legislators have shown time and again that they can’t meaningfully regulate the tech industry, which has spent over $277 million on lobbying since 2020. Silicon Valley has only grown more powerful amid perpetual promises from Democrats and Republicans alike to rein in Big Tech. Every time something gets off the ground, the Big Tech lobbying apparatus ends up either killing the bill, neutering it, or appropriating it to serve the industry’s own interests. Last year, the industry managed to do a little of all three, banding together to pass a $76-billion corporate subsidy package, block a bill targeting Big Tech’s anti-competitive practices, and promote a wolf-in-sheep's-clothing federal privacy bill that would have preempted more comprehensive legislation in the states.
The hype cycle around AI is making matters worse. With Altman and his allies leading the dance, Silicon Valley can help write the rules of its own regulation in a way that lets both Washington and the tech industry claim they are “doing something” while further entrenching their own power. The national security state has already used the specter of AI disinformation to justify an escalation of its surveillance and censorship campaigns. Silicon Valley will be happy enough to play along, so long as it doesn’t impact bottom lines. More likely, the resultant regulations will effectively designate a handful of “responsible” purveyors of AI, guaranteeing an oligopoly for Microsoft (which owns a sizable portion of OpenAI), Amazon, and Google.
To the disinformation brigade, generative AI represents an unprecedented threat warranting an unprecedented response. The CEO of NewsGuard called ChatGPT “the most powerful tool for spreading misinformation that has ever been on the internet.” Obama outlined “frightening and profound” implications for AI-generated disinformation on elections, democracy, and the legal system. Name-dropping on her podcast, Kara Swisher let slip that she had dinner with “Tony” Blinken, who was “very interested” in the State Department being part of an international effort to regulate AI. The imagined scenarios coming out of this sphere read like MSNBC fanfiction circa 2017: In the hands of Russia, China, and Iran, generative AI will be used to flood social media with authoritative-sounding disinformation, manipulating Americans at an unprecedented scale on issues such as vaccine efficacy and election procedures.
Because AI makes disinformation generation ubiquitous and near-instantaneous, content moderation systems will need to be made ubiquitous and instantaneous too — or so the thinking goes. Meta says it has been working on “new AI systems to automatically detect new variations of content that independent fact-checkers have already debunked.” Advances in AI will make for more powerful and unintelligible content moderation systems, capable of picking up on the subtleties of speech and suppressing select viewpoints in near real-time. Once these systems are in place, AI disinformation campaigns can serve as boogeymen for clamping down on ideological dissent. Researchers at Georgetown, for example, found that “after seeing five short messages written by GPT-3 and selected by humans, the percentage of survey respondents opposed to sanctions on China doubled.” Their conclusion that GPT-3 is therefore a dangerous manipulation machine betrays a fundamental lack of faith in the critical thinking abilities of the general public. It also shows how “AI manipulation campaign” can easily be conflated with espousing the wrong views.
AI makes for great cover in part because these scenarios are plausible, and to some extent already happening. In late May, the S&P 500 momentarily dipped after several prominent news aggregators fell for an AI-generated image of an explosion outside the Pentagon. Anyone can go to ChatGPT today and tell it to write an article in the style of The New York Times about Russia nuking the United States. The result is somewhat convincing, if not exactly Pulitzer-worthy. An example of the output: “The scale of devastation and loss of life is yet to be fully assessed, but early reports indicate a catastrophic impact on both infrastructure and human lives.” OpenAI has attempted to train ChatGPT to refuse nefarious prompts, but the guardrails are still surprisingly permissive: It won’t write fake articles referring to Xi Jinping or Joe Biden, but it can make up news about malfunctioning election machines and summarize the key points of vaccine skeptics (with a long preamble about “the overwhelming scientific consensus”).
The focus on AI guardrails in the national security context betrays a fundamental (and likely willful) misunderstanding of the technology. Training large language models such as ChatGPT is incredibly expensive and resource-intensive, but once the models are created, they are relatively easy to download and run on a personal computer without any of the usual restrictions. An advanced generative AI model developed by Meta has already leaked online. It’s laughable to think Russia or Iran would be hindered by guardrails on commercial chatbots. This is something the technology maximalists actually get right: The cat is out of the bag, and licensing requirements would do little to prevent AI systems from getting into the wrong hands.
The more likely outcome is that licensing will increase existing market power, allowing the most powerful tech companies to lobby for safety standards only they can pass. In a recent interview, Microsoft President Brad Smith called for a licensing system that would ensure models are only developed in “large data centers where they can be protected from cybersecurity, physical security and national security threats.” The lack of subtlety here is almost comical: Microsoft, Amazon, and Google control virtually the entire U.S. public cloud market and hold highly coveted security clearances needed to bid on Department of Defense cloud contracts.
The Pentagon can also help tech giants take out their primary foreign AI competitors, Alibaba and Baidu. It’s therefore very much in the companies’ interest to hype up the apocalyptic potential of AI, whether they believe in it or not. A recent open letter signed by the likes of Altman, Bill Gates, and Microsoft CTO Kevin Scott said, in its entirety, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The RAND Corporation, a military-funded think tank, has already developed plans for preventing “bad actors” (read: China) from developing advanced AI systems. Ideas include embedding microelectronic safeguards in advanced chips to prevent anyone from developing models without U.S. permission. And while China has a horrible record on censorship and surveillance, it has already passed a set of AI safety regulations with key privacy protections we may never enjoy in the U.S.
All of this ends up feeling tiresome. We’ve seen this playbook before: censorship disguised as safety, economic warfare in the name of national security, profiteering beneath a veneer of social welfare. What’s particularly concerning about the AI hype cycle is that, as with the best lies, it contains more than a kernel of truth. Sam Altman and his ilk are true believers, even if they are also opportunists. They know fear is a powerful tool for coercion, but they also know there is something to fear. The hype cycle around AI makes it seem as though everything will become radically different, and while that may eventually prove true, for now it just distracts us from all that will stay the same.
Hirsh Chitkara is a writer living in New York.
A question, rarely asked, and seldom answered:
What could go wrong?