What Happened Today: June 14, 2023
Nord Stream whodunit update; $400 billion fraud; Cancer drug shortage; Arthur Herman on the AI war
The Big Story
The mystery of last year’s Nord Stream pipeline sabotage has finally begun to unravel in recent weeks, with both the announcement that the Biden administration was warned about a Ukrainian plan to destroy the pipelines three months before they went down on Sept. 26 and new details from a German investigation that point to possible Polish collaboration in the attacks. Dutch intelligence services warned the CIA in June 2022 that the Ukrainians were planning an attack on the Nord Stream 1 pipeline, which ran under the Baltic Sea from Russia to Germany, but later shelved the plan, Dutch public broadcaster NOS reported on Tuesday. American officials in turn informed the German government of the intel and warned the Ukrainians against pursuing such an attack, according to a separate report in The Wall Street Journal that also cites Dutch intelligence as the original source of the tip to U.S. officials. While Ukraine’s President Zelenskyy has maintained that his nation was not involved in the attack on the pipeline, a third report, in The Washington Post, claims that Zelenskyy was purposefully kept out of the loop by his top general, Valerii Zaluzhny, as a way of providing the leader with plausible deniability for the attack.
When the attacks on the Nord Stream pipelines occurred last September, much of the initial speculation in the press focused on possible Russian involvement in the plot—an improbable, though not impossible, theory given that the pipelines were major economic and strategic assets for Russia. More straightforward theories held that the attack was likely carried out by Ukrainian or NATO forces. Then, in February of this year, maverick American journalist Sy Hersh published a blockbuster article alleging in exhaustive detail that the attack was planned and led by the United States. Hersh’s theory spread far and wide but was quickly taken apart in Tablet by Lee Smith, who cataloged the many factual inconsistencies in Hersh’s account as well as the unreliability of its sourcing.
Germany has been conducting its own investigation of the attacks and apparently has gathered significant evidence pointing to Poland as a staging ground. The German findings point to a German yacht, the Andromeda, rented in Germany by a Ukrainian-owned travel service based in Poland, that might have been used to plant the explosives in the attack. The Polish government denies any involvement and says the Andromeda story could be Russian misinformation.
Read it here: https://edition.cnn.com/2023/06/13/europe/nordstream-plot-dutch-cia-ukraine-russia-intl/index.html
And read Lee Smith’s February article debunking the Nord Stream conspiracies: https://www.tabletmag.com/sections/news/articles/sy-hersh-swings-big-misses-lee-smith
In the Back Pages: How to Win the AI War
The Rest
→ On Tuesday, former President Donald Trump turned himself in at the federal courthouse in Miami on charges that he mishandled classified documents after leaving the White House. As Trump sat with his arms crossed at the arraignment in the first case of a former president facing federal criminal charges, his team pleaded not guilty to the 37 charges against him. He was not required to give up his passport or restrict his travel, nor to pay bail to be released. Immediately after the proceedings, Trump headed to the legendary Cuban restaurant Versailles, in Little Havana, where supporters sang Happy Birthday to him in advance of his 77th birthday, today. Trump replied, “Some birthday. Some birthday.” Later Tuesday evening at his Bedminster, New Jersey, golf club, Trump defended his actions, telling the crowd, “Whatever documents a president decides to take with him, he has the right to do so. It’s an absolute right. This is the law.” He accused President Biden of using the case against him to remove his “top political opponent” in the midst of an election and said that if he wins in 2024, he’s going to “appoint a real special prosecutor to go after the most corrupt president in the history of the United States of America, Joe Biden, and the entire Biden crime family.”
→ Number of the Day: $400 billion
That’s the amount of COVID-19 pandemic assistance that the Associated Press estimates was taken fraudulently from the $4.2 trillion the government has paid out. According to the Small Business Administration’s Office of Inspector General, fraud in the Economic Injury Disaster Loan program was at least $86 billion and in the Paycheck Protection Program at least $20 billion. In an effort to get the money out the door quickly to those in need, the government extended a blind trust that was bound to invite fraud, U.S. Justice Department Inspector General Mike Horowitz told the AP: “If you open up the bank window and say, give me your application and just promise me you really are who you say you are, you attract a lot of fraudsters, and that’s what happened here.”
→ The Pentagon is having trouble arming itself, according to a new report from the Government Accountability Office. More than half of the 26 major acquisition programs evaluated by the GAO are delayed due to “supplier disruptions, software development delays, and quality control deficiencies.” Delayed programs include the LGM-35A Sentinel intercontinental ballistic missile (ICBM) and the Zumwalt-class destroyers. The ICBM delays are especially concerning, as the current land-based leg of the American nuclear triad, the Minuteman III, is essentially still running on infrastructure and parts from the 1960s. The new missile, originally due to be operational in 2029, is now set to debut in 2030.
→ If you’re in New York, head to the Yeshiva University Museum for an unprecedented exhibition on Maimonides, the “Rambam,” one of the supreme biblical commentators of the past thousand years. The exhibit, titled “The Golden Path: Maimonides Across Eight Centuries,” has collected artifacts from across the world, including actual handwritten texts by Maimonides. “Arguably, no other individual has had a more pervasive or enduring effect on Jewish religious life over the past millennium than Maimonides,” said curator Dr. David Sclar to Ynet News. The exhibition runs through the end of 2023.
→ Last Wednesday, the National Comprehensive Cancer Network released the results of a survey that shows more than 90% of the nation’s largest cancer centers are running low on crucial chemotherapy drugs like carboplatin and cisplatin. While they are mostly still able to treat current patients, oncologists are concerned. In fact, to combat the shortages, the FDA signed off on a non-FDA-approved version of cisplatin from China. The shortage is partially due to supply chain issues, with a major producer in India offline, but also due to the low profitability of generics like carboplatin and cisplatin, which means drug companies aren’t devoting as much production to the drugs. Marina Sharifi, medical oncologist at the University of Wisconsin’s Carbone Cancer Center, told Axios, “This is the first time I’ve ever experienced drug rationing in my career.”
→ Tweet of the Day:
https://twitter.com/GuyDealership/status/1668748263300710401
This amazing graphic shows car ownership correlation by political party and voter turnout. Tesla owners are the biggest Democrats, followed closely by Mercedes owners, who are more likely to vote. Toyota owners are split right down the middle and are fairly reliable voters, while Lincoln owners are the most likely Republicans to vote—but GMC owners are the most Republican overall. Apparently, Mitsubishi and Infiniti owners just don’t like going to the polls.
→ Prior to the outbreak of wildfires in Quebec that have turned New York orange, the province of Alberta experienced a widespread eruption of its own that Alberta Premier Danielle Smith posited could have been started intentionally. In a podcast interview last week, Smith said, “I’m very concerned that there are arsonists, and there have been stories as well that we’re investigating. … We have almost 175 fires with no known cause at the moment.” But Alberta Wildfire spokesperson Melissa Story told the Toronto Star that while there are arson-related wildfires every year, “it’s not an emerging trend that we’re concerned about right now.” That won’t totally satisfy the Twitter crowd, though, as a time-lapse satellite video has been making the rounds showing the Quebec fires popping up nearly simultaneously over many miles of territory. Philippe Bergeron, a spokesperson for Quebec’s forest fire prevention agency, told AFP Canada in an email that the agency can trace 60% of the fires to lightning strikes from a storm on June 1.
→ The Messenger News is reporting that U.S. officials are making evacuation plans to get the 80,000 U.S. citizens out of Taiwan in the event of a war with China. While the planning has been going on for months, “it’s heated up over the past two months or so,” according to one of its anonymous sources. A State Department source told the outlet that the planning is being done quietly to prevent concern among Americans and Taiwanese who might interpret the preparations as a sign that war with China is imminent. The logistics of such an operation would be exceedingly difficult, especially in a surprise attack, as most of Taiwan’s airports are on the west side of the island, closest to China. Mark Cancian, a senior advisor at the Center for Strategic and International Studies, said a naval extraction would also be a nightmare: “Imagine a D-Day invasion and then a third country, Switzerland or something like that, wants to send a cruise ship through the U.S. fleet to Normandy to pick up its citizens.”
→ Thread of the Day:
https://twitter.com/benryanwriter/status/1668781806726836227
New York Times, Guardian, and NBC contributor Benjamin Ryan breaks down the latest American Medical Association policy on “gender-affirming care,” which, he says, cites more than 2,000 studies to support its conclusions but does not mention that most of those studies did not concern children, the group in question. The AMA statement claims to be “evidence-based,” but Ryan points to a recent article in the British Medical Journal that illustrates the low quality of that evidence. He also points out that the AMA’s description of the treatments as “life-saving” is misleading, as no correlation between these treatments and suicide prevention has been substantiated.
TODAY IN TABLET:
My Mother and Me by Alberta Nassi
I could not save her. But I could save myself.
The Rabbi’s Wife Was a Spy by Motti Inbari
Ruth Blau’s shocking, true-life journey from Catholicism to the top of the ultra-Orthodox Neturei Karta—and from the French Resistance to facing off against the Mossad
SCROLL TIP LINE: Have a lead on a story or something going on in your workplace, school, congregation, or social scene that you want to tell us about? Send your tips, comments, questions, and suggestions to scroll@tabletmag.com.
How to Win the AI War
To avoid losing to China or going down the Chinese path of state control, we need a national strategy for artificial intelligence. Here’s where it should start.
Virtually everything that everyone has been saying about AI has been misleading or wrong. This is not surprising. The processes of artificial intelligence and its digital workhorse, machine learning, can be mysteriously opaque even to their most experienced practitioners, let alone their most ignorant critics.
But when the public debate about any new technology starts to get out of control and move in dangerous directions, it’s time to clue the public and politicians in on what’s really happening and what’s really at stake. In this case, it’s essential to understand what a genuine national AI strategy should look like and why it’s crucial for the U.S. to have one.
The current flawed paradigm reads like this: How can the government mitigate the risks and disruptive changes flowing from AI’s commercial and private sector? The leading advocate for this position is Sam Altman, CEO of OpenAI, the company that set off the current furor with its ChatGPT application. When Altman appeared before the Senate on May 16, he warned: “I think if this technology goes wrong, it can go quite wrong.” He also offered a solution: “We want to work with the government to prevent that from happening.”
By volunteering for regulation, Altman gets to use his influence over the process to set rules that he believes will favor his company, and government is all too ready to cooperate. Government also sees an advantage in hyping the fear of AI and fitting it into the regulatory model as a way to maintain control over the industry. But given how few members of Congress understand the technology, their willingness to oversee a field that commercial companies founded and have led for more than two decades should be treated with caution.
Instead, we need a new paradigm for understanding and advancing AI—one that will enable us to channel the coming changes to national ends. In particular, our AI policy needs to restore American technological, economic, and global leadership—especially vis-à-vis China—before it’s too late.
It’s a paradigm that uses public power to unleash the private sector and transform the national landscape in order to win the AI future.
A reasonable discussion of AI has to start by disposing of two misconceptions.
First is the threat of artificial intelligence applications becoming so powerful and pervasive that, at a late stage of their development, they decide to replace humanity—a scenario known as Artificial General Intelligence (AGI). This is the Rise of the Machines fantasy left over from The Terminator movies of the 1980s, when artificial intelligence research was still in its infancy.
The other is that the advent of AI will mean a massive loss of jobs and the end of work itself, as human labor—and even human purpose—is replaced by an algorithm-driven workforce. Fearmongers like to point to the recent Goldman Sachs study suggesting that AI could replace more than 300 million jobs in the United States and Europe—while also adding 7% to the total value of goods and services around the world.
Most of these concerns stem from the public’s misunderstanding of what AI and its internal engine, Machine Learning (ML), can and cannot do.
ML describes a computer’s ability to recognize patterns in large sets of data—whether those data are sounds, images, words, or financial transactions. Scientists call the mathematical representation of these data sets a tensor. As long as data can be converted into a tensor, it’s ready for ML and ML’s more sophisticated offspring, Deep Learning, which builds algorithms that mimic the brain’s neural networks, creating self-correcting predictive models by repeatedly testing them against datasets to refine and validate the initial model.
The result is a prediction curve based on past patterns (e.g., given the correlation between A and B in the past, we can expect AB to appear again in the future). The more data, the more accurate the predictive model becomes. Patterns that were unrecognizable in tens of thousands of examples can suddenly be obvious in the millionth or ten millionth example. They then become the model for writing a ChatGPT essay that can imitate the distinct speech patterns of Winston Churchill, or for predicting fluctuations in financial markets, or for defeating an adversary on the battlefield.
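The prediction-curve idea can be sketched in a few lines of code (a toy illustration, not drawn from the article): fit past (A, B) pairs with ordinary least squares, then project the pattern forward. Real ML systems apply the same logic to vastly larger datasets and far more complex models.

```python
# Toy sketch of pattern-based prediction: learn the A-B relationship
# from past examples, then use it to predict the next B.

def fit_line(pairs):
    """Ordinary least-squares fit of B = slope * A + intercept."""
    n = len(pairs)
    mean_a = sum(a for a, _ in pairs) / n
    mean_b = sum(b for _, b in pairs) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in pairs)
    var = sum((a - mean_a) ** 2 for a, _ in pairs)
    slope = cov / var
    return slope, mean_b - slope * mean_a

# Past pattern: B has consistently tracked 2*A + 1.
history = [(1, 3), (2, 5), (3, 7), (4, 9)]
slope, intercept = fit_line(history)

def predict(a):
    """Expect the past A-B correlation to appear again in the future."""
    return slope * a + intercept

print(predict(10))  # the model projects the pattern forward: 21.0
```

The more examples in `history`, the more accurate and stable the fitted curve becomes, which is the sense in which patterns invisible in small datasets emerge from large ones.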
AI/ML is all about using pattern recognition to generate prediction models, which constantly sharpen their accuracy through the data feedback loop. It’s a profoundly powerful technology, but it’s still very far from thinking, or anything approaching human notions of consciousness.
As AI scientist Erik Larson explained in his 2021 book The Myth of Artificial Intelligence, “Machine learning can never supply real understanding because the analysis of data does not bridge to knowledge of the causal structure of the world [which is] essential for intelligence.” What machine learning does—associating data points with each other—“doesn’t ‘scale’ to causal thinking or imagining.” An AI program can mimic this kind of intelligence, perhaps enough to fool a human observer. But its inferiority to that observer in thinking, imagining, or creating remains permanent.
Inevitably AI developments are going to be disruptive—they already are—but not in the way people think or the way the government wants you to think.
The first step is realizing that AI is a bottom-up, not a top-down, revolution. It is driven by a wide range of individual entrepreneurs and small companies, as well as the usual mega players like Microsoft, Google, and Amazon. Done right, it’s a revolution that means more freedom and autonomy for individual users, not less.
AI can perform many of the menial, repetitive tasks that most of us would associate with human intelligence. It can sort and categorize with speed and efficiency; it can recognize patterns in words and images most of us might miss, and put together known facts and relationships in ways that anticipate the development of similar patterns in the future. As we’ll demonstrate, AI’s unprecedented power to sharpen the process of predicting what might happen next, based on its insights into what’s happened before, actually empowers people to do what they do best: decide for themselves what they want to do.
Any technological revolution so sweeping and disruptive is bound to generate risks, as did the Industrial Revolution in the late eighteenth century and the computer revolution in the late twentieth. But in the end the risks are far outweighed by the endless possibilities. That’s why calls for a moratorium on large-scale AI research, or creating government entities to regulate what AI applications are allowed or banned, not only fly in the face of empirical reality but play directly into the hands of those who want to use AI as a tool for furthering the power of the administrative, or even absolute, state. That kind of centralized top-down regulatory control is precisely the path that AI development has taken in China. It is also the direction that many of the leading voices calling for AI regulation in the U.S. would like our country to move in.
Critics and AI fearmongers can’t escape one ineluctable fact: there is no way to put the AI genie back in its bottle. According to Tracxn Technologies, a company that tracks startups, there were 13,398 AI startups in this country at the end of 2022. A recent Adobe study found that 77% of consumers now use some form of AI technology. A McKinsey survey on the state of AI in 2022 found that AI adoption had more than doubled since 2017 (from 20% to 50%), with 63% of businesses expecting investment in AI to increase over the next three years.
Once it’s clear what AI can’t do, what can it do? This is what Canadian AI experts Ajay Agrawal, Joshua Gans, and Avi Goldfarb explain in their 2022 book, Power and Prediction. “What happens with AI prediction,” they write, “is that prediction and judgment become decoupled.” In other words, AI uses its predictive powers to lay out increasingly exact options for action, but the ultimate decision about which option to choose still belongs to the user’s judgment.
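Agrawal, Gans, and Goldfarb’s decoupling can be illustrated with a small sketch (the loan scenario and all numbers here are hypothetical, not taken from the book): the machine supplies predicted probabilities, while the human supplies the payoffs—the judgment about what each outcome is worth—and the decision follows from combining the two.

```python
# Toy sketch of decoupling prediction (machine) from judgment (human).

def choose(options, predicted_prob, payoff):
    """Pick the option with the best expected payoff.

    predicted_prob: the machine's prediction (probability of success).
    payoff: the human's judgment of what success/failure are worth.
    """
    def expected_value(opt):
        p = predicted_prob[opt]
        return p * payoff[opt]["success"] + (1 - p) * payoff[opt]["failure"]
    return max(options, key=expected_value)

options = ["approve_loan", "deny_loan"]
predicted_prob = {"approve_loan": 0.9, "deny_loan": 1.0}  # machine's prediction
payoff = {                                                # human judgment
    "approve_loan": {"success": 100, "failure": -500},
    "deny_loan": {"success": 0, "failure": 0},
}
print(choose(options, predicted_prob, payoff))  # approve_loan (EV 40 vs. 0)
```

Swap in a more risk-averse payoff table and the same predictions yield a different decision, which is the authors’ point: better prediction makes human judgment more valuable, not obsolete.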
Here’s where scary predictions that AI will put people out of work need to be put in proper perspective. The recent Goldman Sachs report predicted the jobs lost or displaced could be as many as 300 million; the World Economic Forum put the number at 85 million by 2025. What these predictions don’t take into account is how many jobs will be created thanks to AI, including jobs with increased autonomy and responsibility, since AI/ML will be doing the more tedious chores.
In fact, a January 2022 Forbes article summarized a study by the University of Warwick this way: “What appears clear from the research is that AI and associated technologies do indeed disrupt the labor market with some jobs going and others emerging, but across the board there are more jobs created than lost.”
Wide use of AI has the potential to move decision-making down to those who are closest to the problem at hand by expanding their options. But if government is allowed to exercise strict regulatory control over AI, it is likely to both stifle that local innovation and abuse its oversight role to grant the government more power at the expense of individual citizens.
Fundamentally, instead of being distracted by worrying about the downsides of AI, we have to see this technology as essential to a future growth economy as steam was to the Industrial Revolution or electricity to the second industrial revolution.
The one country that understood early on that a deliberate national AI strategy can make all the difference between following or leading a technological revolution of this scale was China. In 2017 Chinese President Xi Jinping officially set aside $150 billion to make China the first AI-driven nation by 2030. The centerpiece of the plan is a massive police-surveillance apparatus that gathers data on citizens whenever and wherever it can. In a recent U.S. government ranking of companies producing the most accurate facial recognition technology, the top five were all Chinese. It’s no wonder that half of all the surveillance cameras in the world today are in China, while companies like Huawei and TikTok are geared to provide the Chinese government with access to data outside China’s borders.
By law, virtually all the work that Chinese companies do in AI research and development supports the Chinese military and intelligence services in sharpening their future force posture. Meanwhile, China enjoys a booming export business selling those same AI capabilities to autocratic regimes from Iran and North Korea to Russia and Syria.
Also in 2017, the same year that Xi announced his massive AI initiative, China’s People’s Liberation Army began using AI’s predictive aptitude to give it a decisive edge on the battlefield. AI-powered military applications included enhanced command-and-control functions, building swarm technology for hypersonic missiles and UAVs, as well as object- and facial-recognition targeting software and AI-enabled cyber deterrence.
No calls for an international moratorium will slow down Beijing’s work on AI. They should not slow America’s efforts, either. That’s why former Google CEO Eric Schmidt, who co-authored a book with Henry Kissinger expressing great fears about the future of AI, has also warned that the six-month moratorium on AI research some critics recently proposed would only benefit Beijing. Back in October 2022 Schmidt told an audience that the U.S. is already steadily losing its AI arms race with China.
And yet the United States is where artificial intelligence first started back in the 1950s. We’ve been the leaders in AI research and innovation ever since, even if China has made rapid gains—China now hosts more than one thousand major AI firms, all of which have direct ties to the Chinese government and military.
It would clearly be foolish to cede this decisive edge to China. But the key to maintaining our advantage lies in harnessing the technology already out there, rather than painstakingly building new AI models to specific government-dictated requirements—whether it’s including “anti-bias” applications, or limiting by law what kind of research AI companies are allowed to do.
What about the threat to privacy and civil liberties? Given the broad, ever-growing base of private AI innovation and research, the likelihood of government imposing a China-like monopoly over the technology is less than the likelihood that a bad actor, whether state or non-state, will use AI for deception and “deep fake” videos to disrupt and confuse the public during a presidential election or a national crisis.
The best response to the threat, however, is not to slow down, but to speed up AI’s most advanced developments, including those that will offer means to counter AI fakery. That means expanding the opportunities for the private sector to carry on by maintaining as broad a base for AI innovation as possible.
For example, traditional microprocessors and CPUs are not designed for ML. That’s why, with the rise of AI, Graphics Processing Units (GPUs) are in demand. What was once relegated to high-end gaming PCs and workstations is now the most sought-after processor in the public cloud. Unlike CPUs, GPUs come with thousands of cores that speed up the ML training process. Even for running a trained model for inferencing, more sophisticated GPUs will be key for AI.
So will Field Programmable Gate Array (FPGA) processors, which can be tailored for specific types of workloads. Traditional CPUs are designed for general-purpose computing, while FPGAs can be programmed in the field, after they are manufactured, for niche computing tasks such as training ML models.
The government halting or hobbling AI research in the name of a specious assessment of risks is likely to harm developments in both these areas. On the other hand, government spending can foster research and development, and help increase the U.S. edge in next-generation AI/ML.
AI/ML is an arena where the United States enjoys a hefty scientific and technological edge, a government willing to spend plenty of money, and obvious strategic and economic advantages in expanding our AI reach. So what’s really hampering serious thinking about a national AI strategy?
I fear what we are seeing is a failure of nerve in the face of a new technology—a failure that will cede its future to our competitors, China foremost among them. If we had done this with nuclear technology, the Cold War would have had a very different ending. We can’t let that happen this time.
Of course, there are unknown risks with AI, as with any disruptive technology. The speed with which AI/ML, especially in its Deep Learning phase, can arrive at predictive results startles even its creators. Similarly, the threat of deep fake videos and other malicious uses of AI is a warning about what can happen when a new technology runs off the ethical rails.
At the same time, the U.S. government’s efforts to censor "misinformation" on social media and the Biden White House’s executive order requiring government-developed AI to reflect its DEI ideology fail to address the genuine risks of AI, while using concerns about the technology as a pretext to clamp down on free speech and ideological dissent.
This is as much a matter of confidence in ourselves as anything else. In a recent blog post at Marginal Revolution, George Mason University professor Tyler Cowen expressed the issue this way:
“What kind of civilization is it that turns away from the challenge of dealing with more. . . intelligence? That has not the self-confidence to confidently confront a big dose of more intelligence? Dare I wonder if such societies might not perish under their current watch, with or without AI?”
China is confidently using AI to strengthen its one-party surveillance state. America must summon the confidence to harness the power of AI to our own vision of the future.
Arthur Herman is a senior fellow and director of the Quantum Alliance Initiative at Hudson Institute. He is also The New York Times bestselling author of Freedom’s Forge: How American Business Won World War II.