12/19/18 Michael Klare: The Catastrophic Consequences of Nuclear AI

Dec 21, 2018 | Interviews

Michael Klare joins Scott to talk about his recent article, “Alexa, Launch Our Nukes!”, about the dangers of a future where our weapons systems are controlled by artificial intelligence—indeed, they already are to an ever-greater extent. Computers, of course, have a big advantage over humans in processing lots of data quickly, so they can conceivably do a much better job of calculating when and where to fire weapons, running simulations of different battle scenarios, distinguishing valid targets based on image recognition, and so on. What they can’t do is make the kind of decision that might mean losing a conventional war rather than starting a nuclear one. Moreover, computers are only as good as the humans programming them, and those humans can and do make mistakes. Unfortunately, we have a tendency to trust computers as inherently immune to error.


Michael Klare is the author of The Race for What’s Left and a regular contributor at TomDispatch.com. Find him on Twitter @mklare1.

This episode of the Scott Horton Show is sponsored by: Kesslyn Runs, by Charles Featherstone; NoDev NoOps NoIT, by Hussein Badakhchani; The War State, by Mike Swanson; WallStreetWindow.com; Roberts and Roberts Brokerage Inc.; Zen Cash; Tom Woods’ Liberty Classroom; ExpandDesigns.com/Scott; and LibertyStickers.com.

Check out Scott’s Patreon page.


Sorry, I'm late.
I had to stop by the Wax Museum again and give the finger to FDR.
We know Al-Qaeda, Zawahiri, is supporting the opposition in Syria.
Are we supporting Al-Qaeda in Syria?
It's a proud day for America.
And by God, we've kicked Vietnam syndrome once and for all.
Thank you very, very much.
I say it, I say it again, you've been had.
You've been took.
You've been hoodwinked.
These witnesses are trying to simply deny things that just about everybody else accepts as fact.
We came, we saw, he died.
We ain't killing they army, but we killing them.
We be on CNN like, say our name, bitch, say it, say it three times.
The meeting of the largest armies in the history of the world.
Then there's going to be an invasion.
All right, you guys, introducing Michael T. Klare.
He is a regular at Tom Engelhardt's great website, tomdispatch.com.
And therefore we run, I think, pretty much everything he writes at antiwar.com as well.
He is the Five College Professor Emeritus of Peace and World Security Studies at Hampshire College.
And a senior visiting fellow at the Arms Control Association.
His most recent book is The Race for What's Left.
Welcome back to the show.
How you doing, Michael?
Always a pleasure.
Very happy to have you here.
And what a great piece you wrote here.
It's scaring the hell out of me.
We've all seen this coming since Terminator 1 came out in 1984 or whatever it was.
But we're really getting there.
The title is Alexa, Launch Our Nukes! Artificial Intelligence and the Future of War.
So, well, I guess, what all exactly is at issue here?
We've known for a while that we have drones, of course, piloted by humans.
And yet there's been stories over the years now, going back quite a few years, I guess, about autonomous weapon systems where the artificial intelligence will be able to pick targets and kill them.
But it sounds like this is rapidly advancing, and the story itself is changing to just what degree American military force, or other militaries' forces, could be turned over to artificial intelligence for almost full command and control.
That's the way things appear to be moving.
Now, bear in mind, when drone technology was developed in the past 10, 15 years or so, the U.S. was fighting small wars, isolated wars in Africa and the Middle East.
And drones were used to help out our ground soldiers by spying out enemy hideouts, tracking down isolated bands of violent extremists in desert locations for the most part.
There was no opposition to this.
So these were really intended then for counterinsurgency, counterterrorism operations.
What the Pentagon is thinking about now is something very, very different.
They're thinking about high-intensity warfare with Russia and or China, where both sides are going to have armed robots and drones, and the intensity of fighting is going to be, you know, a thousand, a million times greater than the kind of fighting we see in, say, Iraq or Syria or Somalia.
It's going to be thousands and thousands of weapons all firing at once.
And in these conditions, no battlefield commander is expected to be able to keep track of all the moving pieces on the battlefield.
So increasingly, they expect that machines are going to have to take over the tracking, the monitoring of all these forces, and the decision-making, because humans won't be able to keep up with it all.
At least this is the way the Pentagon sees things.
Well, so, I mean, I'm just thinking immediately of Neil Postman and his great book, Technopoly, about the outsourcing of decision-making.
And you talk a bit about this here in the article, particularly in the context of nuclear weapons, where a human American maybe would decide that it'd be better to lose a war against China than to go ahead and break out the H-bombs, whereas a computer doesn't have feelings.
It might not think about things that way.
It's just garbage in, garbage out, logic according to the programmer.
And the computers very well could get to a position where, never mind H-bombs necessarily, but where they start making decisions that humans would never make.
You have this paradox, right, where you have these silicon chips and the flawless logic of the way computers work compared to the kinds of mistakes that mushy human brains can make.
But mushy human brains can also conceive of things, while computers can't conceive of anything at all.
A computer can't feel, can't know.
All it can do is process data.
It can't even really understand what information is, other than just pushing it through an algorithm.
And yet, humans seem to have this huge incentive to promote this sort of thing in order to diffuse responsibility away from themselves and to automate the decision making.
But it just seems like absolutely a recipe for disaster.
Like you're saying, a battlefield commander might not be able, a human might not be able to keep up with all the different things happening on the battlefield in a way that a computer could.
But he also could understand context in a way that a computer never could, no matter how smart it is.
Yeah, well I think that's the essence of the problem.
You also used the words mushy humans and flawless machines.
And one of the things we learn about machines is they're not flawless.
They contain all kinds of programming errors that are not obvious until something goes wrong.
A lot of the algorithms that would be used in making decisions in warfare are of the same nature as the algorithms used in self-driving cars.
And we know that self-driving cars have been put on the road, and sure, they work fine a lot of the time.
But we also know that they're capable of making mistakes.
They're capable of misinterpreting road signs and traffic conditions and killing people, as has occurred.
So imagine the same thing when you're trusting machines with decisions to launch nuclear weapons or not, or to escalate nuclear war or not.
We might assume that they're flawless in their decision-making as compared to mushy human beings, but that's not the case, because they've been programmed by humans, and they reflect the biases of humans and the mistakes that we made, we put into them.
And we can't see those mistakes until it's too late.
Right.
Yeah, I mean, that's the real joke, right?
Is, the computer will flawlessly execute your bad code.
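That joke can be made concrete with a toy sketch, not anything from the interview or the article, just a hypothetical unit mix-up of the kind that has actually doomed real systems. The names, numbers, and the function itself are invented for illustration; the point is only that the machine runs the flawed rule without complaint.

```python
# Toy illustration: the computer "flawlessly" executes a bad rule.
# Hypothetical scenario: a sensor reports altitude in meters, but
# the programmer wrote the abort threshold in feet.

SAFE_ALTITUDE_FEET = 1000  # intended: abort below 1,000 feet

def should_abort(sensor_altitude_meters: float) -> bool:
    # BUG: compares a reading in meters against a threshold in feet.
    # The machine never notices; it just evaluates the comparison.
    return sensor_altitude_meters < SAFE_ALTITUDE_FEET

# 500 meters is about 1,640 feet, comfortably above the limit,
# yet the buggy comparison triggers an abort anyway.
print(should_abort(500.0))   # True: a wrong answer, computed perfectly
print(should_abort(2000.0))  # False
```

The logic is internally consistent and executes without a single error message, which is exactly why the mistake stays invisible until something goes wrong.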
And, I mean, if it doesn't overheat or break down, and, you know, hardware and software are different, but yeah, you're right.
Of course the hardware can have all kinds of glitches and problems as well.
But I guess I was thinking more, not so much my opinion, just as that's like the conception, right?
Is that a human brain is wet and makes terrible mistakes.
I mean, I one time called for Glenn Greenwald to be executed, and then I only realized later that I meant Paul Wolfowitz, the guy that Greenwald's article was about.
But I didn't even notice my big mistake, where a computer wouldn't make a mistake like that.
That's the idea, right?
Is that computers are here to prevent the kinds of careless mistakes that a mushy human brain might make.
But then, as you're saying, they still open up entire new and different categories of mistakes to make instead.
Yes, there's that.
And now talking about the human brain, the event in my life that was so decisive because I was a young person and terrified, and living at ground zero in New York City was the Cuban Missile Crisis of 1962.
I could remember the headlines because it made such a big impression on me as a young person.
I thought my days were coming to an end.
And that was an occasion where the generals were ready to take action, were ready to put nuclear weapons on alert.
They were ready to go all the way.
And from the declassified documents acquired later, we know that the Russian generals were the same way.
They were prepared to push the button, to move to nuclear launch conditions.
And the political leaders, in this case President Kennedy in particular, and it was – who was it back then?
I can't – Brezhnev, I think.
Khrushchev, Khrushchev.
Khrushchev took a look at this and had second thoughts and said, wait a minute, wait a minute, maybe this is a bad idea.
Now you want somebody in the loop who's saying, wait a minute, wait a minute, maybe this is a bad idea.
You don't want this to happen automatically without anybody giving it a second thought.
Right.
And we see this all the time too, right, where it's not even necessarily computers, but say for example zero tolerance policies in school that say a weapon is a weapon, a drug is a drug.
So a girl gives an aspirin to another girl, they're both expelled.
The valedictorian has a butter knife on the back seat of her car parked out in the parking lot, expelled.
Zero tolerance, no thinking allowed, no judgment, no discrimination, no decision making by humans.
We go with what the words say on the paper.
The rule must be implemented.
And we see that all the time, right, like even when it comes down to expelling the valedictorian over a butter knife, they will do that rather than stop and say no, pardon, this is where our humanity interrupts and intervenes and we make decisions instead and do something different instead.
So if we're talking about a computer, same difference, right, the computer goes and implements the algorithm, whether it makes sense or not.
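The zero-tolerance logic described above is easy to caricature in a few lines of code. This is a hypothetical sketch with made-up rules, not any real school policy system: the program applies the rule exactly as written, with no channel for context or judgment.

```python
# Hypothetical zero-tolerance rule engine, in the spirit of the
# school policies described: "a weapon is a weapon, a drug is a drug."

BANNED_WORDS = ("knife", "aspirin", "tylenol")  # made-up banned list

def zero_tolerance_check(incident: str) -> str:
    report = incident.lower()
    if any(word in report for word in BANNED_WORDS):
        return "EXPELLED"  # no appeal, no distinction between cases
    return "OK"

# The valedictorian's butter knife gets the same verdict as a real
# weapon, because the algorithm cannot weigh context.
print(zero_tolerance_check("Butter knife on the back seat"))  # EXPELLED
print(zero_tolerance_check("History textbook"))               # OK
```

Whether the rule is enforced by an administrator reading a handbook or a computer running an algorithm, the structure is the same: the decision is made before the facts of the case exist.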
I'm not familiar with that valedictorian story, but if it's true, that's a pretty upsetting case.
That was just one, yeah.
I mean, and there was one, you know, yeah, the girls give each other Tylenol for menstrual cramps or something and then they get thrown out as though they're a heroin dealer, this kind of thing.
Yeah, no, I agree with you about that.
And that's why we have to in particular look at this issue of AI and decision making in wartime.
And Scott, the worry here is time, the compression of time, where the pace of warfare is accelerating or is expected to accelerate in the future, where you're not going to have as much time maybe as was once the case to ponder these decisions and say, wait a minute, let's think about this, because the pace of action is going to be so rapid that thinking will be pushed aside and it's going to be go or no go.
You have three minutes to make a decision.
This is what terrifies me.
We have to figure out how to slow this down, this process down, so that human minds can take a look at the possible implications of what's about to happen.
Yeah, you talk about in here where the paradox of where the more that we automate it, the faster all the decisions happen.
And on the side of our enemies, you know, would be presumably enemies from the Pentagon's point of view anyway, Russia and China, that they're using AI, too, and they're going to be making battlefield decisions at such a rapid rate that we can't have an AI gap.
We have to make sure and stay ahead on our end.
And so, it kind of keeps escalating and escalating.
But now that they've created this situation, they're basically forcing themselves. They've painted themselves into a corner, where, as you're showing at the end of this argument, at the end of the day, they're going to have to outsource virtually all of even their strategic decisions to the computers in order to keep up.
Yes, yes.
Yes, indeed.
And you know, one thing that's interesting I've discovered about all of this, that among the people who are raising the most alarm about this trend are people within the industry itself.
It's not so much people like you and me necessarily on the outside saying, nah, nah, nah, you know, this is bad.
I'm learning about this mainly from people within the industry who are saying, gee, what have we unleashed here?
Maybe we should give it some thought.
You see this in the revolt of people at Google, who opposed the use of their intellectual property or their intellectual creativity by the Pentagon to develop AI systems for killing people overseas.
And there was a huge revolt, and the company finally agreed not to work on that project.
It's called Project Maven.
It's about using image identification algorithms for identifying potential targets for assassination.
And the Google employees said that this is not what they signed up for.
They don't want their work being used for military purposes and are speaking out about the dangers of what might happen when this technology is turned over for military use without any kind of regulation or thought given to the possible legal and moral and ethical consequences of this.
Well, and that's so important, what you're saying, because it shows the Google employees revolted.
They didn't all resign over it.
They said, we're staying, but we don't want to do this.
And they all, I guess, signed petitions and this and that.
And Google actually, at least as far as we know, backed down on that and is not going to do that now.
So that really shows that resistance is possible, but it's not automatic.
People have to make the effort to go ahead and fight.
You know, I mentioned that book by Neil Postman, Technopoly.
The subtitle is The Surrender of Culture to Technology, where he's saying that anything, anything that can be invented will be invented.
And as soon as it's cheap enough, it will be implemented.
And there's no set of beliefs on the part of the population of this society that can possibly stop it.
And so, you know, this kind of thing is, it's a last ditch effort, but that's still pretty big.
And it goes to show that, you know, if people of whichever different political persuasions and ideologies, if they're really desperate to hang on to liberty, if we really want to live in a society where our sheriff's department is not flying drones over our heads, or where the traffic cameras aren't tracking and keeping tabs on all of our license plates, and where the NSA isn't keeping data on every place our cell phone has been for the last five years, and these kinds of things.
We have to make it that way.
We have to insist.
Because otherwise, this stuff is, you know, a tsunami.
And especially, it's all subsidized, all the worst surveillance type tech, and drone swarms, and God knows what they're doing.
It's all on the taxpayers' dime.
It's the Pentagon and the national security state that are pushing technology specifically in these directions.
But we can resist, and we can stop it by writing articles like you've written here, by using their internet to push back, like we're doing with this show and the rest.
And like you're saying with Google Maven there, now Amazon has said that they don't care.
They're going to keep up with all their CIA and military contracts, and I guess there will always be someone.
But it's sort of like with BDS.
What matters is making it an issue, and forcing people to argue about it, and to shed light on the problem.
Yeah, I tell you, if people are concerned about government invasion, intrusion through electronics, what you were saying about AI snooping, this is happening much faster than any of us think.
That's what I'm being told, that the technology of interception of our communications and our imagery is happening at an exceedingly rapid pace.
As you say, if this is a concern for people about preserving the privacy of their information and your location and all that, don't wait to raise objections.
The time is now.
And what a weird position to be in too, right?
Where the government is pushing the invention of fruit fly sized drones, and sensors the size of grains of dust, and all these things.
And then all we can do is essentially petition that they will only use these technologies against helpless peasants in third world countries, in their horrible wars overseas, and not use them against us.
Because otherwise, they will.
If there's not a law that says that the Pentagon is forbidden from giving these kinds of sensors out to local police departments, then they will.
And this kind of stuff, like we're seeing with the stingrays intercepting our phone calls.
We've got our local sheriff's department acting like the NSA in the neighborhood, because that's where they got the technology.
The more you talk, the more anxious I get.
Yes, all this is true.
My concern, my research is on the implications of all this for the future battlefield.
And when I look at that, that's where I get really anxious.
Because, of course, it's not just the U.S., it's also Russia and China, Israel and South Korea are all moving to develop and to sell, to export this technology.
So it's going to become a worldwide phenomenon in the not-too-distant future.
Unless we move quickly to impose some kind of limits, right now there doesn't seem to be much standing in the way of this.
Yeah.
Hey guys, here's how to help support this show.
First of all, buy my book, Fool's Errand, Time to End the War in Afghanistan.
Everybody likes it.
It's got great reviews.
Read the paperback, the Kindle, or the audiobook.
And the EPUB is available at Barnes & Noble and everywhere else online as well, Fool's Errand, Time to End the War in Afghanistan.
And also, I take donations.
If you go to scotthorton.org slash donate, you'll see the kickbacks.
You can get a lifetime subscription to Listen and Think audiobooks, or a silver commodity disc for any donation of $100.
And you get a signed copy of Fool's Errand for $50, so that ain't too bad.
And anyone who donates by way of PayPal or patreon.com slash scotthortonshow, $5 or more per month, will get access to the private subreddit, the new Reddit group there, at r slash scotthortonshow.
And all new signer-uppers to Patreon also get two free audiobooks as well.
And yes, I take every kind of cryptocurrency, most especially Horizon, but also Bitcoin and the rest of them too.
So check all that out at scotthorton.org slash donate and patreon.com slash scotthortonshow.
Thanks.
Oh yeah, and don't forget to shop amazon.com by way of my link at the bottom of my page at scotthorton.org.
You'll see it there.
Now let me ask you something else here about the subject matter of Russia and China, or the enemies.
Supposedly we have to fight.
I mean, the thing is about Earth, I guess the problem for the military industrial complex is, they've run out of countries and continents where they can find bad guys to fight.
China's our No. 2 trading partner after Canada, or tied for first place with Canada.
The Soviet Union ceased to exist back in 1991.
We don't really have that much to fight about, but they're focusing on the old Cold War enemies there.
And yet, when I was a kid, and I was just a kid at the time, but in the Reagan years, when it was brinksmanship and the height of the Cold War right before it all ended, it seemed like all the talk, the idea was universally understood, I think, that if America and the Soviet Union ever went to war, that it would be a full-scale H-bomb, strategic nuclear war, that every major city in both countries would be completely obliterated off the face of the Earth, and it would be the war to end all wars for real this time.
And yet now, there seems to be all this talk, and just kind of conceptually, when they're talking about these weapon systems and what have you, just like in your article here, this hyper-war, it sounds like they really think that they could fight a conventional war against Russia and China that would not go nuclear, and that's what they're really planning on.
They're acting like H-bombs just aren't in the picture here, and that if we fought a big conventional war with Russia and or China, here's how we would do it.
The battlefield would be Latvia or whatever, I guess they imagine, and it'll be fine, they think.
And so I wonder if that seems really different to you as well, that there's kind of this idea that we could do this without losing D.C. and Houston.
I don't know what goes through the minds of the people who are planning these war games, these war scenarios, so I can't speak for them.
The impression I have is that the current military leadership, the people who are now the generals and admirals, have spent the past 15 or 20 years fighting losing wars, senseless, pointless wars, as I think they see them, in Afghanistan, in Iraq and Syria, that have gained the U.S. no advantage whatsoever, have drained and depleted their forces, and have been nightmares for them, while Russia and China have not had to do that, and have been able to invest comparable sums in modernizing their own forces.
So now they think, we have to catch up with Russia and China in this conventional warfighting capacity, and move beyond Iraq and Syria and Afghanistan, and be able to fight Russia and China on the conventional battlefield.
Do they think that they can keep it a conventional war?
I really don't know.
I think they do.
I think they believe in their warfighting skills.
They see themselves as warriors, as people who are trained and equipped, and I mean no disrespect, to fight and win these high-tech wars of the future.
I think they're totally wrong about their ability to contain it at the non-nuclear level, because whoever faces defeat in one of these high-tech, all-or-nothing, zero-sum conventional wars is not going to say, oh, I give up.
They're going to escalate.
And at that point, we're talking about nuclear weapons.
So I think there's a fallacy in all of this.
That's what scares the crap out of me.
Well, it seems too like the people who are interested in nuclear warfare, game theory, and this kind of thing are a bunch of insane lunatics too.
So they come up with these policies like, you know what we'll do?
We'll escalate to de-escalate, which means we'll set off some nukes in order to warn the other guys that they better back down now or else we'll set off even more.
When no, I mean obviously the reaction is going to be for the other side to use nukes too.
Escalate to escalate to escalate.
There's no de-escalation once you're setting off H-bombs.
Yeah, no, you can't.
Once the decision has been made, all hell breaks loose, and there's no control whatsoever.
Only geniuses could be that stupid, right?
Only these people who are so good at math, they'll just blow you away.
You could never understand what they're even talking about, but then only they could be so wrong, lost that deep in their navel-gazing that they just kind of ignore the plain common sense realities of the world we live in that cancel out all their stupid algorithms.
Yeah, well, but this is what, but this speaks, however, to what I was saying.
It speaks to the notion that you can fight a conventional war to win, to defeat the other side, and the other side will accept defeat, you know, will just do so without raising the stakes.
And I can't imagine that that's possible.
If the U.S. attacks Russia or China with conventional weapons and destroys their military, I mean the core, the hard core of their military, destroys all their ships, all their planes, all their radars, through our superior conventional weaponry, should we assume that they'll just wave the white flag and say, we surrender, do to us what you wish?
I think that's where the fallacy lies.
Those countries, and the same would be true of us the other way around.
One side or the other will say, we're not going to bow our knees, we're going to escalate.
So as long as you have the potential for escalation, there's always the likelihood that that will be the outcome.
Yeah, I mean, and that's definitely the way to think about it, right?
Yes.
What would American politicians do in the position they think they can put Russia in here?
Exactly.
They would use H-bombs.
Yeah, of course.
Yeah.
And, you know, so here's the thing, too, is, people don't like mentioning this, so I think it's worth bringing up extra, that this is, the whole problem with Russia is all America's fault.
George H.W. Bush promised not to expand NATO, and then Bill Clinton broke that promise, and then George Bush's son doubled the breaking of that promise, Obama tripled it, and even Trump now has put Macedonia and Montenegro into NATO as well, but they've expanded all the way up to the Baltic states, right on Russia's border, what, 1,500 miles east of where the line used to be, halfway across Germany.
And they've drawn this red line.
It'd be like if the Russians had Warsaw Pact, you know, Russian and other Soviet troops stationed in Mexico on our border.
That would be a pretty dangerous tripwire.
It would be a really bad idea for them to do that, wouldn't it, everybody?
Right?
And that's exactly the position that we put the Russians in.
And then just like George Kennan predicted back in 1998, he told Thomas Friedman, he goes, I'll tell you exactly what's going to happen here.
We're going to expand NATO like this, and everybody's saying that it's fine, and that the Russians won't mind, because this isn't about threatening Russia, it's totally cool.
When the Russians react, those same people are going to say, see, that's why we have to do this, to contain Russian aggression, when they're the aggressors.
And just because we're from here doesn't mean that's not true.
Yeah.
Well, I think from the Russian perspective, they see this pattern.
They feel they've been betrayed and lied to by American leaders, as you say, going back to the first Bush.
And so they think they're the victims of Western aggression, as it were, or NATO expansionism, just as you say.
Now, I don't think that excuses, by any means, the aggressive behavior on Putin's part.
I think he's taken advantage of the legitimate anger that ordinary Russians feel, they feel they've been taken advantage of by the West.
He's exploited that legitimate feeling to perpetuate his own crony regime by whipping up nationalist sentiment and using that to perpetuate his regime and engaging in bad behavior of his own.
So that puts all of us in a worse situation.
Right.
Yeah, no, I totally agree with that.
I mean, if you look at what Russia has done in eastern Ukraine and in Syria, obviously, especially in Syria, a lot of innocent people have been killed.
But even in both of those cases, those are direct reactions to American policy, where our government had overthrown the government of Ukraine twice in 10 years.
And then the new coup government in 2014 was threatening to kick the Russians out of the Sevastopol Naval Base on Crimea, where they've had a base since we were under the Articles of Confederation here, you know, before Alabama was a state in the Union.
And so, you know, we really put them in a corner and up against the wall.
And basically, the Obama government did force them to intervene in Crimea and then in the east in the way that they did.
I'm not excusing all their behavior, again, of what they've done.
Innocent people have died and it's been horrible.
But it's all only been, you know, the Russians had no need to intervene in Syria until American support for al-Qaeda was getting so bad that the regime there was really threatened and could actually fall.
And right in Russia's neighborhood, I mean, that threat's still far away from here, but that is a huge deal to the Russians.
It was only in the end of 2015, after four years of U.S. support for the terrorists in Syria, that the Russians finally came.
And again, they've killed tens of thousands of people with their bombing there, just like America did in Iraq and Syria as well.
But, yeah, my interpretation of events would be different. I think that Putin is playing a geopolitical game like other imperial powers.
And Syria was a ripe area for Russia to reassert itself as a Middle Eastern power.
After it had been pushed out, you know, at the end of the Cold War era, Russia, then the Soviet Union, had been a major international actor on the world stage.
And they lost that with the end of the Cold War.
One of their only foreign bases was their naval base in Syria there.
Yeah, I think that's for Russia.
I don't disagree with you.
I just am saying that still it was the Americans who created the entire crisis there for them to take advantage of.
I don't mind hearing you complain about Russia.
I just don't want to hear the policymakers complaining about Russia.
You know what I mean?
Yeah.
Agreed.
OK, let's agree right there.
Because, yeah, no, I'm no I'm no apologist for that regime in any sense other than just there's four fingers pointing back at our side first every time we're accusing them.
Yeah, my equal worry, if not big worry, is in the South China Sea, because that's where I think World War Three could break out.
And here, I think once again, the blame can be equally distributed.
I think the U.S., you know, believes that it is entitled to remain the imperial power in Asia and in the Western Pacific, as it was for the past century. And China, I think, legitimately says that era has come to an end.
And China is the rising power in Asia.
And that creates an irreconcilable clash.
And it's incumbent on leaders on both sides to say we have to step back from the brink here and figure out how to manage this world-historical event of a change in the power balance in the region without creating, you know, global annihilation.
This is a world-historical reality.
You can't turn the clock back and say China doesn't exist.
But China is also behaving in an atrocious way, I believe, by building up these islands in the South China Sea and militarizing them.
And the U.S. is behaving in an atrocious way by trying to provoke the Chinese by sending ships in harm's way next to those islands.
And sooner or later, somebody is going to get killed.
And that could set off an escalatory spiral leading to World War Three.
This worries me as much, if not more, than the U.S.-Russian conflict in the Baltic states.
So wherever you look, there are these geopolitical pressures that are pushing us to the brink of war.
Yeah.
Well, you know what?
Another major pressure involved in all of this is the corruption of the military-industrial complex.
So not just the military-industrial complex, but the whole money-first, you know, M-16s-that-fire-when-you-pull-the-trigger-second kind of attitude.
So that, you know, when the Russians are developing their new fighter jets, they're really working hard to make the best fighter jet that they can for the price, like you would expect.
But in America, it's all just about stealing as much as possible.
So the F-35 is a complete piece of crap.
And so, you know, we have 1970s model F-16s that could blow an F-35 out of the sky.
None of its equipment works.
We don't have to go down the litany of everything that doesn't work about the F-35.
But it's the same thing with our Navy ships, too.
These littoral combat ships are basically worthless.
I would assume that all the high-tech, all the great high-tech stuff on most of those battleships and carriers doesn't really quite work right the way it's supposed to.
And if we get into even a little skirmish with the Chinese on the high seas, you know, a ship or two gets sunk.
Or imagine if an aircraft carrier had a ballistic missile fall on its deck and cut it in half and sink to the bottom of the sea.
You're talking thousands killed there.
And that's the kind of thing no politician at home could survive forgiving.
And they would rather start a war than resign.
Yeah, this is what wakes me up in the middle of the night is that scenario that you just described.
And this is hardly unlikely.
Something like that could happen tomorrow.
Maybe not quite tomorrow, but we're moving towards a world like that where all these powers feel an obligation to flex their muscles all the time to show they're not going to back down.
We're not going to let the other side walk all over us.
That's probably how they talk to each other.
And one side or the other is going to push too far.
Well, you know, I guess sort of back to my old question about, you know, do they really think that they can fight some kind of conventional war?
I'm not trying to ask you to just read all the minds of the wonks.
But I just mean, in the same sense that you're exposed to the media the same way I am and whatever events you go to or what have you, I don't know.
But from talking to the people that you talk to, the experts, it seems to me there's this narrative about the rise of China and what America is going to do about it.
It seems like people are so married to this idea.
I hear very little talk of, yeah, who's scared of China?
The richer China gets, the better.
We ought to say, if we have any problems with the Chinese, boy, we better find more things to cooperate with them on in order to make things better.
Same kind of thing with Russia, where, look, we can't fight these countries.
Everybody's got H-bombs, so no fighting.
So the other option is to always do our best to work together.
So, yes, here's a country of a billion plus people that abandoned communism in the 70s, essentially, for more of a fascist, pseudo-capitalist type system.
So there's huge amounts of new wealth being generated there, rapidly growing economy, etc., like that.
And so, at some point, it would be equal in economic output to the United States.
But then, so what?
That's just more Chinamen to trade with, more wealth for everyone on Earth to benefit from.
And it seems like I don't hear that narrative at all anywhere.
You turn on NPR or read Commentary magazine or whatever you got, and the idea is, oh no, China, China, China.
How are we going to prevent their rise?
How are we going to contain them?
And everything begins kind of with the presumption that America has everything to lose as China comes up from Maoism.
That's the really worrying part to me, is it doesn't seem like any reasonable people are part of the debate at all.
Yeah, you're right.
And I'm in Washington, D.C. right now, and that is the drumbeat in Washington is exactly what you're saying.
How can we get ahead of China?
How much more money can we spend?
How many more billions can we spend to ensure that we have permanent supremacy against China?
And that's what Inhofe, Senator Inhofe, the new chair of the Senate Armed Services Committee, that's all he talks about.
And he has plenty of other friends who share that point of view.
Let's throw money at the military-industrial complex so we can be certain that we'll never fall behind the Chinese.
The Chinese will never catch up with us.
That's the point.
And this is endless until we're utterly bankrupt, because that's the only way it could go.
All right.
Well, I'll let you go at this point.
I've kept you over time here, man, but I sure appreciate it, Michael.
It's always great to talk to you.
Always, yes.
And best wishes to all of your listeners.
Thanks.
Appreciate that.
That's the great Michael T. Klare, everybody.
He's at TomDispatch.com with this one.
"The Coming of Hyperwar" is one title.
You know how Tom does.
He's always got two different titles.
I like this one: "Alexa, Launch Our Nukes! Artificial Intelligence and the Future of War."
And also check out his book, The Race for What's Left.
Hey, guys.
This is Burton Carr from the Friends Against Government podcast here.
And you've probably heard Scott talk about the crowdsourcing effort on Reddit about his upcoming book.
All you've got to do to get in is to donate $5 a month to Scott's Patreon and request to join the private Reddit group.
In Scott's Reddit group, you will find a pinned post outlining the details for how you can help find source material for Scott's upcoming book by listening to archived interviews and taking a few notes.
If you have any questions, feel free to reach out to either myself, @Birdarchist, or Carr, @CarrCampIt, on Twitter, and we'll be happy to help.
We look forward to seeing you in there.
All right, y'all.
Thanks.
You can find me at LibertarianInstitute.org, at ScottHorton.org, AntiWar.com, and Reddit.com slash ScottHortonShow.
Oh, yeah, and read my book, Fool's Errand: Time to End the War in Afghanistan, at foolserrand.us.

Listen to The Scott Horton Show