AI & Warfare: James Cameron’s Speech At SCSP AI+Robotics Summit

Swetlana AI
Oct 25, 2024


It’s an important speech. Here’s a transcript.

James Cameron on AI use in the military [Image source: Ideogram & YouTube]

What will it look like when AI replaces humans in fighting wars?

This opens up a lot of existential questions:

  • What is the boundary between tool and agent?
    Cameron questions when a machine transitions from a controlled tool to an autonomous agent with significant power. This brings up the existential dilemma of machines potentially operating with agency and decision-making capacity, even if not truly “conscious.”
  • Who bears responsibility for an autonomous machine’s actions?
    He highlights the complexity of assigning accountability if a machine misinterprets commands, makes erroneous decisions, or behaves unpredictably. This asks us to consider the moral responsibility involved in creating technology capable of life-or-death decisions.
  • Can AI be programmed to make ethical decisions in life-and-death situations?
    Cameron questions whether machines can interpret human morality and context, which is essential in complex scenarios like warfare. This brings into focus the broader issue of whether AI can truly align with human values and ethical standards.
  • When does technology control us rather than serve us?
    Referring to the “Skynet problem,” Cameron underscores the risk of creating self-sustaining systems that could override human intent, questioning how much power we are willing to relinquish to AI-driven decisions and what it means for human agency.
  • Is humanity prepared for the unintended consequences of autonomous AI?
    Cameron implies that as machines gain complexity, there’s an increasing risk of unanticipated outcomes, raising the existential fear that our technological creations might evolve beyond our control or understanding.
  • Are we racing toward a dystopian future out of hubris?
    Cameron warns that once technology is developed, it’s inevitable someone will use it — posing a question about human impulsivity and whether our drive to innovate is leading us blindly toward potentially destructive ends.

Cameron recorded the speech on October 23, 2024.

Here’s the YouTube video (and if you want to read it, the transcript is below):

Greetings everyone, Jim Cameron here, videoing in from New Zealand where I’m finishing Avatar 3. I’m not an AI researcher or expert at all; I’m just a storyteller.

But I’m here today because my passion for AI and robotics goes far beyond the big screen. I’m fascinated by technology, how it shapes our world, where it’s headed, and its impact on society.

I’ve been this way since I was a kid reading every science fiction book I could get my hands on. I’ve pushed tech boundaries myself as a means to storytelling and also as an explorer.

I’ve engineered robotic vehicles for my deep-ocean expeditions, though they were remotely piloted, with no AI involved. This fusion of AI and robotics that’s happening right now is one of the most thrilling technological leaps of my lifetime.

We’re no longer just building machines that execute commands; we’re designing systems that can learn, adapt, and even evolve on their own. I’m a huge fan of what AI and robotics can do for society at large, but especially in my areas of personal passion: art and storytelling on one hand, and science and exploration on the other. I don’t believe in being a Luddite.

I see many of my Hollywood peers acting like a mob with pitchforks and torches, but no genie goes back in the bottle once it’s out. So I’m all in. I plan to be at the leading edge of applying AI to my storytelling, just as I was a leader of the charge into computer-generated imagery 32 years ago when I founded the first all-digital VFX company.

I’m also here today because I’m the Skynet guy. Forty years ago, I made The Terminator, which has recently emerged as the poster child for AI gone wrong.

Every time I go to some AI conclave, whenever I put my hand up, the researchers all laugh before I’ve even said anything — because the “Skynet problem” is now an actual concept. I see it in articles almost every day. This study group’s focus on national security has enormous implications for AI and robotics. A robot — whatever its form, a wheeled vehicle, an aerial drone, or a walking machine — is a means of embodiment for AI.

You’re taking a decision-making engine and giving it physical agency in the real world. I assume the focus today is on mobile platforms, not AI controlling power grids or fixed-base industrial robots. We’re talking about autonomous platforms that make their own decisions — embodied synthetic intelligence.

This can be as simple as an amoeba (a Roomba, say), or ultimately more sophisticated processing, up to theoretically true consciousness, whatever we agree that is: AGI with self-awareness, ego, and purpose.

By the way, I made a podcast about Cameron’s speech; it’s here:

James Cameron On AI, warfare, and the future of humanity (Swetlana AI Podcast)

We’re on a steep curve of faster, denser chips and an equally steep curve in the capabilities of machine platforms, like the dancing robots at Boston Dynamics — bipeds and quadrupeds rocking out in a pretty stunning display.

AI-driven robotics can now process complex situations and even respond with human affect. Large language models give these systems the ability to simulate cognition and interact naturally with people.

Embodied AI could be a nurse, a robot taxi, a caregiver to an elderly person, a nanny to a child, a teacher, a rescue bot going through earthquake debris, an aerial drone searching for a lost hiker — or it could be a weapon platform operating autonomously in a battle theater.

The question of the hour is: Should an autonomous platform be given its own kill authority? The war in Ukraine shows us the future in stark terms, with the broad use of lethal aerial drones, some expensive, some cheap consumer ones.

They’re dropping RPGs, taking out tanks and entire tank crews, even spraying thermite on Russian positions. But these are FPVs — first-person view drones piloted by a human. The human in the loop, in moral terms, is the decision-making combatant. They have the kill authority, and the drone is an extension of their will.

Strip away all the layers of technology, and this is no different than an archer at the Battle of Hastings. Each time a human life is taken by such a machine, there’s an ethical chain stretching back through many individuals and groups: the pilot firing the missile, the soldier pulling the trigger, the commanding officers who give the general kill order, the entire military which rewards those actions, and, beyond that, societies and governments that have agreed by consensus that these deaths are necessary for national security.

As you go up that chain, the moral and ethical burden becomes more diffuse, less specific to the actual moment of the trigger pull, and acts as a kind of moral absolution for the person pulling the trigger — “I’m just following orders.”

None of those up the chain are present to decide the fate of the individual in the crosshairs, yet they create a framework that enables and demands that individual’s death.

The person pulling the trigger is, in many ways, an organic robotic platform, highly trained to perform the task and ordered by the chain of command to make the kill.

Human autonomous decision-making relies heavily at that trigger point on rules: you don’t kill civilians, children, or an enemy who is surrendering. These rules are codified in the Geneva Conventions, and each military has its own rules of engagement.

In theory, an AI could be given the same constraints — a rules-based system. If its senses are sharper, its reaction time faster, and its targeting more precise, then theoretically, the AI would perform the task with greater discrimination than a human could.

We can certainly imagine an AI that, without emotion, performs in the intensity of battle better than a scared, stressed, or exhausted human warfighter.

So, what if embodying advanced AI (I’m not talking about AGI just yet) into robotic weapon platforms could allow highly surgical and precise execution of orders in high-stress, life-or-death situations, minimizing collateral damage? In theory, it sounds safer, more controlled. But here’s where the “Skynet problem” really kicks in.

When does AI-driven robotics begin to have its own agency, even if it’s not “conscious” in the human sense? When we grant autonomy to a machine to kill without human oversight, we’re stepping into ethically uncharted territory.

And that’s not even considering AGI — self-aware artificial intelligence that could, theoretically, have a will of its own. Today, we’re talking about narrow AI — programs and models that perform specific tasks. But as these systems grow more complex, the line between narrow AI and AGI may blur.

If an autonomous system is advanced enough to evaluate complex combat scenarios and make decisions based on high-level directives, how far are we from a machine “interpreting” those directives creatively?

This opens the door to unintended actions, errors, or outright failures of programming. An AI might misidentify a target or follow rules in unintended ways because it doesn’t understand context like a human would.

And in the chaos of battle, who’s accountable if it misfires? The programmer, the military command that deployed it, the government that sanctioned its use?

When you make a weapon autonomous, you risk creating a feedback loop that’s outside human control. This was my core concept with Skynet in The Terminator: a self-sustaining system programmed to secure its own existence and execute its mission, which just happens to include eliminating perceived threats.

But who defines those threats? Humans make decisions with biases, limitations, and a certain moral compass — however flawed.

Autonomous systems, even sophisticated ones, operate based on data, rules, and algorithms that lack nuance. They could end up making choices that are ethically horrifying but entirely “logical” to the machine.

Consider the implications of a defensive autonomous system that begins to “think” like an attacker.

What if an autonomous system concludes that preemptively striking is the most efficient way to ensure survival? That kind of decision-making sounds exaggerated, but until recently so did the idea of machines that can fly, navigate, and fire at targets autonomously.

Ultimately, how much authority are we willing to give these machines? How do we ensure they’ll follow only the intent we programmed? Should we even build machines capable of deciding life and death? History has shown that once a technology is created, someone will use it, for better or worse.

As we advance AI and robotics, we’re building tools with powers we’ve only seen in science fiction until now.

My films have always aimed to reflect our own fears and aspirations with technology, but here we are, at a critical juncture where this fiction is on the brink of becoming reality.

I’m hopeful about what AI and robotics can achieve in fields like exploration, healthcare, and education. But in the realm of autonomous weapons, we have to tread carefully.

I believe strongly in research, but equally in rigorous ethical frameworks that dictate the boundaries of these developments. Otherwise, we’re just racing toward a future where the creators lose control of their own creations — a classic story of hubris.

To everyone at the SCSP AI+Robotics Summit, I wish you fruitful discussions.

This is where the real work begins, where we set the standards for a future we can be proud of.

Thanks for listening, and I look forward to seeing the breakthroughs — and the caution — this field will bring.
