How AI Is Used in Wars

AI in warfare is transforming battlefields through data, prediction, and precision.

Amina BELKACEM
Tags: AI, CyberAttack, Military Technology, Palestine, Computer Vision

Introduction

Picture this: you are on a boat, looking out of your window, when you see a drone. That's not necessarily unusual nowadays, but this drone is not here for the view; it wants to kill you! This is exactly what happened a few days ago at a port in Tunisia, and more and more, this is what modern warfare looks like. Cyberattacks, artificial intelligence and robots are all playing a major role on the battlefield. Technology is changing the way we fight, but what are the dangers of that? And could there be any upsides?

What’s AI in Warfare?

There is no universally agreed definition of AI in warfare. Technologies that were once considered “artificial intelligence”, like terrain-matching navigation or missiles that can identify tanks, are now viewed as workaday software.
Its military uses begin with the less glamorous stuff: maintenance, logistics, personnel and the other tasks needed to keep armies staffed, fed and fuelled. AI can predict when warplanes need maintenance, helping to avoid breakdowns, prevent unnecessary part replacements and ensure supplies arrive on time. It is also being used to analyse personnel files and help score soldiers for promotion.
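To make the maintenance example concrete, here is a minimal sketch in Python with scikit-learn of how such a prediction might work. Everything in it (the sensor features, the labels, the 30-day failure window, the 0.5 risk threshold) is invented for illustration; it is a generic version of the technique, not the software any army actually uses.

```python
# Minimal, hypothetical predictive-maintenance sketch.
# Assumed inputs: invented sensor features (engine temperature, vibration,
# hours since overhaul) and a toy "failed within 30 days" label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Simulated historical records: [engine_temp_C, vibration_g, hours_since_overhaul]
X = rng.normal(loc=[600.0, 0.5, 400.0], scale=[40.0, 0.2, 150.0], size=(500, 3))
# Toy label: components with high vibration and many flight hours fail sooner.
y = ((X[:, 1] > 0.6) & (X[:, 2] > 450.0)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score one aircraft's current readings and flag it for early inspection
# only if the estimated failure risk crosses a chosen threshold.
current = np.array([[640.0, 0.8, 520.0]])
risk = model.predict_proba(current)[0, 1]
print(f"Estimated 30-day failure risk: {risk:.0%}")
if risk > 0.5:
    print("Schedule an inspection before the next planned maintenance window.")
```

The point of the sketch is the workflow (historical records in, a risk score out, a threshold that triggers action), which is the same pattern behind the logistics and personnel uses mentioned above.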

An AI-enabled army now fuses all kinds of data from satellites, drones, radar and even the weather, allowing the software to propose potential military targets. A lot of the work that used to be done manually at headquarters by dozens of mid-level officers with pens and paper can now be done by software, in a tenth of the time. That's a real shift in the nature of warfare.
Drones can now navigate and decide who to strike, when and where, even if the connection to the pilot is lost, running on small, cheap chips. The technology is still immature: its targeting algorithms face challenges similar to those of self-driving cars, such as cluttered environments and obscured objects, plus some unique to the battlefield, such as smoke and decoys. But it is improving fast.

Drones alone can't win battles. But combine them with “digitised command and control systems” (software and cloud platforms that fuse data, create live battlefield maps and recommend actions) and with “new-era meshed networks of civilian and military sensors” (satellites, radars and even smartphones, all linked so information flows instantly), and the result is transformative. Soldiers on the front lines can see what's happening and act on real-time information instead of waiting for reports from distant headquarters.

How Israel Automated Genocide

In his book The Human-Machine Team, the head of an Israeli military-intelligence unit argues that AI could revolutionize warfare by rapidly processing massive amounts of data to generate “thousands of new targets every day” in a war. This, he explains, would solve the “human bottleneck” that slows down both locating new targets and deciding whether to strike them. The idea ties directly to the concept of the kill chain: the military process of finding, identifying, deciding on, and striking a target.

In fact, that machine already exists. The Israeli army has developed and deployed an AI-based system called “Lavender”, an ironically gentle name for a tool designed to suggest whom to kill faster than humanly possible. It marks all suspected Hamas operatives, including low-ranking ones, as potential bombing targets.

Additional automated systems, including one called “Where's Daddy?”, were used specifically to track these individuals and trigger bombings once they entered their family homes.

The Israeli army systematically targeted these individuals while they were at home, often at night when entire families were present, rather than during military activity.

The IDF bombed them in their homes without hesitation, as a first option. “It's much easier to bomb a family's home. The system is built to look for them in these situations,” an intelligence officer explained.

Thousands of Palestinians, most of them women and children who were not involved in the fighting, were killed in Israeli airstrikes as a direct consequence of the AI program's decisions.

In the Israeli army, the term “human target” once referred only to a senior Hamas operative. After October 7, however, the army adopted a dramatically different approach: it decided to designate all Hamas operatives as human targets, regardless of their rank or military importance. And that changed everything.

The new policy posed a technical problem for Israeli intelligence. In the past, approving the assassination of a single human target required a lengthy and complex “incrimination” process: verifying that the individual really was a senior Hamas member, finding out where he lived and how to reach him, and finally knowing when he was home in real time. When the target list was limited to a few dozen senior operatives, intelligence officers could handle this process manually.

However, once the list expanded to include tens of thousands of lower-ranking operatives, the Israeli army turned to automated software and artificial intelligence to handle the task. As a result, the role of human personnel in “incriminating” Palestinians as military operatives was pushed aside, and AI did most of the work instead. Approval to automatically adopt Lavender's kill lists was granted roughly two weeks after October 7, after intelligence officers “manually” checked the accuracy of a random sample of several hundred targets selected by the AI system and found about 90% accuracy in identifying individuals' affiliation with Hamas. From that point on, if Lavender flagged someone as a Hamas operative, officers were expected to treat it as an order, without independently checking why the system made that decision or examining the raw intelligence data on which it was based.

"At 5 a.m., [the air force] would come and bomb all the houses that we had marked," the intelligence officer said. "We took out thousands of people. We didn’t go through them one by one — we put everything into automated systems, and as soon as one of [the marked individuals] was at home, he immediately became a target. We bombed him and his house."

The Lavender software processes data gathered on nearly all 2.3 million residents of the Gaza Strip through a system of mass surveillance to produce targets for its bombing campaign. That data includes biometric records, call records, location data, and social network connections such as WhatsApp contacts and Facebook friends. Lavender then assigns each person a score from 1 to 100 indicating how likely they are to be a militant. The system was trained on information about known Hamas operatives, learning to identify their common features; when it detects those same features in other individuals, it assigns them higher scores, and they automatically become potential targets for assassination.

To save time and enable the mass production of human targets without procedural delays, officers who used Lavender were not required to independently review the AI system's assessments.

When the Algorithm Gets It Wrong

Lavender sometimes misidentified individuals whose patterns of behavior resembled those of known Hamas operatives, including police and civil defense workers, militants' relatives, people with the same name or nickname as an operative, and Gazans using a phone that once belonged to a Hamas member.

Similar issues arose with the system’s ability to correctly link a phone number to a targeted individual. In war, Palestinians change phones all the time. They lose contact with their families, lend their phone to a friend or spouse, or misplace it entirely. There is no way to rely 100 percent on an automatic system to know which number belongs to whom. The army was aware that the limited human oversight would not catch these errors.

There was no ‘zero-error’ policy; mistakes were accepted as part of the process. Because of the scale, the rule was that even if you couldn't be completely sure the system was right, you knew that statistically it was fine, so you went for it.

When a Palestinian’s rating rises above a set threshold, they are added to the kill list. Officers who used Lavender admitted that the threshold was often lowered whenever Israel ran out of targets, effectively sweeping more civilians into the system. This process reduces human beings to mere data points, making it easier to dehumanize them and ignore the life-or-death consequences of these decisions.

Despite these flaws, officers said they typically spent no more than 20 seconds reviewing Lavender's recommendations, often just confirming whether the person's voice sounded masculine. If it did, the name was uploaded to the kill list and fed into a tracking system known as “Where's Daddy?”. That system then monitored each individual's location, waiting for the moment they were at home; that is when the killing began.

Kill Chain

Israel’s automation of genocide in Gaza represents one of the most extreme abuses of technology in modern warfare.

Lavender and Where’s Daddy? are just two components of a vast network of weaponized surveillance and control that dominates every aspect of Palestinian lives.

And what happens in Palestine doesn’t stay in Palestine.

Israeli cyber tools are exported to governments worldwide, many with poor human rights records. Israeli-made spyware has been discovered on the devices of journalists, human rights defenders, and politicians across India, Saudi Arabia, Canada, Mexico, El Salvador, the Dominican Republic, and many other countries.

Conclusion

These are just a few examples of real-world applications of AI being integrated into war, and there will be more as the technology evolves. But wait, shouldn't we slow down and ask: yes, technology allows us to do this, but is it ethical, legal and safe? Could it get out of control if we give computers more information and more decision-making power? Will AI make war easier and faster, and by taking humans out of the process, will it make war less humane? Critics argue that a human must always retain control over the decision to use force: do I kill this person or not? But if a commander is simply pressing a red button to approve whatever the computer suggests, that is not real human control. Yet the most consequential step may not be the final approval to kill, but the initial identification of a target.

Like any technology, AI’s impact depends on how well we understand and use it. The key is not just whether machines are given autonomy, but whether humans in charge truly understand the software, what it’s good at, what it’s not, and when to trust or question its outputs. True human control means scrutinizing the machine’s suggestions, knowing when it might fail, and resisting blind automation.

In war, whoever can gather the most information and make the fastest, most accurate decisions is usually the victor. Humans tend to assume that decisions made by a machine are correct, which makes us easily convinced by what AI says. We all feel this when we talk to ChatGPT: a bias to believe it because it sounds right, because the machine spits its answers out in digestible, authoritative natural language. There will be strong motivations to let machines do more of the work and deal with the consequences later, and sometimes that will be too late. Machines will make mistakes, including targeting civilian shelters, but guess what? Humans do this too.

The question is whether machines will reduce how often those things happen. The reality is that there is no turning back. No one is going to stop integrating AI into warfare or wait for ethicists and lawyers to catch up. Geopolitics will push this forward.

All we can hope for is wisdom and foresight from those deciding how these systems are used, so that we approach them humanely and responsibly.

I would like to conclude with a thought shared by a dear friend:

" Resistance is not only about weapons. It is about building counter-systems, alternative narratives and safe spaces that safeguard humanity, dignity and awareness.

Resistance also means:
Understanding the tools of oppression, not to help create them, but to build defenses against them, or to reject their very existence when they are unjust.

The road ahead is long… but it is the only way forward.
A just cause is not defended by slogans alone, but by knowledge, action, and sincerity.
The journey is long… it will take a conscious plan, patience, and determination to build parallel systems and narratives that redefine dignity, justice, and truth."

-Selma Bentaiba

Free Palestine!

Resources:

How AI is changing Warfare
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza
Artificial Genocidal Intelligence: how Israel is automating human rights abuses and war crimes
Killer AI
Fact Sheet: the Israeli Cyber Industry