In a scene reminiscent of a computer war game, three battle-fatigued soldiers, dressed in white snow camouflage, emerge from a war-torn alley with their hands raised above their heads.
They crouch down, following the orders being blasted at them, fear and shock etched across their faces as they stare down the barrel of a machine gun mounted on a so-called ground robot.
This footage, released in January by Ukrainian defence company DevDroid, is said to show the moment Russian soldiers were captured by a Ukrainian robot using artificial intelligence.
In April, Ukrainian President Volodymyr Zelenskyy said that, for the “first time in the history of this war, an enemy position was taken exclusively by unmanned platforms – ground systems and drones”.
“Ground robotic systems have already carried out more than 22,000 missions on the front in just three months,” he wrote in a post on X, alongside images of green machines with tank tracks and weapons mounted on top.
But for analysts who have studied the intersection of artificial intelligence (AI) and warfare, the footage reflects an expected evolution – one that will unfold far beyond the front lines in Ukraine as the world wrestles with the ethical implications of controlling such technology.
UAVs, naval drones and robot dogs
For years, militaries have used ground robots primarily for bomb disposal and reconnaissance.
But in Ukraine, their role has expanded rapidly, with some brigades reporting that up to 70 percent of front-line supplies are now delivered by robotic systems rather than soldiers.
These machines transport ammunition, food and medical supplies, and evacuate wounded troops from dangerous positions.
Yet the sight of robotic systems moving across the battlefield is part of a much broader shift in warfare – one that has been building for decades.
The modern debate about AI in warfare was largely driven by the rise of US unmanned aerial vehicle (UAV) operations in the early 2000s.
In 2002, the MQ-1 Predator drone was used by the US to carry out one of the first targeted air strikes in Afghanistan, marking a turning point in how wars could be fought remotely.
Its use expanded rapidly, peaking between the late 2000s and mid-2010s, particularly in Pakistan, Yemen and Somalia.
As AI has advanced, the debate has moved beyond remote-control operations.
The focus shifted towards systems which can help identify targets, prioritise strikes and guide battlefield decisions, raising deeper questions about how much autonomy should be delegated to machines.
Analysts say the question of autonomy must remain central, rather than being overshadowed by rapid technological developments, however striking the sight of increasingly anthropomorphic machines on the battlefield may be.
“These technologies are here to stay,” Toby Walsh, an AI expert at the University of New South Wales, told Al Jazeera. He described AI-driven military operations as “the third revolution of warfare”.
The transformation is also spreading beyond land targets.
Naval drones packed with explosives have already reshaped battles in the Black Sea, while autonomous underwater systems are being developed for surveillance, mine clearance and sabotage missions by militaries worldwide.
Robotic dogs, meanwhile, are already being tested for surveillance, reconnaissance and bomb-disposal missions, with some experimental versions even fitted with weapons.
Human involvement
In recent years, the emergence of fully autonomous drones or so-called “killer robots” has triggered a fierce debate after a United Nations report suggested that Turkish-made Kargu-2 loitering munition drones, operating in fully autonomous mode, had identified and attacked fighters in Libya in 2020.
The incident prompted intense discussions among experts, activists and diplomats worldwide, as they grappled with the moral and ethical implications of a machine making – and executing – the decision to take a human life.
However, more of the regulatory debate needs to focus on the use of semi-autonomous weapon systems, "where humans are still so-called in the loop", Anna Nadibaidze, a postdoctoral researcher in international politics at the Centre for War Studies, University of Southern Denmark, told Al Jazeera.
A major concern, she said, is whether “enough time and space” is being given to the “exercise of human judgement that’s necessary in the context of warfare”.
The extent of human involvement is often something observers have to take militaries at their word on – a difficult task when their actions leave trust in short supply, said Walsh.
In the case of ground robotics in Ukraine, a human operator has, so far, remained in control, directing machines that can still be halted by obstacles such as uneven terrain.
However, when AI is involved in the decision-making process, as is the case in Israel’s attacks on Gaza and the wider region, the scale of attacks which have resulted in “huge collateral damage and civilian casualties for a small number of military targets” challenges the rules of international humanitarian law and, in particular, the idea of proportionality, Walsh said.
The issue, Nadibaidze said, is that it is hard to enforce rules on the use of AI in warfare as it is essentially "a matter of each military to decide what they consider to be a sufficient role for the human, and there isn't enough international debate on that".
An April report by the Stockholm International Peace Research Institute warned that the AI supply chain is also fragmented, global and heavily dependent on civilian technologies, further complicating efforts to govern or control military uses of AI.
The United States Department of Defense is consistently incorporating privately developed software systems into its war apparatus.
In the middle of last year, the Defense Department awarded OpenAI a $200m contract to implement generative AI into the US military, alongside $200m contracts for xAI and Anthropic.
“If we’re not careful, warfare will be much more terrible, much more deadly, a much quicker, much faster thing that humans can no longer actually really be participants in, because humans won’t have the speed, won’t have the accuracy or the ability to respond,” Walsh warned.
Ukraine as a testing ground
Technology and AI are not inherently harmful, experts say – it is how they are used that matters.
In Ukraine, ground robotic systems have also been used to rescue civilians and provide logistical support in heavily mined and treacherous conditions.
Yet what is unfolding on the front line is, in many ways, a testing ground, and the international community will need to look ahead to how these technologies might be applied and regulated in future conflicts.
There is also room for cautious optimism. Despite the “moral failure” over Israel’s actions in Gaza, Walsh said, there is a recognition in the international community that these issues must be addressed, including a series of UN meetings focused on regulating Lethal Autonomous Weapons Systems.
The United Nations Institute for Disarmament Research (UNIDIR), an autonomous body within the UN which conducts independent research on disarmament and international security, is set to meet in June to examine the implications of AI for international peace and security.
It is not the first time new weapons technologies have threatened to upend the rules-based order, said Walsh, pointing to chemical weapons as an example. While imperfect, international agreements were eventually put in place to bring those under some level of control.
“There are a lot of actors based in the Global South that do want regulation, so there might be regional initiatives forming,” said Nadibaidze, adding that even if such efforts do not initially include major powers or leading tech developers, they could still help to shape emerging norms.