ALGORITHMS OF WAR

Neural networks link satellites, drones and intelligence, radically changing the scale and pace of modern conflicts

01.04.2026

The introduction of artificial intelligence (AI) has triggered a transformation of the rules of warfare, a trend clearly evident in the Middle East conflict. Firstly, neural networks are being integrated into command and control systems at a rapid pace, influencing political and military decisions at a speed beyond human capability. Secondly, platforms similar to those used by the military are, as they say, 'going mainstream'. The lag of traditional media is becoming increasingly evident, not only in the speed of news publication but also in the depth of analysis. At the same time, military operations and human suffering risk being reduced to 'mere content' on a large scale and successfully monetised: participants vote on the next action, much as spectators once did in gladiatorial arenas, with geolocation and real-time cryptocurrency exchange rates integrated as additional features.

Thirdly, even as industry absorbs the lessons of the 'drone age', the focus is already shifting towards biorobots, which blur the line between machine and living organism and raise a number of ethical issues. Fourthly, the advantages offered by AI turn into vulnerabilities: data centres and undersea cables have become high-priority strategic military targets.

The conflict between the US, Israel and Iran has therefore created a new reality: the use of AI in warfare leads to a reassessment of capabilities, objectives and threats.

 

The use of AI in the military

The integration of artificial intelligence into military command and data-analysis systems is already under way in armed forces around the world. This process is inevitable. AI is used to process vast amounts of information: text, digital, video and satellite data.

A scandal erupted in the US at the end of February over the use of AI in the defence sector. The US Department of Defence added Anthropic, the developer of the Claude neural network, to a list of suppliers posing a risk to national security after the company refused to grant the Pentagon the right to unrestricted use of its products. The Pentagon has not officially acknowledged this, but it appears that Anthropic's Claude Sonnet model was used during a secret US special forces operation to capture Venezuelan President Nicolás Maduro, as well as in the Middle East during preparations for an air strike on Iran.

At issue is the Maven Smart System, a product of the tech giant Palantir that uses Anthropic's Claude. It can aggregate data from dozens of sources: satellite imagery, drone footage, transport routes, intercepted communications, signals intelligence and more. All of this is analysed at tremendous speed, with military targets identified and ranked by importance. At the same time, the scale and the time available for officers, who bear ultimate responsibility, to make final decisions are significantly altered.
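The fuse-and-rank step described above can be illustrated with a toy sketch. All field names, weights and figures here are hypothetical and have nothing to do with Maven's actual design; the point is only the general pattern of corroborating reports across sources and ordering candidates by a score.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    """One report about a candidate target from a single source (hypothetical schema)."""
    target_id: str
    source: str          # e.g. "satellite", "drone", "sigint"
    confidence: float    # 0..1, the source's own confidence in the detection
    value: float         # analyst-assigned military value of the target type

def rank_targets(sightings):
    """Fuse per-source reports and rank targets by a simple score:
    corroboration across (assumed independent) sources raises combined
    confidence, which is then weighted by target value. Illustrative only."""
    fused = {}
    for s in sightings:
        t = fused.setdefault(s.target_id, {"miss": 1.0, "value": s.value})
        # Independent sources: combined confidence = 1 - product of (1 - c_i)
        t["miss"] *= (1.0 - s.confidence)
        t["value"] = max(t["value"], s.value)
    scored = {tid: (1.0 - t["miss"]) * t["value"] for tid, t in fused.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

reports = [
    Sighting("radar-site-A", "satellite", 0.7, 9.0),
    Sighting("radar-site-A", "sigint", 0.6, 9.0),
    Sighting("truck-convoy-B", "drone", 0.9, 3.0),
]
ranking = rank_targets(reports)  # radar-site-A ranks first: two corroborating sources, high value
```

Even this toy version shows why speed changes the officer's role: the machine, not the human, decides which candidates are worth looking at first.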

The question is whether this pace affects the soundness of human decisions and their moral and ethical content. For example, when British scientists asked a neural network to simulate a military conflict during an experiment, they found the machine prepared to deploy weapons of mass destruction in situations where a human never would. In approximately 95% of scenarios the threshold for tactical nuclear use was crossed, whilst strategic nuclear threats arose in 76% of cases. The question is straightforward: if incorrectly configured models begin to play a significant role in crisis planning, will they push people towards escalation strategies?

Trump's directive to cease working with Anthropic's products has been disregarded by the US military, which continues to use its developments. The US Department of Defence is collaborating with OpenAI, and the capabilities of Google Gemini and Grok (the GenAI.mil platform) will be utilised. Moreover, Elon Musk's company has already agreed to the Pentagon's requirement to use AI for 'any lawful purpose', unrestricted by ethical norms.

In the run-up to, and since the start of, the war between the US and Israel against Iran on 28 February, the Chinese AI start-up MizarVision has been publishing geospatial data on the presence of US military forces in the Middle East almost in real time. MizarVision's artificial intelligence can automatically scan, identify and tag even the most complex types of military equipment. Armed forces must now try to keep their operations secret in the face of readily available commercial satellite imagery supplemented by sophisticated AI analysis.

 

AI tools for the coverage of military conflicts

Experts have expressed concerns that AI tools used to cover military conflict and marketed as 'democratising access to information' may in fact seriously distort perceptions of reality. In many ways, they merely create an illusion of control, turning war into content with elements of gambling and fake data, stripping it of context and accountability. Such online platforms often combine open data (satellite imagery, vessel tracking, shipping routes, power cuts, news) with AI analysis, as well as with chat rooms and links to prediction markets. One notable example is the dashboard of the venture capital firm Andreessen Horowitz, where users can place bets on events (such as who will become the next leader of a particular country, or where the next strike will be launched). The creators' objective is to circumvent 'slow-moving media' and purportedly provide users with 'direct access to the truth'. People primarily trust such platforms because the military and other professionals use AI capabilities in exactly the same way.

However, key differences are worth noting. First, within the military sphere, expert assessment and context are paramount. Second, the military can draw on confidential information that is not in the public domain. Third, AI can generate false satellite images and news reports; intelligence agencies and the media verify information through experts and classified sources, whereas public dashboards cannot.

News feeds also mix significant events with unrelated information, for example displaying cryptocurrency prices alongside a map of strikes. Finally, the link to betting markets (in which the creators of such panels often invest) transforms geopolitical crises into financial instruments. This is how platforms such as Kalshi and Polymarket operate: prediction markets where people trade contracts on the outcome of real-world events (politics, economics, weather, sport and so on), essentially placing a structured bet on whether an event will occur. In wartime, such 'auctions' appear extremely cynical. As a result, users do not receive the 'truth' they seek, but risk being drawn into a sort of 'war circus', where real suffering becomes the backdrop for digital entertainment.
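The mechanics of such binary contracts can be sketched in a few lines. This is a simplified model of how event markets of this kind generally work (a 'YES' contract priced in cents, paying out a fixed amount if the event occurs), ignoring fees and order books; the numbers are invented for illustration.

```python
def contract_pnl(price_cents, n_contracts, outcome_yes):
    """Profit or loss, in dollars, for a buyer of 'YES' contracts in a
    simplified binary prediction market: each contract costs price_cents
    and pays out 100 cents if the event occurs, 0 otherwise. No fees."""
    cost = price_cents * n_contracts
    payout = (100 if outcome_yes else 0) * n_contracts
    return (payout - cost) / 100.0

# A market prices 'a strike occurs this week' at 35 cents,
# i.e. an implied probability of roughly 0.35.
implied_prob = 35 / 100
pnl_if_yes = contract_pnl(35, 10, True)    # event occurs: 10 * (100 - 35) cents
pnl_if_no = contract_pnl(35, 10, False)    # event does not occur: lose 10 * 35 cents
```

The cynicism the article describes is visible in the arithmetic itself: the 'truth' the platform offers is simply a price, and every shift in that price is someone's profit on a real-world catastrophe.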

 

Welcome to the era of biodrones

The use of AI poses questions that humanity has yet to answer. Unmanned aerial vehicles (UAVs) have had a profound impact on warfare, comparable to the introduction of firearms, significantly altering the tactics, strategy and economics of modern conflicts. A drone priced at between $500 and $2,000 can destroy equipment valued at millions of dollars, including tanks, IFVs and radar systems. Reconnaissance drones can transmit target coordinates directly to artillery or strike-UAV operators, cutting the time from detection to engagement to under 10 minutes. Integration with other systems is deepening, with drones serving as vital surveillance assets for artillery, aviation, air defence and cyber operations.
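The economic asymmetry described above can be made concrete with a one-line calculation. The drone price comes from the article; the tank value is an assumed figure for illustration only.

```python
def cost_exchange_ratio(attacker_cost, target_value):
    """Dollars of target value destroyed per dollar spent by the attacker.
    Illustrative: drone cost from the article, target value assumed."""
    return target_value / attacker_cost

# A $2,000 drone against a tank assumed to be worth $4 million:
ratio = cost_exchange_ratio(2_000, 4_000_000)
```

Under these assumptions the exchange ratio is three orders of magnitude in the attacker's favour, which is precisely what has upended the economics of modern conflict.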

AI-based technologies are already poised to go a step further, for example by operating in 'dark zones' such as tunnels, bunkers and urban infrastructure, where communication is limited, navigation is restricted and visibility for surveillance is poor. To overcome the barriers at the intersection of biology and high technology, a new field is emerging: biological drones, ultra-small ones at that. For instance, Swarm Biotactics, a German start-up working with NATO and European organisations, is developing bio-drones based on cockroaches. A miniature 'backpack' attached to the insect's back conceals an electronic brain, a battery and sensors, while electrodes implanted into the cockroach's body take control of its motor functions. The cockroach drones are designed to operate within a self-organising network. In Russia, meanwhile, the Neiry group of companies is developing a biodrone that uses a pigeon as its means of flight, steered in the required direction; its range and endurance are expected to be significantly superior to those of a conventional quadcopter. Scientists are also examining other birds, including crows, seagulls and albatrosses.

Swarm Biotactics emphasises that the process is painless for the insects, and their well-being is key to effective operation, whilst the Russian start-up Neiry even compares controlling a pigeon to horse riding.

It is anticipated that the technology will soon advance to the point where mosquitoes, flies, beetles and dragonflies can be programmed to operate as drones. The potential consequences deserve consideration. People may come to perceive a threat in every animal, which will inevitably increase the psychological pressure on civilians: the presence of biodrones will amount to total video surveillance, only now 'scattered' across the biosphere, the feeling that you are being watched from literally every branch and every crevice. Firstly, this will damage the long-standing relationship between humans and nature, which is already in a poor state. Secondly, it will create situations where animals fight instead of people: the same cockroach could carry explosives or conduct reconnaissance for an attack. It is reasonable to assume that preventive extermination of 'suspicious' species in a conflict zone will also be employed. Notably, the general public rarely feels sorry for creatures such as cockroaches or pigeons. As one user on Russian social media commented, 'We urgently need to teach our pigeons to hunt their cockroaches...'

 

Data centres as military targets

Any technology that can incorporate AI is a potential military target, and so are the facilities on which AI depends, above all data centres. These are concrete structures housing servers, power systems and other equipment, and they are extremely vulnerable to kinetic strikes. Beyond direct attacks, damage to supporting systems (cooling, power distribution, fibre-optic switching equipment, backup generators and so on) is particularly dangerous. Disruption of the infrastructure that underpins digital intelligence could paralyse critical sectors, including finance, logistics, military operations and government administration.

The reclassification of such targets as strategic has already been demonstrated by the wars between the US, Israel and Iran, as well as between Russia and Ukraine. It has been reported that Iranian drones caused damage to several cloud services in the Persian Gulf region, including two data centres—Amazon Web Services in the UAE and a facility in Bahrain. The strike on the facility in the UAE was the first instance of military action disrupting the operations of a major American technology company's data centre.

Amazon, Microsoft and Google have invested significantly in building data centres across the Persian Gulf; analysts estimate there are now over 200 in the region. The Strait of Hormuz and the Bab el-Mandeb Strait are also digital arteries on which the pulse of the global economy depends. An estimated 17 to 20 per cent of global internet traffic passes through the Red Sea. The Strait of Hormuz is critical to the digital connectivity of the Gulf states: through it runs an extensive cable network linking Iran, Iraq, Kuwait, Bahrain, Qatar and the UAE to the rest of the world via key landing points in Oman. These cables carry government communications, bank transfers, data from cloud centres, military intelligence flows, air and maritime traffic control, and a significant portion of commercial internet traffic between Europe, the Middle East, South Asia and other regions. The sectors most affected by an outage would be investment funds, oil and gas traders, global banks and, naturally, the military.

Furthermore, it should be noted that the damage may not necessarily be the result of deliberate attacks, which, incidentally, are quite difficult to carry out. Rather, the cause of accidents could be collateral damage, or they may be purely technical or natural in nature—a failed anchor drop, other damage from maritime transport, or seismic activity. Specialised vessels are responsible for dealing with these issues, and they sometimes spend weeks at sea, which may become impossible in the context of active military operations.

The ongoing conflict between the US, Israel and Iran demonstrates that AI has evolved from a supporting tool to a central element of modern warfare. This is occurring against the backdrop of a lack of clear global rules and deterrence mechanisms. The necessity of such rules and mechanisms has been a subject of discussion for several years by international organisations, expert communities and humanitarian bodies.

It is essential to establish who will be held accountable for decisions made at machine speed, and according to what criteria. How can the interests of transnational corporations be distinguished from the concept of 'national security'? Advancements in military technology are clearly outpacing the development of ethics, law and politics. It is also deeply regrettable that, instead of serving humanitarian goals (the treatment and rescue of people, animals and nature as a whole, education and the arts), cutting-edge technologies are developing most actively in the military sphere. This may be a sweeping conclusion, but it is hard to dispute: for all their technological achievements, the humanistic dimension of these developments raises serious doubts.
