Killer robots more dangerous than the atomic bomb are being used in the world

According to a recent UN report on Libya's civil war, last year an autonomous weapons system, popularly known as a killer robot, may have killed human beings for the first time. History may record it as the starting point of a new arms race, one that could ultimately mean the end of humanity.

Lethal autonomous weapons systems are robots that operate independently, selecting targets and deciding to attack them with no human involvement. Militaries around the world are investing heavily in autonomous weapons research and development. Between 2016 and 2020, the United States alone spent $18 billion on autonomous weapons.

Meanwhile, human rights and humanitarian organizations are racing to ban the development of such weapons and to establish a code of conduct governing them. Foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize existing nuclear strategies, first because they will radically change the very notion of strategic dominance, increasing the risk of preemptive attacks, and second because they can be combined with chemical, biological, radiological and nuclear weapons themselves.

As a human rights researcher who has been closely following the use of artificial intelligence in weapons, I have concluded that autonomous weapons technology will make the unstable balances and fragmented safeguards of the nuclear-armed world even more unstable and more fragmented. For example, it would further destabilize and fragment the already minimally constrained authority of the US President to launch nuclear weapons.

Fatal errors and black boxes

I see four basic dangers posed by autonomous weapons. The first is the problem of misidentification. When selecting a target, will an autonomous weapon be able to distinguish between enemy soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a battlefield and insurgents making a tactical retreat?

The problem is not that machines will make such mistakes and humans will not. The difference between human error and algorithmic error is like the difference between mailing a letter to someone and sharing it on Twitter. The scale, scope and speed of killer robot systems, driven by a single algorithm and deployed across an entire continent, could make misidentifications by individual humans, such as the recent US drone strike in Afghanistan, look trivial by comparison.

Autonomous weapons expert Paul Scharre explains the difference with the metaphor of the runaway gun. A runaway gun is a defective machine gun that keeps firing after the trigger is released. It keeps raining bullets until its ammunition runs out, because the gun does not know it is making a mistake. Runaway guns are extremely dangerous, but fortunately they are operated by humans, who can cut off the ammunition feed or point the weapon in a safe direction. Autonomous weapons have no such safeguard in their design.

Importantly, weapons powered by artificial intelligence need not even be defective to be as uncontrollable as a runaway gun. As numerous studies of algorithmic errors across industries have shown, even the best algorithms, operating exactly as designed, can produce internally consistent results that nonetheless spread terrible mistakes rapidly around the world.

For example, a neural network designed for use in Pittsburgh hospitals identified asthma as a risk-reducing factor in pneumonia patients; image-recognition software used by Google identified African Americans as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.

The problem is not just that when artificial intelligence makes a mistake, it makes that mistake in bulk. It is that when it errs, its makers often do not know why it did so and therefore cannot correct it. This black box problem of artificial intelligence makes it almost impossible to imagine a morally responsible autonomous weapons system.

The proliferation problem

The next two dangers are the problems of low-end and high-end proliferation. Let's start with the low end. The militaries currently developing autonomous weapons are proceeding on the assumption that these weapons and their use will remain under their control. But if the history of weapons technology has taught the world anything, it is this: weapons spread, and their proliferation cannot be stopped.

Market pressures could result in the manufacture and widespread sale of what might be thought of as the autonomous weapon equivalent of the Kalashnikov: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. "Kalashnikov" autonomous weapons could fall into the hands of people outside government control, including international and domestic terrorists.

High-end proliferation is just as dangerous, however. To keep ahead of one another, nations could compete to develop increasingly devastating autonomous weapons, including ones capable of delivering chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars, because they will weaken two of the main forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one's own soldiers. These weapons are likely to be fitted with expensive ethical governors intended to minimize harm to civilians, using what UN Special Rapporteur Agnes Callamard has called the "myth of a surgical strike" to quell moral protest.

Autonomous weapons, meanwhile, will reduce both the need for and the risk to one's own soldiers, dramatically altering the cost-benefit calculation that nations make when launching and sustaining wars.

Asymmetric wars, that is, wars fought on the soil of nations that lack competing technology, are likely to become more common. Recall the global instability created by Soviet and US military interventions during the Cold War, from the first proxy war onward, and multiply that by every country currently pursuing high-end autonomous weapons development.

Undermining the laws of war

Finally, autonomous weapons will undermine humanity's last safeguard against war crimes and atrocities: the international laws of war. These laws, which have evolved since the Geneva Convention of 1864, are the thin line separating war from massacre. They ensure that people can be held accountable for their actions even during war, because the right to kill enemy soldiers in combat does not confer the right to murder civilians. A notable example of such accountability is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted for crimes against humanity by the UN's International Criminal Tribunal for the former Yugoslavia.

But how can autonomous weapons be held to account? If a robot commits war crimes, who bears responsibility? Who will be put in the dock? The weapon? The soldier? The soldier's commander? The corporation that made the weapon? NGOs and international law experts worry that autonomous weapons will create a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both a criminal act and criminal intent. Because such weapons are inherently unpredictable by design, this would be difficult as a matter of law and arguably unjust as a matter of morality. I believe the distance separating the soldier from the independent decisions made by autonomous weapons is simply too great.

The legal and moral difficulty is not eased by shifting the blame up the chain of command or back to the battlefield soldier. Without laws that mandate meaningful human control of autonomous weapons, there will be war crimes in the world but no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.

A new arms race

Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at any time, anywhere in the world, with no one held to account. It is a world in which the kind of unavoidable algorithmic errors that plague even tech giants like Amazon and Google could lead to the destruction of entire cities.

I do not believe the world should repeat a devastating competition like the nuclear arms race. It should avoid actions that would multiply human suffering.
