经济学人官方译文节选 | 军备控制:人类必须严控自主武器

微信公众号:田间小站

Arms control
军备控制
Taming terminators
驯服终结者
Humans must keep tight control of autonomous weapons
人类必须严控自主武器

FOR THOUSANDS of years, weapons went where humans thrust, threw or propelled them. In the past century, they have grown cleverer: more able to duck and weave to their targets; more able to select which of many ships, tanks or aircraft to strike; and more able to wait for the right target to turn up. Increasingly, such weapons can be let loose on the battlefield with little or no supervision by humans.
数千年来,武器按照人类刺、投或推送的方向发出攻击。在过去一百年里,武器已变得更聪明——它们愈发能够在攻击过程中闪躲穿行,在众多舰船、坦克或飞机中选择攻击目标,以及等待攻击目标的现身。在战场上,这样的武器可能会越来越多地在少有或完全无人监督的情况下自由发挥。

The world has not entered the age of the killer robot, at least not yet. Today’s autonomous weapons are mostly static systems to shoot down incoming threats in self-defence, or missiles fired into narrowly defined areas. Almost all still have humans “in the loop” (eg, remotely pulling the trigger for a drone strike) or “on the loop” (ie, able to oversee and countermand an action). But tomorrow’s weapons will be able to travel farther from their human operators, move from one place to another and attack a wider range of targets with humans “out of the loop” (see Briefing). Will they make war even more horrible? Will they threaten civilisation itself? It is time for states to think harder about how to control them.
世界还没有进入杀手机器人的时代,至少现在还没有。当今的自主武器大多是在自卫时击落来袭威胁物体的静态系统,或者攻击狭小指定区域的导弹。几乎所有的自主武器系统仍然有人员“在回路中”(比如远程控制无人机开火)或“在回路之上”(即能够监视和撤销某个行动)。但未来的武器将能使用人员“在回路外”的系统,行进到离人类操纵员更远之处,从一地移动到另一地,向更大范围的目标发起攻击。这样的武器会不会让战争变得更恐怖?会不会威胁人类文明自身?各国是时候认真思考该如何控制自主武器了。

The UN’s Convention on Certain Conventional Weapons (CCW) has been discussing autonomous weapons for five years, but there is little agreement. More than two dozen states (including Austria, the Vatican, Brazil and nuclear-armed Pakistan), backed by increasingly vocal activists, support a pre-emptive ban on “fully autonomous weapons”. They point to campaigns against anti-personnel landmines, cluster munitions, and biological and chemical weapons as evidence that this can succeed. Most big powers—among them America, Russia and Britain—retort that the laws of war are already good enough to control autonomous weapons. Some argue that such weapons can be more accurate and humane than today’s.
联合国《特定常规武器公约》(CCW)就自主武器展开讨论已有五年,却没能达成什么共识。在呼声日益强烈的活动人士的支持下,20多个国家(包括奥地利、梵蒂冈、巴西和拥有核武器的巴基斯坦)主张采取先发制人的行动,禁止“全自主武器”。它们以针对杀伤人员地雷、集束弹药和生化武器的行动为例证,指出禁止全自主武器是可行的。而美国、俄罗斯和英国等多数强国则反驳称,战争法已经非常完善,足以制约自主武器。一些强国称自主武器可能比现有武器更精准、更人道。

A third group of countries, led by the likes of France and Germany, is urging greater transparency and scrutiny. Autonomous systems make wars more unpredictable and harder to supervise; and they make it harder to assign responsibility for what happens during conflict. This third group is surely right to try to impose at least some controls.
以法国、德国等为首的第三类国家则敦促提高透明度和加大审查力度。自主系统让战争更难预测和监督,也让冲突中各类事件的责任界定变得更加困难。这些国家试图至少采取一些管制措施,这显然是正确的。

The laws of war are still the right place to start. They do not seek to ban war, but to limit its worst excesses. Among other things, they require that warriors discriminate properly between combatants and civilians, and ensure that collateral damage is proportionate to military gains. Military actions must therefore be judged in their context. But that judgment is hard for machines to form.
从战争法入手没有错。战争法并不谋求禁止战争,而是要限制最恶劣的战争暴行。比如,它要求士兵正确区分作战人员和平民,并确保附带损害与军事收益相称。因此,必须结合具体情形来评判军事行动。但是,机器很难做出这种判断。

In addition, new rules will be difficult to negotiate and monitor. For one thing, it is hard to control what does not yet exist and cannot be precisely defined. How long may a drone hover above the battlefield, empowered to strike, before it has slipped out of the hands of the humans who sent it there? The difference between machines under human control and those beyond it may be a few thousand lines of secret code.
此外,要议定和监督实施新的法规存在很大难度。首先,很难对那些尚不存在且不能被精确定义的自主武器实施管控。一架被赋予攻击权限的无人机要在战场上空盘旋多久,才算脱离了派出它的人的控制?受人类控制的机器与不受人类控制的机器之间的差别可能只是几千行秘密代码。

That said, two principles make sense. First, the more a weapon is permitted to roam about over large areas, or for long periods, the more important it is that humans remain “on the loop”—able to supervise its actions and step in if necessary, as circumstances change. That requires robust communication links. If these are lost or jammed, the weapon should hold fire, or return.
即便如此,仍有两条原则是明智的。第一条是武器被允许的活动范围越广、时间越长,就越有必要保持有人员“在回路之上”,从而能够监督武器的行动,并且根据环境的变化在必要时介入其中。这需要强大而稳定的通信链路。如果通信链路中断或受到干扰,武器应当停火或返航。

A second tenet is that autonomous systems, whether civilian ones like self-driving cars or those that drop bombs, should be “explainable”. Humans should be able to understand how a machine took a decision when things go wrong. On one point, at least, all states agree: that the buck must stop with humans. “Accountability cannot be transferred to machines,” noted a report of the CCW in October. Intelligent or not, weapons are tools used by humans, not moral agents in their own right. Those who introduce a weapon into the battlefield must remain on the hook for its actions.
第二个原则是自主系统的行为都应该是“可解释的”,无论是无人驾驶汽车等民用系统,还是投掷炸弹的军用系统。人类应该能够了解机器在出现问题时是如何做决策的。所有国家至少在“责任在于人类”这点上达成了共识。《特定常规武器公约》去年10月的一份报告指出,“不能将责任转嫁到机器上。”武器无论智能与否,都不过是人类的工具,它们本身并不是道德行为体。那些将武器推向战场的人仍然必须为武器造成的后果负责。

A good approach is a Franco-German proposal that countries should share more information on how they assess new weapons; allow others to observe demonstrations of new systems; and agree on a code of conduct for their development and use. This will not end the horrors of war, or even halt autonomous weapons. But it is a realistic and sensible way forward. As weapons get cleverer, humans must keep up.
法德的提议不失为一种良策:各国应该就如何评估新武器分享更多信息;允许他国观摩新系统的演示;并就此类武器开发和使用的行为准则达成一致。这并不会终结战争的恐怖,甚至也不会遏阻自主武器的进程。但它却是一条务实而明智的前进道路。随着武器越来越智能,人类也必须与时俱进。
