Communications of the ACM - Artificial Intelligence
Can AI Replace Copilots on Passenger Jets?

Under the twin pressures of economics and a pilot shortage, the aviation industry is exploring the possibility of replacing one of the two pilots on passenger jets with an artificial intelligence (AI) copilot. Although research has been under way for decades, experts warn that ignoring the critical safety-related and psychological roles pilots play could lead to disaster. This article focuses on the risks of the Single Pilot Operations (SPO) model, including AI's inability to cope with unexpected events, pilot fatigue and loss of attention, and the absence of human-to-human emotional support and cognitive validation. It stresses that human pilots still play an irreplaceable role in handling abnormal situations and maintaining flight safety.

🧐 The aviation industry is exploring Single Pilot Operations (SPO) to address a pilot shortage and economic pressure. The model would initially take the form of "extended Minimum Crew Operations" (eMCO), in which one pilot flies alongside an AI copilot during the cruise phase.

⚠️ Experts warn that replacing a pilot with AI overlooks the pilot's critical roles in emotional regulation, stress management, and responding to unexpected events. AI cannot provide emotional support, help relieve stress, or be counted on to make the right call in an emergency the way a human can.

🤔 Single-pilot operation could increase pilot fatigue and attention lapses, since AI cannot keep a pilot alert through conversation and mutual reminders the way another human can. AI also lacks human social and cognitive validation, making it difficult to cross-check perceptions and reduce misunderstandings as two humans do.

🚨 Experts stress that human pilots can spot health problems in each other and handle many non-flying operational tasks, such as communicating with cabin crew and dispatchers. AI's limitations in these areas could leave a single pilot overwhelmed, especially in abnormal situations.

In a bid to boost the economics of the aviation industry while alleviating a pilot shortage, researchers in the U.S. and Europe have spent the last two decades investigating ways to replace one of the two pilots in the cockpits of passenger airliners with an automated copilot based on artificial intelligence, with the research labs of NASA and the European Union in the vanguard of that work.

However, computer scientists and human factors experts warn that removing one human pilot and force-fitting an AI into the role could see the aviation industry sleepwalk into disaster, because doing so fails to take into account the full range of subtle, often safety-related, psychological roles that pilots play when two work side-by-side on a flight deck.

That was the overarching message at a mid-March conference of the Royal Aeronautical Society (RAeS) in London, where pilots, their trade unions, aviation regulators, computer scientists, and human-machine interaction specialists met to discuss the wisdom of the industry’s idea to move, sometime in the next decade, to a flight regime going under the umbrella name of Single Pilot Operations, or SPO.

Currently scheduled for regulatory consideration by the European Union Aviation Safety Agency and the U.N.'s International Civil Aviation Organization sometime between 2027 and 2030, the initial aim of SPO would be to test the technological waters by reducing the number of pilots in the cockpit during just one phase of a flight.

These “reduced crew operations,” SPO proponents say, would begin with a scheme called extended Minimum Crew Operations (eMCO), in which two pilots would fly the take-off and climb, as happens now, but only one pilot would fly the extended cruise phase at any one time, alongside the artificially intelligent copilot, while the other pilot is resting. The two eMCO pilots would then join forces again to fly the descent and landing together.

eMCO is just a prelude to the real SPO deal. Later on, if eMCO’s single-pilot cruise automation technology has been shown to work safely, aircraft with flight decks with only one pilot seat could be designed to allow full end-to-end, gate-to-gate, single-pilot flights. RAeS delegates heard that they may also be designed to use ground support—perhaps allowing some aircraft functions to be radio-controlled—in a nod to the advances being made in the large military drone sector, since some outsized remotely-piloted drones, like Northrop Grumman’s Global Hawk, have wingspans similar to single-aisle passenger jets.

Pilot unions globally bitterly oppose eMCO and SPO, seeing the moves not only as dangerous in many respects, but also as the start of a slippery slope to having no human pilot at all. Particular concerns include the case of an incapacitated single pilot: how would the AI fly hundreds of people alone with no human oversight? Could the automation cope if the aircraft suffered serious physical damage due to weather or technical issues?

The latter situation has occurred even with today's limited flight deck automation and a full crew. In 2010, Qantas flight QF32, an Airbus A380 superjumbo carrying 469 people from Singapore to Sydney, suffered an engine explosion. The explosive debris damaged fuel tanks in the wings and severed hundreds of avionics network cables in the fuselage, sending the automation haywire as sensor readings became garbage. The crew was forced to ignore a blizzard of advisories from the cockpit automation, as the messages either made no sense or were plainly dangerous, such as suggesting that fuel be transferred from undamaged tanks into damaged ones. Thanks to a lot of smart action by QF32's human crew, however, the plane and all aboard landed safely.

With such a life-threatening experience in mind, what has convinced some in the aviation industry that an AI-based automation system would be able to assume all the subtle roles of a human copilot?

Not much, it turns out. At least, that was the view of the RAeS conference’s keynote speaker, Sami Makaleinen, a former Nokia and Telstra computer scientist and now a researcher at the Institute for the Future in Palo Alto, CA, a fellow at the Royal Melbourne Institute of Technology in Australia, and an AI ethicist with the University of Melbourne.

Makaleinen told delegates, “What makes me believe that we’re not there yet with single-pilot operations is the fact that we’re seemingly planning on removing the pilot without even taking into account everything they do, because the pilot’s task isn’t just to fly the plane.”

He explained, “Pilots are human, and their tasks include emotional regulation and stress management. Having another pilot on the flight deck helps maintain emotional stability and composure when facing turbulence, severe weather, unexpected air traffic control requests, and unusual operational stress. Pilots often rely on each other for reassurance, emotional cues, and mutual calming, whereas automation doesn’t really soothe nerves or crack jokes.”

Another risky aspect of flight deck life is that pilots can get bored and lose task attentiveness while monitoring automated flight deck systems, and working alongside an AI copilot could worsen the situation. “But another human in the cockpit reduces boredom through conversation, helping with situational awareness. As automation-induced complacency is a recognized hazard, pilots keep each other engaged and responsive, subtly preventing the drifting of attention,” said Makaleinen.

In addition, said Makaleinen, “Pilots are there for social and cognitive validation, too, cross-checking reality by continually cross-referencing their perceptions: asking ‘Did you hear that weird sound?’ or ‘What did air traffic control just say?’ They continuously verify their understanding of situations, reducing ambiguity or confusion. Automation rarely provides effective reality checks in ambiguous circumstances.”

On the health front, two human pilots can notice subtle signs of encroaching sickness in each other’s behavior, appearance, or responsiveness, which AI cannot so easily do, potentially avoiding sudden incapacitation. Without a second human pilot, early signs of mild hypoxia (oxygen deprivation), heart attack, or stroke might go unnoticed until it is too late.

Makaleinen noted that there are also “heaps of non-flying operational tasks” human pilots must undertake that would be beyond the ability of AI to handle, like coordinating the cabin crew’s response to disruptive passengers, communicating with airline dispatchers, and managing gate issues before departure and arrival. “A single pilot might find these ancillary responsibilities distracting, overwhelming, or downright impossible, especially during abnormal situations, which can happen at any point in the flight.”

“Removing a pair of eyes comes at a cost,” warned James Blundell, an aviation human factors researcher at Cranfield University in the U.K. He and his colleagues are studying how unexpected events like inflight lightning strikes or exploding engines can cause potentially disruptive “startle and surprise” responses from pilots.

Where a human pilot could recognize this reaction in a colleague and talk them through a startled state, it could be difficult for an AI copilot to recognize it is working with a shocked human whose control decisions should be overridden. Researchers are investigating whether a raft of in-cockpit sensors could help by assessing parameters like the pilot's facial temperature, gaze behavior, and speech patterns, perhaps to inform the AI of how engaged the pilot is with the flying task. But that work is still in its very early days, said Blundell.

It all raises the question: how has the notion of SPO gotten so far down the road that the critical human-to-human flight deck interactions Makaleinen describes are on the verge of being lost, without the results of the startle/surprise management investigations even being considered?

To some, it is no surprise.

“It is common for people who design automated systems to have a blind spot for human interaction,” said Wendy Ju, a professor of human-computer interaction at the Jacobs Technion-Cornell Institute in New York. “The most obvious example is with autonomous cars; researchers put off studying how people will interact with AVs [autonomous vehicles] to study the ‘hard problems’ of how to detect and navigate the road. But now they are discovering that figuring out how to drive amongst other human drivers, pedestrians, and cyclists is the real moon-shot grand challenge.”

Another issue preventing SPO from ever being deemed safe enough to use is a relatively simple truth: on civilian flights, things go wrong all the time, and it takes a human crew to troubleshoot them, said Tanja Harter, president of the European Cockpit Association, a pilots' trade union.

Harter said ‘Murphy’s law’ applies on the flight deck: “What can go wrong, will go wrong.”

“Murphy is a frequent flyer,” she said, “and is on board all the time.” 

Paul Marks is a technology, aviation, and spaceflight journalist, writer, and editor based in London, U.K.
