Understanding emerging risk in aviation - ChatGPT


safetypee
1st Nov 2023, 09:04
"ChatGPT can empower humans but can also introduce potential risks. As such, there is a need to understand this technology’s role in the future and identify potential risks."

'The risks of using ChatGPT to obtain common safety-related information and advice' https://www.sciencedirect.com/science/article/pii/S0925753523001868 then view pdf

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now."

Several case studies relate to aviation, e.g. distraction, psychological safety, high workload, and fatigue.

N.B. Thread on AI: https://www.pprune.org/jet-blast/652884-ai-its-potential-impacts.html?highlight=ChatGPT
In which artee identifies similar issues, "Looking at the most visible manifestation of AI (using Large Language Models - LLM's) at present, ChatGPT, the makers (OpenAI) do warn that it can be very credibly wrong. Someone asked it to write a CV of them, and it put in some things that were completely made up and wrong. I think that at present you have to know enough about what you're prompting to be able to sanity check the answer. Crafting the prompts will become a skill in its own right."
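As a minimal sketch of artee's sanity-check point, assuming nothing about any real API: llm_answer below is a hypothetical stand-in for a chat-model call, deliberately wired to return a confident mistake, and the check against an independent source is what catches it.

def llm_answer(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-model call; it returns a
    # confident-sounding mistake on purpose.
    return "The first powered flight was made in 1905."

# Independent reference the user already trusts (first powered flight: 1903).
TRUSTED_SOURCE = {"first powered flight year": "1903"}

answer = llm_answer("In what year was the first powered flight?")
ok = TRUSTED_SOURCE["first powered flight year"] in answer
print(answer)
print("verified against trusted source" if ok else "REJECTED: failed sanity check")

The point of the toy: the fluency of the answer tells you nothing; only the independent check does.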

As an alternative, this new thread proposes that ChatGPT is a subset of, or significantly different from, the general perception of "AI", and that individuals are more likely to use and be influenced by ChatGPT, and so be exposed to risk.
Is this a significant risk? What is the likelihood or severity of outcomes from ChatGPT use?

MechEngr
1st Nov 2023, 09:55
The main risk ChatGPT and other "AI" programs pose stems from two factors.

The first is that they are "trained" to create material that sounds like existing material, but they have no sense of what the source or underpinning of that material is; in human terms, they lie; in AI terms, they hallucinate. A lot of the time the results are believable because they are based on matching existing, believable material. When asked for responses that don't already exist, they follow a set of rules inferred from existing material and fill in the blanks.
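As a toy illustration of that first point (a minimal sketch in Python, nothing resembling ChatGPT's actual scale or architecture): a bigram model generates fluent-looking text purely from word-to-word patterns in its training data, with no notion of truth or sources.

import random
from collections import defaultdict

# Invented training snippet; any text works.
training_text = (
    "the pilot reported high workload the pilot reported fatigue "
    "the crew reported distraction the crew reported high workload"
)

# Record which word follows which in the training data.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    # Emit plausible-looking text by sampling observed continuations.
    # The model never asks whether the result is true, only whether it
    # matches the patterns it has seen.
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # e.g. "the crew reported high workload the pilot ..."

Scaled up by many orders of magnitude, the same mechanism is why the output reads as credible even when it is invented.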

The second is that they are faster. A ChatGPT system can write entire books in minutes, and the image-generating "AIs" can produce thousands of high-detail images per hour. They can turn out such an overwhelming amount of material that passes as human effort that using software to filter it out cannot work: any detection software that can reliably tell the difference will simply be used as feedback to train the AI better, until the detection software fails.
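A rough numerical sketch of that arms race (all numbers invented for illustration; nothing here is a real detector or model): "human" output is modelled as values near 1.0, a threshold detector flags anything that looks too unlike it, and each round the detector's own verdicts steer the generator toward whatever passes.

import random

HUMAN_MEAN = 1.0   # what genuine human output "looks like" in this toy
cutoff = 0.7       # detector: flag anything that looks too unlike human
gen_mean = 0.0     # generator starts out producing obviously detectable output

for round_no in range(20):
    samples = [random.gauss(gen_mean, 0.1) for _ in range(1000)]
    flagged = [s for s in samples if s < cutoff]
    flag_rate = len(flagged) / len(samples)
    print(f"round {round_no}: {flag_rate:.0%} of output flagged as non-human")
    if flag_rate < 0.01:
        break  # the detector no longer catches anything useful
    # Feedback step: the detector's own decisions pull the generator
    # toward output that passes, which is why static filters fail.
    gen_mean += 0.5 * (HUMAN_MEAN - gen_mean)

Within a handful of rounds the flag rate collapses, which is the point: the better the filter, the faster it trains its own obsolescence.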

Together, it's a fire hose of indistinguishably plausible information that humans are ill-equipped to deal with.