Understanding emerging risk in aviation - ChatGPT
safetypee (Thread Starter), 1st Nov 2023, 09:04, #1

"ChatGPT can empower humans but can also introduce potential risks. As such, there is a need to understand this technology’s role in the future and identify potential risks."

'The risks of using ChatGPT to obtain common safety-related information and advice', https://www.sciencedirect.com/scienc...25753523001868 (then view the PDF).

"ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now."

Several case studies relate to aviation, e.g. distraction, psychological safety, high workload, and fatigue.

N.B. Thread on AI: https://www.pprune.org/jet-blast/652...hlight=ChatGPT
In it, artee identifies similar issues: "Looking at the most visible manifestation of AI (using Large Language Models - LLM's) at present, ChatGPT, the makers (OpenAI) do warn that it can be very credibly wrong. Someone asked it to write a CV of them, and it put in some things that were completely made up and wrong. I think that at present you have to know enough about what you're prompting to be able to sanity check the answer. Crafting the prompts will become a skill in its own right."
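One narrow, automatable slice of that sanity checking might look like the sketch below (a hypothetical illustration in Python; the trusted list and the model's citation are both invented, and most checking still needs a knowledgeable human):

[CODE]
# Hypothetical sketch: reject any reference a model cites that is not
# in a list of sources we already trust. This catches fabricated
# citations, not subtle factual errors.
TRUSTED_REFERENCES = {
    "The risks of using ChatGPT to obtain common safety-related information and advice",
}

def citation_is_known(cited_title: str) -> bool:
    """Return True only if the cited title matches a trusted source."""
    return cited_title.strip() in TRUSTED_REFERENCES

model_citation = "Aviation safety outcomes of chatbot advice"  # made up
print(citation_is_known(model_citation))  # False: flag for human review
[/CODE]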

As an alternative, this new thread proposes that ChatGPT is a subset of, or significantly different from, the general perception of "AI", and that individuals are more likely to use and be influenced by ChatGPT directly, and so be exposed to its risks.
Is this a significant risk? What are the likelihood and severity of outcomes from ChatGPT use?


MechEngr, 1st Nov 2023, 09:55, #2
The main risk ChatGPT and other "AI" programs pose stems from two factors.

The first is that they are "trained" to create material that sounds like existing material, but they have no sense of what sources underpin that material; in human terms, they lie; in machine terms, they hallucinate. Much of the time the results are believable because they are built by matching existing, believable material. When asked for responses that don't already exist, they follow rules inferred from existing material and fill in the blanks, as the sketch below illustrates.
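A toy sketch of that "fill in the blanks" behaviour (purely illustrative Python; real LLMs are vastly more sophisticated, and the training text here is invented):

[CODE]
import random
from collections import defaultdict

# Toy generator: it emits fluent-looking text by sampling word-to-word
# patterns from its training material. It has no concept of truth or
# of sources, only of what tends to follow what.
training_text = (
    "the crew reported high workload during the approach "
    "the crew reported fatigue during the night sector "
    "the crew reported distraction during the descent"
)

follows = defaultdict(list)  # word -> words seen to follow it
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=10):
    """Sample a plausible continuation; plausible is not the same as true."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Reads like a report fragment, but any "facts" are just statistical
# recombination, e.g. "the crew reported fatigue during the approach".
print(generate("the"))
[/CODE]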

The second is speed. A ChatGPT system can write entire books in minutes, and the image-generating "AIs" can produce thousands of high-detail images per hour. They can produce such an overwhelming volume of material that passes software checks as human effort that filtering it out with software cannot work: any detector that can tell the difference will simply be used as a training signal to improve the AI, until the detector fails (see the toy arms-race sketch below).
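To make that arms race concrete, here is a deliberately simple sketch (all of it invented for illustration; real detectors and generators work on statistical features, not one-word rules):

[CODE]
import random

# Toy arms race: the detector keys on a single stylistic tell, and the
# generator simply drops any phrasing that gets flagged. The detector's
# hit rate collapses after one round of adaptation.
phrasings = [
    "delve into the safety data",      # carries the tell
    "review the safety data",
    "delve into the fatigue reports",  # carries the tell
    "examine the fatigue reports",
]

def detector(text):
    """The detector's only rule: flag one telltale word."""
    return "delve" in text

for round_no in range(1, 4):
    samples = [random.choice(phrasings) for _ in range(1000)]
    hit_rate = sum(map(detector, samples)) / len(samples)
    print(f"round {round_no}: detector catches {hit_rate:.0%}")
    # Generator adapts: discard whatever the detector flags.
    phrasings = [p for p in phrasings if not detector(p)]
[/CODE]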

Together, it's a fire hose of indistinguishably plausible information that humans are ill-equipped to deal with.
