PPRuNe Forums - Understanding emerging risk in aviation - ChatGPT
1st Nov 2023, 09:55
  #2
MechEngr
 
Join Date: Oct 2019
Location: USA
Posts: 876
Received 218 Likes on 121 Posts
The main risk ChatGPT and other "AI" programs pose comes from two factors.

The first is that they are "trained" to mimic existing material, but they have no sense of the source or truth underpinning that material; in human terms, they lie; in brain terms, they hallucinate. Much of the time the results are believable because they are built by matching existing, believable material. When they are asked for responses that don't correspond to anything that exists, they follow whatever rules they have inferred from existing material and fill in the blanks.

The second is speed. A ChatGPT system can write entire books in minutes, and the image-generating "AIs" can produce thousands of high-detail images per hour. They can churn out such an overwhelming amount of material that passes as human effort that filtering it out with software cannot work: any detector that can tell the difference will simply be used to train the AI further, until the detector fails.

Together, it's a fire hose of plausible material, indistinguishable from human work, that humans are ill-equipped to deal with.