safetypee
1st Nov 2023, 09:04
"ChatGPT can empower humans but can also introduce potential risks. As such, there is a need to understand this technology’s role in the future and identify potential risks."
'The risks of using ChatGPT to obtain common safety-related information and advice' https://www.sciencedirect.com/science/article/pii/S0925753523001868 then view pdf
“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now."
Several case studies relate to aviation; e.g. distraction, psychological safety, high workload, and fatigue.
N.B. Thread on AI; https://www.pprune.org/jet-blast/652884-ai-its-potential-impacts.html?highlight=ChatGPT
In which artee identifies similar issues, "Looking at the most visible manifestation of AI (using Large Language Models - LLM's) at present, ChatGPT, the makers (OpenAI) do warn that it can be very credibly wrong. Someone asked it to write a CV of them, and it put in some things that were completely made up and wrong. I think that at present you have to know enough about what you're prompting to be able to sanity check the answer. Crafting the prompts will become a skill in its own right."
As an alternative, this new thread proposes that ChatGPT is a subset of, or significantly different from, the general perception of "AI", and that individuals are more likely to use and be influenced by ChatGPT - and thus be exposed to risk.
Is this a significant risk? What is the likelihood or severity of outcome from ChatGPT use?