I feel that that's giving ChatGPT a lot of credit it doesn't really deserve. ChatGPT doesn't need to see an error to produce an error. LLMs use a vast body of human-generated content to predict answers, phrased in the language humans would want to read them in. But the problem is that we humans are quite fallible. So a computer trained to respond with the average answer a human would give will be fallible, too.
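To make the point concrete, here's a toy sketch (the question and answer data are made up for illustration): a "model" that just echoes the answer most humans gave. If the majority of the training data is wrong, the model confidently repeats the mistake without ever having seen an error message.

```python
from collections import Counter

# Hypothetical training data: answers humans gave to a question.
# The common misconception outnumbers the correct answer.
human_answers = {
    "capital of Australia": ["Sydney", "Sydney", "Canberra"],
}

def predict(question):
    # Return whichever answer most humans gave, regardless of truth.
    answers = human_answers[question]
    return Counter(answers).most_common(1)[0][0]

print(predict("capital of Australia"))  # majority answer: "Sydney" (wrong)
```

Real LLMs are vastly more sophisticated than a majority vote, but the underlying issue is the same: the output reflects the distribution of the training data, errors included.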
Statistics: Posted by BirdFood — Thu May 30, 2024 2:22 am — Replies 18 — Views 1086