As an engineer, I think this is amazing. As a science fiction fan, it’s terrifying. Skynet being constructed right before our eyes. Or best case, HAL 9000. Just don’t ask it to lie to the crew.
On a more serious note, I *do* think OpenAI has changed computing and user interfaces forevermore. This combo of text, voice, and video is the UI holy grail: interacting with machines on a whole new level. That said, OpenAI’s creations have some weird blind spots that make them less trustworthy than one would assume. An intelligent person (less common than one would hope) infers that if A=B, then B=A. ChatGPT does *not* know that unless it has encountered the relationship in both directions; this is known as the “symmetry fail” for neural networks. That’s a shocker to those trying to anthropomorphize ChatGPT, who assume it “thinks” just like us. It doesn’t. It uses a completely different process to arrive at an output or conclusion. ChatGPT doesn’t understand what it’s been told and use that understanding to produce an answer; it simply generates a result based on its internal neural network. Inference requires understanding and deduction. That’s what we do, via processes that are still not well understood.
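A toy sketch of the idea, for the curious. This is *not* how an LLM works internally; it just illustrates the asymmetry: a store that has only learned "A is B" in one direction cannot answer the reverse question unless the reverse is also stored.

```python
# One-way association store, loosely analogous to a model that has
# only ever seen the fact stated as "A is B".
facts = {"A": "B"}

def lookup(key):
    """Return what the store associates with `key`, if anything."""
    return facts.get(key)

print(lookup("A"))  # forward direction works: prints "B"
print(lookup("B"))  # reverse direction fails: prints "None" -- the symmetry fail

# Logical symmetry has to be built in explicitly, by storing
# (or deriving) the relationship in both directions:
symmetric_facts = {}

def add_equality(a, b):
    symmetric_facts[a] = b
    symmetric_facts[b] = a  # store the reverse as well

add_equality("A", "B")
print(symmetric_facts["B"])  # now the reverse lookup works: prints "A"
```

The point of the sketch: humans get the second lookup "for free" by deduction; a pattern-matcher only gets it if the reverse form was part of what it absorbed.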
The speed with which OpenAI’s software has reached this point is amazing. I hope that they invest time and money in making their software less brittle, more reliable. The symmetry fail isn’t the only blind spot that ChatGPT and its brethren have.
Thank you for explaining “symmetry fail” in a way a non-techy person can understand. This was an a-ha for me.