This article explains the part that is missing from deep learning models. It is short but interesting, and if you have ever written a program, you get it.
DLMs rely on pseudo-neural networks that train on specific data. They work well within the range of the data they have trained on, but when you get into edge cases and corner cases, the DLM will choke. The example he shows is a semi-trailer lying on its side: the autopilot does not recognize it, so it drives into it, because it was never trained to avoid things it does not recognize.
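To make the "range of the data" point concrete, here is a toy sketch of my own (not from the article): a nearest-neighbor regressor standing in for any trained model. It is fit only on inputs between -3 and 3, so queries inside that range come back accurate, while queries outside it are answered confidently and wrongly.

    # Toy illustration (not from the article): a model fit only on x in [-3, 3].
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = rng.uniform(-3.0, 3.0, size=200)   # the only "world" the model has seen
    y_train = np.sin(x_train)

    def predict(x_query):
        # Nearest-neighbor regression: copy the label of the closest training point.
        idx = np.abs(x_train[:, None] - x_query[None, :]).argmin(axis=0)
        return y_train[idx]

    x_in = np.linspace(-2.5, 2.5, 5)              # inside the training range
    x_out = np.array([6.0, 8.0, 10.0])            # outside it
    print(np.abs(predict(x_in) - np.sin(x_in)).max())     # small error: interpolation
    print(np.abs(predict(x_out) - np.sin(x_out)).max())   # large error: extrapolation

Note that the model never signals that the out-of-range queries are different; it just returns the nearest thing it knows, which is the software analogue of driving into the overturned trailer.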
In other words, a DLM is good at interpolation and at recognizing familiar things, but its ability to extrapolate and adapt is troublingly weak. This is the thing DLM creators need to work on. For example, I can speak of "accufessions", and we can all interpret it because we can see the root symbols involved in the construction. But an LLM will struggle with that word because it does not fit anything in its training data. It is not even clear whether it could suggest a correction, because the word is so far out of scope for its models.
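One concrete way to see how far outside the model's world a coinage like that sits is to look at how a tokenizer handles it. This is my own sketch, assuming the Hugging Face transformers package, with the gpt2 tokenizer standing in for whatever vocabulary a given LLM actually uses:

    # Sketch: how a coined word looks to a subword tokenizer.
    # Assumes the "transformers" package is installed; "gpt2" is a stand-in vocabulary.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    print(tok.tokenize("confessions"))    # a common English word, for comparison
    print(tok.tokenize("accufessions"))   # the coinage: compare how each gets split

The coined word comes back as a string of subword fragments rather than anything the model has seen whole, and those fragments carry none of the root-and-affix reasoning a human reader brings to it.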
Hence, for "AI" to succeed, we need to have methods for it to parse unfamiliar symbology and things that it does not recognize in order to have adequate comprehension for the critical tasks some of us want to put it to – like knowing when to stop the car due to unfamiliar circumstances and then figure the situation out so that it can find a way to go again.