This is just the analysis I was looking for! I'm a software dev / business analyst with a high-level understanding of what LLMs are doing (but without your math knowledge), and I've been trying to understand the hallucination problem as it pertains to using AI in business contexts where hallucinating to any extent is unacceptable.
I've been thinking that the LLM needs to be completely constrained by some kind of knowledge model or graph so that hallucinating becomes impossible, but I had a sense that this would be computationally expensive. I also suspect it's some sort of optimisation or route-finding problem where the weights of connections between nodes are not constant but depend on the path or intermediate solution taken so far.
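To make that second idea a bit more concrete, here's a rough sketch of what I mean by path-dependent weights: a toy best-first search over a tiny fact graph where the cost of following an edge depends on the chain of inferences already made, not on a fixed edge weight. The graph, node names, and cost function are all invented purely for illustration, not taken from any real system.

```python
import heapq

# Toy "knowledge graph": each fact points to claims it can support.
# Purely illustrative -- the nodes and edges are made up.
GRAPH = {
    "revenue_q3": ["growth_claim"],
    "growth_claim": ["forecast"],
    "headcount": ["forecast"],
    "forecast": [],
}

def edge_cost(path, nxt):
    """Cost of asserting `nxt` given the chain of facts already used.

    The key point: the cost depends on the whole path so far, not on a
    fixed weight for the edge. Here, longer inference chains get
    progressively more expensive, standing in for 'more stacked
    assumptions means more that needs fact-checking'.
    """
    return 1.0 + 0.5 * len(path)

def cheapest_derivation(start, goal):
    """Best-first search for the lowest-cost chain from start to goal."""
    frontier = [(0.0, [start])]
    best = {}
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt in GRAPH.get(node, []):
            heapq.heappush(frontier, (cost + edge_cost(path, nxt), path + [nxt]))
    return None

print(cheapest_derivation("revenue_q3", "forecast"))
# -> (3.5, ['revenue_q3', 'growth_claim', 'forecast'])
```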
Are there any current projects trying to implement something like what you've suggested? My intuition is that if humans (at least self-aware humans) can efficiently recognise when they've made an assertion that rests on assumptions needing fact-checking, then it should be possible to build a model that does the same thing.
I'm also imagining that, while this computation might be expensive, its results should be cacheable in some form, since the underlying facts and the accurate intermediate inferences drawn from them don't change frequently.
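On the caching point, I'm picturing something as simple as memoising verified derivations keyed by the facts they depend on, so a derivation only gets re-checked when one of its supporting facts changes. Again, all the names and values below are hypothetical; this is just to show the shape of what I have in mind.

```python
# A toy cache of verified inferences, keyed by the facts each one depends on.
# When a fact is updated, only the derivations that used it go stale.
# All names and values here are made up for illustration.

facts = {"revenue_q3": 1_200_000, "headcount": 85}
fact_versions = {"revenue_q3": 1, "headcount": 1}

# cache key: (claim, tuple of (fact, version) pairs it was verified against)
verified_cache = {}

def expensive_verify(claim, dependencies):
    """Stand-in for the expensive constrained-inference / fact-checking step."""
    print(f"verifying {claim!r} against {dependencies}...")
    return True  # pretend it checked out

def check_claim(claim, dependencies):
    key = (claim, tuple(sorted((f, fact_versions[f]) for f in dependencies)))
    if key not in verified_cache:
        verified_cache[key] = expensive_verify(claim, dependencies)
    return verified_cache[key]

def update_fact(name, value):
    """Changing a fact bumps its version, so stale cache entries simply miss."""
    facts[name] = value
    fact_versions[name] += 1

check_claim("growth is 12% QoQ", ["revenue_q3"])   # verifies
check_claim("growth is 12% QoQ", ["revenue_q3"])   # cache hit, no re-verification
update_fact("revenue_q3", 1_250_000)
check_claim("growth is 12% QoQ", ["revenue_q3"])   # fact changed -> re-verifies
```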