Andrew Groom
1 min read · May 2, 2024


This is just the analysis I was looking for! I'm a software dev / business analyst with a high-level understanding of what LLMs are doing (but without your math knowledge), and I've been trying to understand the hallucination problem as it pertains to using AI in business contexts where hallucinating to any extent is unacceptable.

I've been thinking that the LLM needs to be somehow completely constrained by a knowledge model / graph of some sort, such that hallucinating is impossible, but I had a sense that this would be computationally expensive. I also think this is some sort of optimisation or route-finding problem where the weights of connections between nodes are not constant but depend on the path or intermediate solution.
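To make that concrete, here's a rough sketch of the kind of thing I mean (the graph, the costs and all the names are entirely made up by me, not taken from the article): a best-first search over a tiny "knowledge graph" where the cost of following an edge depends on how many still-unverified claims are already sitting on the path, not just on the edge itself.

```python
# Minimal sketch (my own illustration) of path-dependent edge weights:
# uniform-cost search over a toy graph where each step gets more expensive
# the more unverified claims the path has already accepted.

import heapq

# Hypothetical toy graph: node -> list of (neighbour, base_cost)
GRAPH = {
    "premise": [("claim_a", 1.0), ("claim_b", 2.0)],
    "claim_a": [("conclusion", 1.0)],
    "claim_b": [("conclusion", 0.5)],
    "conclusion": [],
}

def path_dependent_cost(path, base_cost):
    """Edge cost grows with the number of unverified claims already on the
    path, standing in for 'assumptions that still need fact-checking'."""
    unverified = sum(1 for node in path if node.startswith("claim"))
    return base_cost * (1.0 + 0.5 * unverified)

def cheapest_path(start, goal):
    # Priority queue of (accumulated_cost, path_so_far); because every path
    # is its own search state, popping the goal first gives the cheapest path.
    frontier = [(0.0, [start])]
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path
        for neighbour, base in GRAPH[node]:
            step = path_dependent_cost(path, base)
            heapq.heappush(frontier, (cost + step, path + [neighbour]))
    return None

print(cheapest_path("premise", "conclusion"))
```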

Are there any current projects trying to implement something like what you've suggested? I feel intuitively that, if humans (at least self-aware humans) can efficiently recognise when they've made an assertion based on assumptions that need to be fact-checked, then it should be possible to build a model that can do the same thing.

I'm also imagining that, while this computation might be expensive, it should be possible to cache the results in some form, since the underlying facts, and accurate intermediate inferences drawn from them, don't change frequently.
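Something like this is what I have in mind by caching (again, the fact store and the verify function are placeholders I invented): once a claim has been checked against the underlying facts, the result is memoised and only needs recomputing if those facts change.

```python
# Minimal sketch (my own illustration) of caching verified inferences.

from functools import lru_cache

FACTS = {"paris_is_capital_of_france": True}  # hypothetical fact store

@lru_cache(maxsize=None)
def verify(claim: str) -> bool:
    """Stand-in for an expensive check against a knowledge model.
    lru_cache means each distinct claim is only verified once."""
    print(f"verifying {claim!r} ...")  # shows when the expensive step runs
    return FACTS.get(claim, False)

print(verify("paris_is_capital_of_france"))  # runs the check
print(verify("paris_is_capital_of_france"))  # answered from the cache
verify.cache_clear()  # would be needed if the underlying facts changed
```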

