Hey @phodal -- I came across AutoDev while researching how teams deal with AI code hallucinations (phantom imports, non-existent API calls, etc.). Your work on architecture-aware context in AutoDev is the closest thing I've seen to actually solving this at the generation stage rather than after the fact.
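For concreteness, here's the kind of output I mean (the names below are made up purely to illustrate the failure mode, not taken from any real suggestion):

```python
# Illustrative only -- this snippet intentionally does NOT run.
import speedy_orders                       # phantom import: no such module exists
import pandas as pd

df = pd.read_csv("orders.csv")
merged = df.coalesce_rows(on="order_id")   # hallucinated method; pandas has no such API
```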
Quick question: in your experience, do most hallucination problems come from insufficient context (the model doesn't know the codebase), or is it more fundamental than that? And have you found static analysis catches enough of it, or do teams need something runtime-level?
Would appreciate your take -- no need for a long response, even a one-liner helps.