Watched the latest livestream with Mark Barnes, but this question wasn’t addressed.
After thoroughly testing Logos' AI search capabilities, I’ve come across a structural limitation that deserves open discussion:
The AI assistant does not consult all resources within a collection unless those resources are explicitly mentioned.
Here’s what I mean: if I ask a theological, exegetical, or topical question and expect the assistant to draw from all my commentaries on, say, the Gospel of Matthew, the results are inconsistent. Even when I have a clearly defined collection, the AI does not seem to access the whole group unless I name individual resources in my prompt.
This is problematic for several reasons:
- It breaks the basic expectation of semantic or "intelligent" search. One would assume that if a collection is referenced (or implicitly relevant to the question), the AI would automatically draw from all the resources it includes.
- It undermines trust in the completeness of the results. It’s not reasonable to require users to write, “Comment on this according to the Word Biblical Commentary, NICNT, Dale Bruner’s Matthew, etc.” just to get substantial output. If we have collections, they should matter.
- There’s no clear indication of what the AI is or isn’t consulting. We’re left guessing which books were accessed, and there’s no transparent logging or citation system confirming sources unless they’re mentioned explicitly.
If the assistant only works with specific titles named in the prompt, this is not true AI-powered search — it’s a glorified indexer. I believe this needs serious improvement, especially for those of us with large libraries who rely on collection-based workflows.
Have others noticed this? Any workarounds you’ve found? Or is this simply a limitation of the current design?