The new Study Assistant and the improved search are two powerful tools for exploring what we have in our library. Compared with some features in ChatGPT and Gemini, however, they still lack functionality that could really help us.
Gemini offers a “Deep Research” feature. In this mode, the answers are deeper and more articulate, and they help us develop a fuller understanding of a topic. At least for me, that is not the case in Logos today.
Let’s say we want to study the intercession of the exalted Lord Jesus for the saints in heaven. When I enter the prompt “Help me understand the intercession of Christ as seen, for example, in Romans 8:34 and Hebrews 7:25,” the Bible Assistant returns an answer about four paragraphs long that quotes only four resources, even though I had selected my “Systematic Theology” collection, which contains 35 books.
Quick, brief responses are good, but a more detailed and in-depth answer would be even better.
In “Deep Research” mode, Gemini usually takes about five minutes to process a query and returns a response that can run 10–15 pages. If something like that were implemented in Logos, it would be absolutely amazing. The truth is that, although we want to restrict our research to our Logos resources, the Study Assistant’s responses are often too superficial, so we end up using general AI tools instead.
So what is the proposal? That Logos offer an AI capable of performing in-depth research within our own resources, analyzing multiple books and producing an answer that truly engages with them. In the example above, it would “read” all my systematic theology books and show what each one contributes to the topic, highlighting the nuances among them.