I must say that I did not expect this.
I had that happen to me one time, when I was testing whether it would take my print library books into account. A 'general knowledge' answer is useless to me if I can't link it back to a source.
Displaying your query would have helped in formulating an alternative or a follow-up, but specifying "Your books" may have restricted the response. With "All books" and "Can there be third person self references in a nominative absolute construction?" I got three references.
@Dave Hooton thank you. I was not looking for an alternative. I was simply reporting that the AI can behave as reported on given occasions. The feature is advertised as pulling from resources you can trust, namely your books or all books. I have had it say that it cannot find results and suggest a more precise search, and that is just fine. However, crediting itself with a proposed answer is not. I actually recognize some of the material in the answer from my Greek grammars, so it is not even accurate for the AI to label it as its "own general knowledge." It's not a huge deal, as I encountered this only once, but it seems like a blind spot that should be fixed. Given Logos' premise regarding the use of AI, it should want to avoid at all costs newbies taking statements like this at face value without sources.
If it helps diagnose what triggers the behavior, here are the questions, whose purpose was to find sources to substantiate a report I was giving about an erroneous statement I had read (that the nominative absolute used in greetings is a self-designation in the third person). First, I made a general inquiry simply worded "nominative in greetings," and the answer was useful, as expected for such a general query. Then I followed up with "In such constructions (nominative absolute) is it correct or incorrect to speak of self-reference in the third person?" expecting the answer to be that it is indeed incorrect, with SOURCES (I actually use AI to find sources, not for its answers), and I got the answer I posted earlier. I have my answers already. As I said, this was simply to report that the AI can present answers as its own knowledge, and I don't think it should.
Generally, on academic platforms, the use of general knowledge means that something occurs in at least five different resources without a reference, i.e., it is common knowledge. I suspect that the AI has a similar sense of something being so common as to not require citing sources. That is no reflection on reliability.
@Francis
I was able to reproduce a response with "no sources" but it was more helpful than the response you received. But I agree that "general knowledge" is not adequate for me to trust (having little knowledge of grammar). So I followed up with "Can you provide an example of self references in a nominative absolute construction?" and it returned some examples with three sources from "My" books (Rev 1:1, 1 Pe 1:1, "Paul, an Apostle").