Let me state at the outset that I am not trying to advertise Bibleworks or Accordance, or to enter into a debate about which software is best. Please do not hijack this thread with that conversation.
AS A LOGOS USER, here is the problem:
1. While doing PhD work, we searched a Hebrew root and got a count. Then, reading on that root in a lexical resource, we found a different count. After toying with different types of searches, we finally found the missing entries (as I recall, they were cognate participles).
2. The word is out in the academic community that Accordance and Bibleworks are more reliable when it comes to language work. As I heard it from an eminent professor, different Logos teams sometimes use different criteria when tagging language resources.
What I am looking for:
Feedback from users who are REALLY COGNIZANT of these issues. Are you aware of discrepancies when comparing what these different programs produce? Do you know -- for a fact (please, no guesswork) -- what accounts for those discrepancies (or possibly, simply different approaches)? What does this mean for scholarly reliance on Logos language resources (esp. morphologically tagged primary texts and their associated functionality)?
The purpose: if there is a problem with Logos tagging, what specific feedback can we give so that Logos can offer a better product for language work in particular?
What would be counter-productive: wasting everyone's time with a "defend Logos" campaign. This is not about criticizing Logos or coming to its defense. It's about asking whether there is a weakness that can be addressed to make Logos a better academic product.