BUG: Inconsistent tagging in NASB95

JBR
JBR Member Posts: 211 ✭✭
edited November 21 in English Forum

I ran the rhetorical question search that was in a recent blog (https://www.logos.com/grow/how-to-find-rhetorical-questions-in-the-bible/). In the first attempt (NASB95: Paragraph Version) the Luke 1:43 passage was not included. As this was different from what MP had shown in the blog, I ran it again using NASB95: 1995 Update, which was the same version MP used. Both show 56 results. The first is missing Luke 1:43, while the second is missing Luke 13:4.

For God and For Neighbor

Comments

  • Dave Hooton
    Dave Hooton MVP Posts: 35,682

    I ran the rhetorical question search that was in a recent blog (https://www.logos.com/grow/how-to-find-rhetorical-questions-in-the-bible/). In the first attempt (NASB95: Paragraph Version) the Luke 1:43 passage was not included. As this was different from what MP had shown in the blog, I ran it again using NASB95: 1995 Update, which was the same version MP used. Both show 56 results. The first is missing Luke 1:43, while the second is missing Luke 13:4.

    Which text is actually WITHIN the other?  I tried reversing the terms:-

    {Section <Sentence ~ Interrogative>} WITHIN {Section <SpeechAct = Info: Assert>} and got many more results in Luke that included both passages.

    Note that {Section <SpeechAct = Info: Assert>} INTERSECTS {Section <Sentence ~ Interrogative>} gave the same results in Luke as INTERSECTS ensures there is some overlap between the two texts.
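    For anyone unfamiliar with the two operators, the distinction can be sketched as simple span logic. This is only an illustration of containment versus overlap; the spans and offsets below are invented and have nothing to do with Logos internals:

    ```python
    # Sketch: WITHIN requires full containment of one span inside another,
    # while INTERSECTS only requires some overlap. Spans are modeled as
    # (start, end) word offsets; all names and numbers are hypothetical.

    def within(inner, outer):
        # True only if `inner` lies entirely inside `outer`
        return outer[0] <= inner[0] and inner[1] <= outer[1]

    def intersects(a, b):
        # True if the two spans share at least one word; symmetric
        return a[0] < b[1] and b[0] < a[1]

    question = (10, 15)   # a short interrogative sentence
    speech_act = (5, 40)  # a larger Assert speech-act section

    print(within(question, speech_act))      # True: small span inside large
    print(within(speech_act, question))      # False: reversed terms fail
    print(intersects(speech_act, question))  # True: overlap either way
    ```

    That is why the order of the terms matters for WITHIN but not for INTERSECTS.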

    A more direct approach is to use {Label Question WHERE Rhetorical=true} and you will find that Lk 1.43 is classed as rhetorical whilst Lk 13.4 is not (found with Rhetorical=false).
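    The WHERE clause behaves like a plain attribute filter over the labeled questions. A rough Python analogue, with invented data (the real dataset is far larger and the field names here are hypothetical):

    ```python
    # Sketch of {Label Question WHERE Rhetorical=true} as an attribute
    # filter. The two entries below mirror the verses discussed in this
    # thread; the data structure is invented for illustration.

    questions = [
        {"ref": "Lk 1:43", "rhetorical": True},
        {"ref": "Lk 13:4", "rhetorical": False},
    ]

    rhetorical = [q["ref"] for q in questions if q["rhetorical"]]
    print(rhetorical)  # ['Lk 1:43']
    ```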

    "The Questions in the Bible dataset is an extension of the Sentence Types of the Old Testament, Speech Acts of the Old Testament, Sentence Types of the New Testament, and Speech Acts of the New Testament datasets. Given the importance of questions for theological reflection, this dataset attempts to further examine question types to provide users with more information about questions."
    Thompson, J. (2017). Questions in the Bible Dataset Documentation. Faithlife.

    I'm sure you will have your own opinion about rhetorical questions!

    Dave
    ===

    Windows 11 & Android 13

  • JBR
    JBR Member Posts: 211 ✭✭

    I wasn't trying to decide what was rhetorical and what was not in this case. I was simply observing that there was a difference between the two NASB95 editions, which are presumably the same except for the paragraphing. As it is also a simple example taken from the blog post, one that is restricted in its query and looks only in Luke, I have a suspicion that this may be an indication of tagging differences between the two NASB95 editions. I had expected them to be the same.

    For God and For Neighbor

  • Dave Hooton
    Dave Hooton MVP Posts: 35,682

    I was simply observing that there was a difference between the two NASB95 editions, which are presumably the same except for the paragraphing. As it is also a simple example taken from the blog post, one that is restricted in its query and looks only in Luke, I have a suspicion that this may be an indication of tagging differences between the two NASB95 editions. I had expected them to be the same.

    The two are tagged the same, but your query was incorrect: it is important to know which text is likely to be WITHIN the other! You were comparing a larger body of text (Speech Acts) covering all of Lk 1 and Lk 13 to a smaller body covering several verses of both chapters.

    Reversing the terms (smaller WITHIN larger) will give you correct results:-

    {Section <Sentence ~ Interrogative>} WITHIN {Section <SpeechAct = Info: Assert>}

    I also noted that INTERSECTS will detect any overlap between the two texts.

    Dave
    ===

    Windows 11 & Android 13

  • Dave Hooton
    Dave Hooton MVP Posts: 35,682

    The two are tagged the same

    On a more technical note, the tagging is done against one Greek bible (SBLGNT) and one Hebrew bible (LHB) and is transferred to a bible translation via its reverse interlinear (matching English to Greek and English to Hebrew). If you notice the odd word(s) that is not highlighted, it is because the word is not included in the original Greek or Hebrew.
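    The transfer step can be pictured roughly as follows. Everything here is invented for illustration (the words, the tag, and the alignment table); the real reverse interlinear is far richer, but the principle is the same: a tag only reaches an English word that is aligned to a tagged original-language word.

    ```python
    # Sketch: tags live on the Greek text and reach the English text only
    # through a reverse-interlinear alignment. An English word supplied by
    # the translators (aligned to nothing) receives no tag and is not
    # highlighted. Data is hypothetical.

    greek_tags = {"tis": "Interrogative", "eimi": "Interrogative",
                  "touto": "Interrogative"}

    # English word -> Greek word it aligns to (None = supplied in translation)
    alignment = {"why": "tis", "is": "eimi", "this": "touto",
                 "granted": None}

    english_tags = {en: (greek_tags.get(gk) if gk else None)
                    for en, gk in alignment.items()}

    print(english_tags["why"])      # 'Interrogative': tag transferred
    print(english_tags["granted"])  # None: no Greek counterpart, untagged
    ```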

    Dave
    ===

    Windows 11 & Android 13

  • JBR
    JBR Member Posts: 211 ✭✭

    Thanks for the responses. I understand better now. As I had taken the original query from the blog I hadn't stopped to consider whether it was constructed correctly; I just assumed it was. Now with swapping the terms I get significantly more results, although there are still differences in the two editions. I still think there is a difference that should not be there. You say, 

    The two are tagged the same

    On a more technical note, the tagging is done against one Greek bible (SBLGNT) and one Hebrew bible (LHB) and is transferred to a bible translation via its reverse interlinear (matching English to Greek and English to Hebrew). If you notice the odd word(s) that is not highlighted, it is because the word is not included in the original Greek or Hebrew.

    I don't understand why there should be a word difference between these two editions that presumably only differ in the paragraphing and not the text.

    For God and For Neighbor

  • JBR
    JBR Member Posts: 211 ✭✭

    Here's an update to the query with the additional results due to search order. You'll notice that there is an extra result for Luke 1:43 in the NASB95 Update edition as compared to the NASB95 Paragraph Version edition. That difference occurs where there is a Greek word that is not assigned to a particular English word. There are three other cases in Luke for this query where a Greek word is not assigned to a particular English word. Two of those cases are common to both editions of NASB95 with respect to this query. The third case shows up in the list for NASB95 Paragraph Version but not NASB95 Update, and so the total numbers of results and verses for both editions are identical.

    When I take a look at Luke 1:43 with the interlinear pane, the difference is that the Greek word that is not assigned to an English word follows "has it happened" in one edition and is placed between "has it" and "happened" in the other edition.

    In this particular text the difference then is that the search counts 2 results for this verse in one edition and 3 results for this same verse in the other edition. 
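    One way to picture why the placement matters: if the search counts contiguous runs of tagged English words, an unassigned Greek word that lands between "has it" and "happened" splits one run into two. This is only my sketch of the counting behavior, not Logos's actual implementation, and the token streams are invented:

    ```python
    # Sketch: counting maximal runs of tagged (True) tokens. An unaligned
    # Greek word placed between two English words introduces an untagged
    # gap (False) and splits one run into two, changing the per-verse count.

    def count_runs(tokens):
        # Count maximal runs of consecutive True values
        runs, in_run = 0, False
        for tagged in tokens:
            if tagged and not in_run:
                runs += 1
            in_run = tagged
        return runs

    # Edition A: the untranslated word falls after "has it happened"
    edition_a = [True, True, True]          # one unbroken run
    # Edition B: the gap lands between "has it" and "happened"
    edition_b = [True, True, False, True]   # run split in two

    print(count_runs(edition_a))  # 1
    print(count_runs(edition_b))  # 2
    ```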

    For God and For Neighbor

  • Whyndell Grizzard
    Whyndell Grizzard Member Posts: 3,497 ✭✭✭

    Lol, inconsistent tagging everywhere in Logos, going on 5 years now. I have resources that average 25-30% of tagging left untouched. I no longer have confidence that the items I get will be tagged the way they were 10 yrs ago.

  • MJ. Smith
    MJ. Smith MVP Posts: 53,132

    When I take a look at Luke 1:43 with the interlinear pane, the difference is that the Greek word that is not assigned to an English word follows "has it happened" in one edition and is placed between "has it" and "happened" in the other edition.

    The positive side of this is it forces users to recognize that they are using tagging that is to some extent arbitrary. You can either mindlessly adopt the Faithlife reading(s) or you can actually study the text rather than the tagging.

    Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."

  • Dave Hooton
    Dave Hooton MVP Posts: 35,682

    You'll notice that there is an extra result for Luke 1:43 in the NASB95 Update edition as compared to the NASB95 Paragraph Version edition.

    You are correct that the Aligned view shows the different result counts for verse 43, and that is due to the way the reverse interlinear aligns the untranslated Greek word touto (the count is 1 for SBLGNT). But the important thing is that the verse is included in both sets and that both sets include the same 152 verses (not just the same number); i.e. the total results count (253) could have been different, but they still represent the same interrogative data.

    If you look at Lk 1.18, the result count is 1 despite the number of words added and inserted by the translation.
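    In other words, the right comparison is between the sets of verses found, not the raw result counts. A small sketch with invented numbers (these are not the actual per-verse counts):

    ```python
    # Sketch: two editions may report different per-verse result counts
    # while finding exactly the same verses. Comparing the verse sets,
    # rather than the totals, shows they represent the same data.
    # All counts below are hypothetical.

    paragraph_ed = {"Lk 1:18": 1, "Lk 1:43": 2, "Lk 13:4": 1}
    update_ed    = {"Lk 1:18": 1, "Lk 1:43": 3, "Lk 13:4": 1}

    same_verses = set(paragraph_ed) == set(update_ed)
    same_counts = paragraph_ed == update_ed

    print(same_verses)  # True: both editions find the same verses
    print(same_counts)  # False: per-verse counts differ (Lk 1:43)
    ```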

    Whilst it may have been instructive for query formulation and understanding the basis of tagging for this type of data, it certainly is not the most reliable way to find rhetorical questions.

    Dave
    ===

    Windows 11 & Android 13

  • Dave Hooton
    Dave Hooton MVP Posts: 35,682

    When I take a look at Luke 1:43 with the interlinear pane, the difference is that the Greek word that is not assigned to an English word follows "has it happened" in one edition and is placed between "has it" and "happened" in the other edition.

    The positive side of this is it forces users to recognize that they are using tagging that is to some extent arbitrary. You can either mindlessly adopt the Faithlife reading(s) or you can actually study the text rather than the tagging.

    JBR is analyzing the different result count for one verse in different translations that all depend on the same tagging but are affected by the alignment of English words to Greek words in the reverse interlinear.

    Dave
    ===

    Windows 11 & Android 13

  • Dave Hooton
    Dave Hooton MVP Posts: 35,682

    Lol, inconsistent tagging everywhere in Logos, going on 5 years now. I have resources that average 25-30% of tagging left untouched. I no longer have confidence that the items I get will be tagged the way they were 10 yrs ago.

    Whyndell

    As I stated above, the differences between the two NASB translations are not due to the "tagging". One original language bible is tagged, and the results are only available to a translation (ESV, NASB, etc.) that has a reverse interlinear to match original language words to the words in the translation. The differences here are due to the reverse interlinears (and the Search), BUT verse 43 still reflects the tagging for Interrogative phrases.

    Dave
    ===

    Windows 11 & Android 13

  • MJ. Smith
    MJ. Smith MVP Posts: 53,132

    JBR is analyzing the different result count for one verse in different translations that all depend on the same tagging but are affected by the alignment of English words to Greek words in the reverse interlinear.

    I know. I apparently use tagging in a broader sense than you do ... Pacific Northwest Coast Americanese considers alignment a subcategory of tagging.

    Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."

  • JBR
    JBR Member Posts: 211 ✭✭

    One comment on this thread regarding the rhetorical aspect, for those who have followed it or may come to it later. After reviewing the results of the search for the rhetorical aspect, this recommendation

    A more direct approach is to use {Label Question WHERE Rhetorical=true} and you will find that Lk 1.43 is classed as rhetorical whilst Lk 13.4 is not (found with Rhetorical=false).

    is the one to use. In fact, the original search described in the blog produces fewer results in total, and of the results it does produce, only about half are what the recommended search categorizes as rhetorical.

    For God and For Neighbor