Good to check for accuracy

Veli Voipio
Veli Voipio MVP Posts: 2,103

I ran this kind of AI search, and the result is not accurate, partly because of the source book.

[screenshot of the AI search result]

Gold package, plus original language and ancient text material, SIL and UBS books, Discourse Hebrew OT and Greek NT. PC with Windows 11

Comments

  • DMB
    DMB Member Posts: 14,656 ✭✭✭✭✭
    edited May 29

    A good example of guesses piled on guesses. That's why I use my 'major guessing' visual filter (which the AI takes no account of).

    Actually, the AI was accurate, in the sense that it quotes the source (without quotation marks) and then points to it?

    "If myth is ideology in narrative form, then scholarship is myth with footnotes." B. Lincolm 1999.

  • Joseph Sollenberger
    Joseph Sollenberger Member Posts: 128 ✭✭✭
    edited May 29

    I am an old grumpy fellow, so you are forewarned. Back in the '70s we would yell GIGO (Garbage In, Garbage Out) at this kind of situation. In our programming we knew that the machine simply followed the instructions we gave it as it processed the input data stream. The output stream was only as reliable as the validity of the input stream and the correctness of the manipulations applied during processing, and a few times the output processing itself mangled what was fed to that final step.

    The weak link in this process is quite often the content of the input stream, and that brings us to the critical issue, as your example demonstrates. Large Language Models inhale a language stream and then process it; there is no evaluation of that stream. GIGO. And people are now discovering the concept of feedback loops.

    As false input data is processed and creates false output, this false output stream may become an input stream for new LLM training. We all know what happens in a sound system when a portion of the speaker output is picked up by the microphone and fed back through the amplifier: the feedback quickly produces a totally distorted sound that carries no comprehensible message. Feeding LLM output containing false information into the training of new LLM systems amplifies the erroneous data in the same way.

    This amplification is already happening, and I fear it will spread like a contagion. Extreme care is needed in LLM development, or we will simply be surrounded by noise.

    Yes, I know that all written works may contain errors, but I fear that error amplification via LLM feedback processing will make AI hallucinations normative.
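
    To make the feedback-loop worry concrete, here is a rough toy simulation. It is purely illustrative: the error rates, the corpus-mixing model, and the assumption that a model simply inherits its corpus's error rate plus a little extra hallucination are all invented, not measurements of any real LLM.

        def model_error_after_training(corpus_error_rate, hallucination_rate=0.02):
            # Toy assumption: a model reproduces the error rate of its
            # training corpus and adds a small amount of new hallucination.
            return min(1.0, corpus_error_rate + hallucination_rate)

        def next_corpus_error(human_error, model_error, synthetic_fraction):
            # The next training corpus mixes fresh human text with model output.
            return (1 - synthetic_fraction) * human_error + synthetic_fraction * model_error

        human_error = 0.01        # assumed error rate of human-written text
        corpus_error = human_error
        synthetic_fraction = 0.5  # assumed share of model output in future corpora

        for generation in range(1, 11):
            model_error = model_error_after_training(corpus_error)
            corpus_error = next_corpus_error(human_error, model_error, synthetic_fraction)
            print(f"generation {generation}: model error rate ~ {model_error:.3f}")

    With fresh human text still anchoring half of each new corpus, the model's error rate roughly quintuples from the 1% human baseline and then levels off near 5%; push synthetic_fraction toward 1.0 and it keeps climbing toward 100%, which is the microphone-in-front-of-the-speaker case.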

    Edited for spelling and grammar. ;)

    Joseph F. Sollenberger, Jr.

  • DMB
    DMB Member Posts: 14,656 ✭✭✭✭✭
    edited May 29

    Good point. But additionally, the LLMs don't expose an error function to the user (even though one is part of training). My neural nets (Hebrew/Greek) do give me a two-tailed reliability estimate (a warning: memorizing vs. guessing).
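
    My own setup aside, as a generic illustration of what exposing an error signal to the user could look like, here is a minimal sketch that turns a model's output probabilities into a rough confidence label. The thresholds and labels are invented, and this entropy-based stand-in is not the two-tailed measure mentioned above.

        import math

        def confidence_report(probs):
            # probs: the model's probability for each candidate answer.
            # Normalized entropy is 0 when the model is certain and 1 when
            # it is guessing uniformly; report it alongside the answer
            # instead of hiding it.
            entropy = -sum(p * math.log(p) for p in probs if p > 0)
            max_entropy = math.log(len(probs))
            uncertainty = entropy / max_entropy if max_entropy > 0 else 0.0
            label = ("likely memorized" if uncertainty < 0.3
                     else "plausible guess" if uncertainty < 0.7
                     else "mostly guessing")
            return round(uncertainty, 3), label

        print(confidence_report([0.97, 0.01, 0.01, 0.01]))  # confident answer
        print(confidence_report([0.3, 0.3, 0.2, 0.2]))      # near-uniform guessing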

    "If myth is ideology in narrative form, then scholarship is myth with footnotes." B. Lincolm 1999.

  • Joseph Sollenberger
    Joseph Sollenberger Member Posts: 128 ✭✭✭
    edited May 29

    @DMB, an error function should be required on all LLM output. One would never submit a physical chemistry laboratory result without a full propagation-of-error calculation, or at least a statistical analysis when exact propagation of error was not possible. A numerical result was never just a single bare number, which makes me relish those days of always thinking in ranges instead of a single datum. Well, I still think of things in probabilistic and correlational terms instead of absolutes, which frustrates the folk I interact with who want a single simple answer.
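
    For anyone who never had to do this, here is what a minimal propagation-of-error calculation looks like for a derived lab quantity. The numbers are invented, and the quadrature formula assumes the measurement uncertainties are independent.

        import math

        # Toy example: density from mass and volume, rho = m / V.
        # For independent uncertainties, relative errors add in quadrature:
        #   (d_rho / rho)^2 = (d_m / m)^2 + (d_V / V)^2
        m, d_m = 12.47, 0.02   # grams: invented measurement and uncertainty
        V, d_V = 4.60, 0.05    # millilitres: invented measurement and uncertainty

        rho = m / V
        d_rho = rho * math.sqrt((d_m / m) ** 2 + (d_V / V) ** 2)

        # Report a range, never a bare number.
        print(f"rho = {rho:.3f} +/- {d_rho:.3f} g/mL")

    Reporting 2.711 +/- 0.030 g/mL rather than a bare 2.711 is the habit described above; the point is that an LLM answer could, in principle, carry an analogous uncertainty range.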

    Edited for clarity.

    Joseph F. Sollenberger, Jr.

  • Kristin
    Kristin Member Posts: 835 ✭✭✭

    Hi @Joseph Sollenberger, I might be wrong, but I think you wanted to tag @DMB. :)

  • Joseph Sollenberger
    Joseph Sollenberger Member Posts: 128 ✭✭✭

    @Kristin, thanks! Corrected.

    Joseph F. Sollenberger, Jr.