Highlighting in summaries

Milkman
Milkman Member Posts: 4,880 ✭✭✭
edited November 2024 in English Forum

Is there a way to highlight in Summaries? Or do any kind of markups? Underlines, notes, colours etc.

mm.

Comments

  • Simon’s Brother
    Simon’s Brother Member Posts: 6,822 ✭✭✭

    Milkman said:

    Is there a way to highlight in Summaries? Or do any kind of markups? Underlines, notes, colours etc.

    mm.

    Are you referring to the sidebar summaries or the search results summaries? I am not aware of the ability to do this with either type of summary. Search summaries have no right-click menu, while sidebar summaries have a very limited right-click menu that does not offer these options.

    The only thing I know you can do is copy them and then paste into notes or some other program.

  • Justin Gatlin
    Justin Gatlin Member, MVP Posts: 2,217

    They aren't persistent, are they? It was my understanding that the summaries are generated fresh each time and are not saved anywhere. There isn't really anything to highlight unless you save them to a note.

  • Antony Brennan
    Antony Brennan Member Posts: 836 ✭✭✭

    Justin Gatlin said:

    They aren't persistent, are they? It was my understanding that the summaries are generated fresh each time and are not saved anywhere. There isn't really anything to highlight unless you save them to a note.

    That is my understanding. They are generated each time. 

    👁️ 👁️

  • Milkman
    Milkman Member Posts: 4,880 ✭✭✭

    So, what you're saying is that each time you summarize you'll get a different/new/updated... synopsis? Does that happen for the same part of a book that you just summarized? Essentially, then, one could get several (more than one) summaries for the same text that you just read. Hmmm, interesting.

    Justin Gatlin said:

    They aren't persistent, are they? It was my understanding that the summaries are generated fresh each time and are not saved anywhere. There isn't really anything to highlight unless you save them to a note.

  • MJ. Smith
    MJ. Smith MVP Posts: 54,899

    Milkman said:

    Essentially then, one could get several (more than one) summaries for the same text that you just read.

    Right - think of it as a classroom of 150 students: you'd get 150 different summaries. AI doesn't use a fixed algorithm to generate a consistent summary; it uses AI/NLP (natural language processing) to "bounce" through the data.

    Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."

  • Milkman
    Milkman Member Posts: 4,880 ✭✭✭

    👍 👍

    MJ. Smith said:

    Milkman said:

    Essentially then, one could get several (more than one) summaries for the same text that you just read.

    Right - think of it as a classroom of 150 students: you'd get 150 different summaries. AI doesn't use a fixed algorithm to generate a consistent summary; it uses AI/NLP (natural language processing) to "bounce" through the data.

  • Justin Gatlin
    Justin Gatlin Member, MVP Posts: 2,217

    Milkman said:

    So, what you're saying, is that each time you summarize you'll be getting a different/new/updated... synopsis? Does that occur for the same part of a book that you just summarized? Essentially then, one could get several (more than one) summaries for the same text that you just read. hmmm, interesting.

    You might get the same result if Logos uses the same model and parameters, but at the very least, if you close the program and open it again, Logos generates the summary again. It isn't saved anywhere. They have also talked about changing the model that works in the background, which would change the results. You are just trying to use the summary tool in a way different from how it was intended.
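    The "same model and parameters" point can be sketched in a few lines of Python. This is a toy stand-in for illustration only, not Logos's actual code: a summarizer that samples with a fixed seed is reproducible, while an unseeded one can phrase things differently on every run.

```python
import random

def toy_summarize(passage, seed=None):
    """Toy stand-in for an AI summarizer (NOT how Logos works).

    Sampling drives the wording: with a fixed seed (think: identical
    model + parameters + random state) the output repeats exactly;
    without one, each call may phrase the same content differently.
    """
    rng = random.Random(seed)
    opener = rng.choice(["In short,", "Briefly,", "To sum up,"])
    first_sentence = passage.split(".")[0].strip()
    return f"{opener} {first_sentence}."

# Fixed seed -> identical "summary" every time; no seed -> it can vary.
```

    Since Logos does not expose (or promise to fix) the model, parameters, or random state, there is nothing guaranteeing that two summaries of the same passage will match.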

  • Milkman
    Milkman Member Posts: 4,880 ✭✭✭

    Justin Gatlin said:

    Milkman said:

    So, what you're saying, is that each time you summarize you'll be getting a different/new/updated... synopsis? Does that occur for the same part of a book that you just summarized? Essentially then, one could get several (more than one) summaries for the same text that you just read. hmmm, interesting.

    You might get the same result if Logos uses the same model and parameters, but at the very least, if you close the program and open it again, Logos generates the summary again. It isn't saved anywhere. They have also talked about changing the model that works in the background, which would change the results. You are just trying to use the summary tool in a way different from how it was intended.

    So if "changing the model...would change the results," would that mean a different summary for the same article? And if so, how many times could a summary be used before it becomes redundant?

    You know, I get it. A work in progress, and I'm absolutely keen on Logos doing what they do best - being ahead of the Bible software world. Don't get me wrong - I love (not the greatest word to use) what they have offered over the decades. I'm a die-hard.

  • MJ. Smith
    MJ. Smith MVP Posts: 54,899

    Hmmm, somehow I don't think saying "consider the transformer process (the innovative portion of the current AI fad) mixed with a probabilistic neural network (think of an LLM as a gigantic non-traditional neural network), plus the variable results due to the order of computation" would be very helpful. So I asked Perplexity to explain. Here it is, straight from the electronic horse's mouth ... and yes, I know enough to know this is not a hallucination.

    The slight variations in summaries I provide for the same passage can be attributed to several factors inherent in the way large language models like myself operate:

    1. Stochastic nature: My responses involve a degree of randomness in the token generation process. This stochasticity is intentional and helps create more diverse and natural-sounding outputs.
    2. Temperature setting: The "temperature" parameter in the model affects how deterministic or random the output is. A higher temperature leads to more varied responses.
    3. Context sensitivity: The exact phrasing of the query and any preceding conversation can subtly influence the focus and wording of the summary.
    4. Sampling method: The specific method used for sampling tokens (e.g., top-k, nucleus sampling) can affect the variability of the output.
    5. Non-deterministic hardware operations: Some hardware used for AI computations may introduce slight variations due to parallel processing.

    Where differences typically occur:

    1. Word choice: Synonyms or alternative phrasings might be used to express the same concept.
    2. Sentence structure: The order of presenting information might vary slightly.
    3. Level of detail: Some summaries might include or omit certain minor details.
    4. Focus: Different aspects of the passage might be emphasized in different summaries.
    5. Length: The overall length of the summary might vary slightly.

    These variations are generally minor and shouldn't affect the core meaning or key points of the summary. They reflect the model's ability to generate human-like text with natural variations, rather than always producing identical, robotic responses.
