Is there a way to highlight in Summaries? Or do any kind of markups? Underlines, notes, colours etc.
mm.
Are you referring to the sidebar summaries or the search results summaries? I am not aware of the ability to do this with either type of summary. Search summaries have no right-click menu, while sidebar summaries have a very limited right-click menu that does not offer these options.
The only thing I know you can do is copy them and then paste into notes or some other program.
The sidebar summaries. Just a thought.
They aren't persistent, are they? It was my understanding that the summaries are generated fresh each time and are not saved anywhere. There isn't really anything to highlight unless you save them to a note.
That is my understanding. They are generated each time.
So, what you're saying, is that each time you summarize you'll be getting a different/new/updated... synopsis? Does that occur for the same part of a book that you just summarized? Essentially then, one could get several (more than one) summaries for the same text that you just read. hmmm, interesting.
Essentially then, one could get several (more than one) summaries for the same text that you just read.
Right - think of it as a classroom of 150 students: you'd get 150 different summaries. AI doesn't use a fixed algorithm to generate a consistent summary; it uses AI/NLP (natural language processing) to "bounce" through the data.
You might get the same result if Logos uses the same model and parameters, but at the very least, if you close the program and open it again, Logos generates the summary again. It isn't saved anywhere. They have also talked about changing the model that works in the background, which would change the results. You are just trying to use the summary tool in a way different from how it was intended.
So if "changing the model...would change the results," would that mean a different summary for the same article? And if so, how many times would a summary be used before it becomes redundant?
You know, I get it. A work in progress, and I'm absolutely keen on Logos doing what they do best - staying ahead of the Bible software world. Don't get me wrong - I love (not the greatest word to use) what they have offered over the decades. I'm a die-hard.
Hmmm, somehow I don't think saying "consider the transformer process (the innovative portion of the current AI fad) mixed with a probabilistic neural network (think of an LLM as a gigantic non-traditional neural network), plus the variable results due to ordering in the computation logic" would be very helpful. So I asked Perplexity to explain. Here it is straight from the electronic horse's mouth ... and yes, I know enough to know this is not a hallucination.
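The "probabilistic" part can be sketched in a few lines of Python. This is a toy illustration of temperature-based sampling in general, not anything from Logos: the model scores candidate next words, the scores become a probability distribution, and the app samples from that distribution rather than always taking the top choice - so two runs over the same input can differ. The word list and scores below are made up for the example.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution.
    Higher temperature flattens the distribution -> more variety."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word scores after "The passage emphasizes ..."
words = ["faith", "grace", "obedience", "hope"]
logits = [2.1, 1.9, 1.2, 0.8]

def pick(temperature, rng):
    """Sample one word according to the temperature-scaled probabilities."""
    probs = softmax(logits, temperature)
    return rng.choices(words, weights=probs, k=1)[0]

rng = random.Random()
# Two runs over the same input can produce different word choices:
print([pick(1.0, rng) for _ in range(5)])
print([pick(1.0, rng) for _ in range(5)])
# Greedy decoding (always taking the highest-scoring word) would
# pick "faith" every time and be fully repeatable.
```

Multiply that small uncertainty over every word in a summary and you get a different synopsis on each run, even with the same model and the same passage.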
[quote]The slight variations in summaries I provide for the same passage can be attributed to several factors inherent in the way large language models like myself operate:
Where differences typically occur:
These variations are generally minor and shouldn't affect the core meaning or key points of the summary. They reflect the model's ability to generate human-like text with natural variations, rather than always producing identical, robotic responses.[/quote]