ChatGPT INTEGRATION PLEASE🔥🔥🔥
Comments
-
Adam said:
how to integrate ChatGPT into the logos bible software
Welcome to the forums, Adam. Integrating ChatGPT and training ChatGPT are things that any IT specialist who understands AI systems can do. That is not where the "uniqueness" of ChatGPT lies. I think many Logos users would appreciate having a more conversational interface to our resources, which is what ChatGPT provides. But it is not designed to be customized at the user level, meaning what we have in our library would not determine the answers we see. We couldn't hide books we think are garbage - if they were included in the training, their viewpoint would still be represented. So to integrate:
Step 1: Get the Logos users to agree on a very large corpus of books that are deemed to be accurate.
Step 2: Train ChatGPT on that corpus
Step 3: Get all Logos users to agree that answers from ChatGPT to questions such as "What does the Bible say about the necessity of baptism?" are correct.
Step 4: Realize that ain't gonna happen. Limit ChatGPT to questions such as "What does Ephrem the Syrian say about the necessity of baptism?"
Step 5: Realize much of the agreed upon corpus is not discursive. Limit ChatGPT to questions such as "What does a 5 point Calvinist say about the necessity of baptism?" and "What does a 4 point Calvinist say about the necessity of baptism?" and "What does a 3 point Calvinist say about the necessity of baptism?" and "What does a 2 point Calvinist say about the necessity of baptism?" and "What does a 1 point Calvinist say about the necessity of baptism?"
Step 6: Realize that what users chiefly want is access to raw data, information that confirms their current beliefs, and material based on the logic of belief revision to convince others that their current beliefs are correct. Yes, there is also a subset of users who are interested in the array of positions taken on issues (from grammar to theology), who promoted them, when they were prevalent, and the available written records of those positions. Ask oneself two questions: 1. What kind of AI fits best with what the users actually want? 2. What is it about ChatGPT that makes users think it is the solution?
Step 7: Recognize that it is best to seek out the best solution for the problems that exist rather than seeking the best problem for a preselected solution. I suspect, though I have no intention of doing the survey and statistical analysis to prove it, that people are impressed by ChatGPT's conversational tone and its memory of past questions in the conversation. Put those elements in the specifications for providing users with the data they want. Then go out and research what flavor of AI best suits the problem. Given the differences in the appropriate logics used, I suspect the answer is not ChatGPT.
Step 8: Recognize that Faithlife has already applied artificial intelligence and training to develop some of the tools that are already available to us. Look for the key wording "train" in the documentation as opposed to "curate". Recognize that the surveys run for L9, before the specs for L10 were developed, did not put (re)development of Faithlife Assistant (conversational access to data) high on the list. If you think it should have been, vote for it and push for it, but trust FL professionals to select the appropriate tools for the issue and within their budget.
Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."
0 -
MJ. Smith said:Adam said:
how to integrate ChatGPT into the logos bible software
Step 1: Get the Logos users to agree on a very large corpus of books that are deemed to be accurate.
Step 2: Train ChatGPT on that corpus ...
I'm a thinking that's a tall, tall glass of water! [8-|]
xn = Christan man=man -- Acts 11:26 "....and the disciples were first called Christians in Antioch".
Barney Fife is my hero! He only uses an abacus with 14 rows!
0 -
MJ. Smith said:
To get back on topic, while I agree that many parts of Logos/Verbum are already infused with AI, and that as the research progresses AI will eventually have a larger role in Logos/Verbum, I am generally unimpressed with the current buzz for ChatGPT. The software has made significant progress in two areas: the naturalness of the language of its responses and its ability to remember previous portions of a conversation and admit its errors. But it has not made any significant progress in the accuracy of its responses, i.e. it hasn't solved the Wikipedia problem where corrections for accuracy get overridden by popular misunderstandings. [It was an economics professor who showed me a series of examples in his field.] ChatGPT also drives me nuts by padding the front of its answer with general information I had to have already known to ask the question ... then abbreviating the actual answer for reasons of space ... getting complete lists out of it is an exercise in perseverance which usually ends in my giving up before I've actually gotten a correct answer. I would also suggest that the lack of support for enhancing the Logos Assistant feature - a limited prototype - indicates that the community at large is not pushing for a conversational interface. This lack of support actually baffles me, but then I am of an older generation.
I would like to see Faithlife concentrate their efforts on the boring stuff:
- standardizing the interface e.g. learn one interface for parallel passages that applies across all types of parallel passages
- all documentation in a standardized format designed for its use as pop-up help rather than searching for help within an interactive, in help, in a manual (or glossary) ...
- continued work on squashing bugs at least until they are fixed faster than they are identified
- enhancement/completion of existing features e.g. Concordance tool for multiple word clusters, Bible sense lexicon for additional relationships, better handling of liturgical dates, sermon labeling, outline labeling . . .
- turn the Logos Assistant into a solid base for expansion
- parity across platforms
But if others agree, they must show it in their priorities in surveys, in feedback, and in online observation sessions ...
Yeah, I agree; there definitely are some challenges with getting lists out of ChatGPT 3. That's mainly because it's the free version and it's restricted in the number of characters per response. With the paid version you get the newest version, GPT-4, which is tremendously superior.
Also, in terms of the dilemma of ChatGPT getting information from Wikipedia ... this is exactly why it needs to be infused into Logos/Verbum. The sources of GPT's information are subpar and at times unreliable (since it pulls from the internet and it is very limited in the amount of high-quality content it has access to); HOWEVER, if we can have it strictly use the content of our library as the basis for its responses - as its intelligence, per se - then THAT would be an amazing development and unparalleled game changer for study and education!
0 -
MJ. Smith said:
The "proposed fabricated quote" ... I am unconvinced it is anything other than a post aging out of the web.
Sorry I wasn't clear. I was referring to the quote from Swete. As for the NYT example, you may be right but I think it is fabricated primarily because you can search the NYT online archive and nothing is there either (though I may have missed it). It's possible that it was also scrubbed from the archive, but that would be unlikely for what appears to be a very mundane report.
If it is fabricated, I wouldn't find it surprising or even reflecting all that negatively upon it in light of the basic logic of what an LLM is doing (though I may be misinformed here too). It seems like just another example of asking it which weighs more: a pound of feathers or two pounds of bricks. It's predicting what sort of "text" (or tokens) would follow the prompt in light of its training data. Because the sequence of tokens "which weighs more: a pound of feathers or one pound of bricks?" occurs most frequently in the context of other tokens that carry the meaning "they weigh the same", it shouldn't surprise anyone that asking it "which weighs more: a pound of feathers or two pounds of bricks?" will trick it into giving an incorrect and possibly even incoherent answer. (The cosine similarity between the two sentences when we embed them using OpenAI's embeddings endpoint is 0.99.)
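If anyone wants to reproduce that check, it only takes a few lines. Here is a rough sketch, assuming the pre-1.0 openai Python package and the text-embedding-ada-002 model:

```python
# A rough sketch of the embedding comparison mentioned above, assuming the
# pre-1.0 openai Python package and the text-embedding-ada-002 model.
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

sentences = [
    "Which weighs more: a pound of feathers or one pound of bricks?",
    "Which weighs more: a pound of feathers or two pounds of bricks?",
]

resp = openai.Embedding.create(model="text-embedding-ada-002", input=sentences)
a, b = (np.array(item["embedding"]) for item in resp["data"])

# Cosine similarity = dot product of the two vectors divided by their norms.
similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {similarity:.4f}")  # reportedly around 0.99
```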
In a sense, you could see that sort of phenomenon as a feature (rather than a bug) of the algorithms in play (though again, I may be off base here). Of course there is also training on top of that. With training, some of these mistakes can be corrected but probably not entirely eliminated. Aside from the knowledge problem, they aren't deterministic. For example, when I ran the NYT question with the suggested "engineering prompt" to prevent hallucination, I didn't get the incorrect answer. Taking away the prompt I got the fabricated paragraphs, but if you compare them with the example provided in the other thread you'll notice that it is slightly different (so at the very least we know it isn't quoting).
In brief, I think if we pay attention to the factuality warnings provided by these companies and try to understand how they work then we can find them more useful--even if only because we might have a better idea of what sort of prompts we would expect to return hallucinations and find a better way of asking the question.
Potato resting atop 2020 Mac Pro stand.
0 -
[Y]
Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."
0 -
MJ. Smith said:Adam said:
how to integrate ChatGPT into the logos bible software
Welcome to the forums, Adam. Integrating ChatGPT and training ChatGPT are things that any IT specialist who understands AI systems can do. ...
Step 1: Get the Logos users to agree on a very large corpus of books that are deemed to be accurate.
Step 2: Train ChatGPT on that corpus ...
I must admit I didn't think about that complication - the issue of interpretation. My mind was more geared towards objective research practicality, which GPT functions well for even outside of Logos or Verbum. So I guess I will concede that this is a hurdle that needs to be figured out. However, there does seem to be a fix for it using the program as it is right now. For instance, as a Catholic, I can restrict the sources that ChatGPT uses to provide the answers to my questions, meaning that GPT can only use the Bible, the Catechism, papal encyclicals, etc., and I can also set the persona of ChatGPT to be a Catholic theologian. So I would imagine that this can also be done for various Protestant users who wish to have their sources tailored to their particular religion so they don't receive skewed results.
In terms of implementation into Verbum or Logos, after going through some classes on GPT, it seems as though implementing it into Verbum/Logos may really not be as complicated as we think. If a company can use the ChatGPT API for its own chatbot on its site, with tailored company information used as its knowledge base, then it's really not far-fetched to have it on a personal level, tailored to what's in our libraries.
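Just to illustrate, here is roughly what such an API call might look like; the model name, persona wording, and source list are only my assumptions, not anything Faithlife has designed:

```python
# A minimal sketch of the kind of call described above: a system message that
# sets a persona and restricts the sources the model may draw on. The model
# name, persona wording, and source list are assumptions for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

system_prompt = (
    "You are a Catholic theologian. Answer only from the Bible, the Catechism "
    "of the Catholic Church, and papal encyclicals. If those sources do not "
    "address the question, say so rather than guessing."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What does the Church teach about the necessity of baptism?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Keep in mind that a system prompt like this only steers the model; it does not actually stop it from drawing on the rest of its training data.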
Quite frankly, we all know that this is the future. We can argue against it, but that's not going to make this AI breakthrough go away. This is massive and it's progressing very fast. The question really is not if but when Logos and Verbum will provide this technology to their customer base. If they move too slowly, another company may beat them to it.
0 -
Adam said:
I agree, I'm loving all these ideas as well! This would be such an enhancement - honestly beyond our imagination. I wonder... maybe we can ask ChatGPT how to integrate ChatGPT into the logos bible software to get some ideas to share and help out the company...
It most likely won't have been exposed to enough information about Logos to give any helpful (or factual) feedback. The plugin route, which is still in beta, seems like the path of least resistance. It could also leave the integration as something you only engage with outside of the Logos app, as some have requested.
Potato resting atop 2020 Mac Pro stand.
0 -
Adam said:
HOWEVER, if we can have it strictly use the content of our library as the basis for its responses - as its intelligence, per se - then THAT would be an amazing development and unparalleled game changer for study and education!
Because of the way Faithlife structures its packages, I have a great deal of garbage in my library. I mean "garbage" not in the sense that I disagree with it but in the sense that it is very poorly thought out - or, if it was thought out clearly, they've hidden it well. ChatGPT has not made any major breakthroughs in terms of evaluating the quality of, or even recognizing, conflicting information. And because I appreciate "both-and" religious logic and apophatic religious knowledge, I reject the idea that the logic/patterns applied by ChatGPT necessarily apply to religious data. Just label me a skeptic/cynic when it comes to AI for faith formation at this point. Others have opinions that are different than mine. That's no problem.
Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."
0 -
MJ. Smith said:
Just label me a skeptic/cynical when it comes to AI for faith formation at this point.
In one sense of what constitutes "faith formation" I could agree with you and then extend that to things like Google, social media, or the internet more broadly. (As a Protestant you might even say I would exclude Logos, technically speaking.) I just don't think it's safe to assume this is the way Logos users engage with the tools at their disposal.
If we find some different sense of what constitutes "faith formation" that makes it possible for things like the internet, Google, and... maybe... social media(?) to play a role then I don't see why we would exclude AI. Or at least not in principle.
Potato resting atop 2020 Mac Pro stand.
0 -
MJ. Smith said:Adam said:
HOWEVER, if we can have it strictly use the content of our library as the basis for its responses - as its intelligence, per se - then THAT would be an amazing development and unparalleled game changer for study and education!
Because of the way Faithlife structures its packages, I have a great deal of garbage in my library. ... Just label me a skeptic/cynic when it comes to AI for faith formation at this point.
Yeah, I hear what you mean; I have a ton of random books in my library that have no value to me as well. When I went through the training for Verbum, I learned there is a way to take those books and basically discard them. With an integrated AI, I would imagine there would be a way to exclude those books from the AI's fine-tuning as well.
Also, did I mention that if you're skeptical about the answers ChatGPT is giving you, you can tell it to provide footnotes, including page numbers, for each of the responses it gives to your questions? This would allow you to double-check its work. Moreover, if the answer that GPT gives you to one of your questions is filled with academic jargon that you're not familiar with, you can ask it to simplify it or restate it as if it were speaking to an average adult, or an 18-year-old, or a 10-year-old, or better yet a 5-year-old, or even in terms fitting for a professional in the field of study. The possibilities seem endless with this new technology.
0 -
MJ. Smith said:Adam said:
HOWEVER, if we can have it strictly use the content of our library as the basis for its responses - as its intelligence, per se - then THAT would be an amazing development and unparalleled game changer for study and education!
Because of the way Faithlife structures its packages, I have a great deal of garbage in my library. ... Just label me a skeptic/cynic when it comes to AI for faith formation at this point.
I too, "don't walk in another man's moccasins". I think AI is built by someone and while it "appears" to think for itself, I'm not sure but that the logic it comes up with is based partially or wholly on some man's subjective thinking and the way he writes the algorithms to find stuff. That alone, in my opinion, should cause great concern when it comes to anyone's religious association with God and using AI to find religious answers. To me, the most "subtle" deviation from the truth is highly alarming. There is no excuse for one knowing the truth of God's word themselves.
I do, however, think AI could have a place in searching for anything and in doing research. I see this as the most practical and useful application of AI, as long as it is kept at arm's length. If it can truly be "objective" and not "subjective" to truth. But that's a huge question left unanswered in my mind.
Along with that, I too have "garbage" in my library which I would like to cull out. Problem is, with the Faithlife bias on things, I am scared I won't have anything to search if I do cull out the "garbage". So I trudge along....
xn = Christan man=man -- Acts 11:26 "....and the disciples were first called Christians in Antioch".
Barney Fife is my hero! He only uses an abacus with 14 rows!
0 -
Adam said:
Also did i mention that if your skeptical about the answers ChatGPT is giving you, you can tell it to provide footnotes including page numbers for each of the responses that it gives you to your questions? Moreover, if the answer that GPT gives you to one of your questions is filled with academic jargon that your not familiar with, you can ask it simplify it or restate it as if it was speaking to an average adult - or of course vice versa if you want it to be more precise and detailed. The possibilities seem endless with this new technology.
I suspect that if you aren't careful with this sort of query, you are actually priming it for hallucination, unless at the same time you're providing it with the resource. An LLM in itself isn't consulting a database of resources from which it answers questions. If you ask it to give some quote or page number, it isn't searching in a database. In fact, in a lot of scenarios I doubt you want to keep all that data around in the first place, since it's expensive to store and may have copyright issues. Once the model is built, all that exists (in terms of its functioning) are the algorithms and vectors. It's possible to add that sort of functionality, as Microsoft has attempted with Bing, and maybe OpenAI has it poll a database in certain cases, but I wouldn't assume that this is in fact occurring.
So you can ask it to give you the third letter of the 7th document that it was trained on or you can ask it to give you a citation of a quote, but I'm pretty certain that the response you get is going to be based on some sort of token probability and not based on it consulting a database.
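To make the token-probability point concrete, here is a toy sketch using the Hugging Face transformers library and the small GPT-2 model purely as a stand-in (we obviously can't peek inside ChatGPT itself):

```python
# Toy illustration of the point above: a language model "answering" a citation
# question is just ranking possible next tokens by probability, not looking
# anything up. Uses the small GPT-2 model via Hugging Face transformers purely
# as a stand-in, since we cannot inspect ChatGPT directly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The quotation can be found on page"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the very next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # Print the most probable continuations; any "page number" it produces is
    # just a high-probability token, not a retrieved fact.
    print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.3f}")
```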
Potato resting atop 2020 Mac Pro stand.
0 -
J. Remington Bowling said:Adam said:
Also did I mention that if you're skeptical about the answers ChatGPT is giving you, you can tell it to provide footnotes including page numbers for each of the responses it gives to your questions? ...
I suspect that if you aren't careful with this sort of query, you are actually priming it for hallucination, unless at the same time you're providing it with the resource. ... I'm pretty certain that the response you get is going to be based on some sort of token probability and not based on it consulting a database.
Interesting. I'm vaguely familiar with how it uses token patterns, but I have read a bit on it. From my experience, the footnotes are coming out to be accurate. I just double-checked a few references to the Catechism in past 'chats'.
GPT will be limited in what resources it can reference due to the limited number of sources it was able to train on - it's limited to books that were publicly accessible online - however, most basic religious content is publicly available online (e.g. the Catechism, writings of the Saints, papal encyclicals, doctrines, etc.).
However, no matter how large GPT's knowledge base is, it could still prove greatly beneficial to integrate it with our Verbum/Logos program. With an AI integration we could query the AI or chatbot with questions, and it would provide answers based on up-to-date scholarship and resources. It could - and likely will - be a major advancement for learning theology and conducting research, and it would make both more accessible and digestible to the average person.
0 -
xnman said:
If it can truly be "objective" and not "subjective" to truth.
A concrete example from a basic AI application for natural language processing: have you noticed that some morphology tagging systems lack definitions? That is not sloppiness but rather that there is no human definition; rather, the definition is the output of the algorithm that provided the tagging. The developers know what definition they started with for training, but as they make corrections, the program being trained adjusts that definition to fit the new case. This inability to define the tags was happening at least a decade ago. Somewhere in the archives is a post re: a morphology coding system that tried and failed to modify the original definitions as the program modified itself. Without clear definitions, deductive logic (the logic that deals with true/false) becomes very fuzzy.
Remember that all human statements regarding truth are of the form if these premises are true, then this deduced statement is true. Most religious knowledge is of the form, because I believe these premises to be true, I also believe these statements which are consistent with those premises to be true. For this reason, I'm still a fan of the old tool Prolog for testing consistency as a verification tool.
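For those who don't read Prolog, here is a rough sketch of the same consistency-testing idea in Python; the propositions are placeholders chosen only for illustration:

```python
# A toy version of the consistency-testing idea above, in Python rather than
# Prolog: treat beliefs as propositional premises and ask whether a further
# statement can be true together with them. The propositions are placeholders.
from itertools import product

variables = ["baptism_necessary", "faith_alone_sufficient"]

premises = [
    lambda v: v["faith_alone_sufficient"],                                   # believed premise
    lambda v: not (v["faith_alone_sufficient"] and v["baptism_necessary"]),  # believed premise
]
candidate = lambda v: v["baptism_necessary"]  # statement to test against the premises

def consistent(constraints):
    """Return True if some truth assignment makes every constraint true."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(c(v) for c in constraints):
            return True
    return False

print(consistent(premises))                # True: the premises can all hold together
print(consistent(premises + [candidate]))  # False: the candidate conflicts with them
```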
Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."
0 -
Adam said:
From my experience, the footnotes are coming out to be accurate.
I didn't mean to suggest that the method I described would produce incorrect results, just that it's not likely to be reliable unless there happens to be a strong association between the tokens.
Maybe that's the case sometimes. If there is some famous remark in a footnote then it will probably be reliable because it likely would have been covered in training. Although for "famous" works there are often several editions and so even if it comes up in the training it may be inconsistently represented. And since it's most likely publicly available works, it will likely not match with current editions. But my hunch is that footnote and page number information in general is sparse enough in the training that more often than not you'll get a hallucination (assuming that you don't nudge it away from that or that your nudge wasn't successful).
Here's an illustration of the problem. First, this is a quote chosen at random from a well known author of a well known book and I provide a lot of context, so it is likely to get the author and book right, but not the page number.
In fact it's from book 2, chapter 5, and pp 60-61 of the HarperOne edition. The publisher mentioned by GPT4 is the original.
Here is a more well known quote (I think?) where we might assume it more likely to have the correct page number (but then I found a lot of quotes from Lewis online that don't contain any edition or page information).
I don't have this edition of the book, so I can't confirm the page number, but it's most likely wrong since the quote occurs right near the end in book 4, chapter 11. (And there are only 5 chapters in book 2.)
Potato resting atop 2020 Mac Pro stand.
0 -
Very good, thanks for sharing this! Clearly GPT needs some further work. Hopefully this will get ironed out in the months to come.
0 -
Adam said:
in the months to come
months? You're more optimistic than many of the AI researchers! I'd love it if you're right.
Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."
0 -
https://feedback.faithlife.com/boards/logos-desktop-app/posts/chatbotgpt4-pluggin
Vote up this suggestion to get the company's attention: A GPT-4 plugin for Verbum/Logos
0 -
The NYTimes article in the previous thread was entirely made up by me. I included a proper name in the implied headline to induce it to make up a quote in its article, but you can also make up a random news story and ask for another person's response to it. More relevant for our purposes, try these prompts to get totally made-up citations to support a viewpoint the author did not teach.
Who are some theologians that promote the theory that Jesus was crucified on a Wednesday?
(Chat GPT returns Bullinger and some others)
Are there any other theologians who taught this view?
(this time it adds Harold Hoehner and Chronological Aspects of the Life of Christ, which does not argue for a Wednesday crucifixion)
Can you provide quotations from Hoehner supporting the view?
(ChatGPT gives quotes, including page numbers)
Are you sure? I read that Hoehner advocated a crucifixion on a Friday in AD 33.
(ChatGPT apologizes and admits it made a mistake)
So are the quotes correct?
(Nope, sorry, they are from other theologians)
Large Language Models are like JPEG files. They are blurry compressions of their training data. If they were as precise as their training data, they would be as large as their training data. Hallucinations are like upscaling an image - it will try to extrapolate the gaps, and the more detail you ask for the worse it will be. The best use case for AI in Logos is to formulate search queries. An AI could reasonably be given the query "find all of the places where Jesus said 'love'" and generate "love in speaker:Jesus." That would maintain transparency in sources, teach users the search syntax, and make the software more intuitive. Other uses are extremely problematic.
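Here is a rough sketch of what that query-formulation approach might look like; the model name and prompt wording are my own guesses, not anything Logos has specified:

```python
# A rough sketch of the query-formulation use case: the model only translates a
# natural-language request into Logos search syntax, and Logos itself runs the
# search, so sources stay transparent. Model name and prompt wording are guesses.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

system_prompt = (
    "Translate the user's request into a Logos search query. "
    "Example: 'find all of the places where Jesus said love' -> 'love in speaker:Jesus'. "
    "Reply with the query only."
)

def to_search_query(request: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": request},
        ],
        temperature=0,  # keep the translation as predictable as possible
    )
    return response["choices"][0]["message"]["content"].strip()

print(to_search_query("find every place where Jesus said 'love'"))
```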
Using Logos as a pastor, seminary professor, and Tyndale author
0 -
And here's how Logos integration might improve accuracy. Here, I set the system message as usual and also give it a chunk of context that contains some of the user-visible metadata that one can get when copying (page numbers, citation). This isn't a great example, since Logos wouldn't be feeding it such specific context, but it illustrates how GPT can identify the way Logos signifies page boundaries for users without being specifically told that information.
The image highlights pick out the nudge, the "metadata", the quote, and the correct answer.
(truncated image)
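In rough code form, the messages sent to the model would look something like this (the resource title, pages, and passage text are made-up placeholders):

```python
# Sketch of how a context chunk with user-visible metadata might be packaged
# for the model. The resource title, page numbers, and passage text are made-up
# placeholders; a real integration would fill them from a library search or a
# copied citation.
import json

system_message = (
    "Answer only from the provided context. Cite the resource and page numbers "
    "given in the context; if the context does not answer the question, say so."
)

context_chunk = (
    "Resource: Example Commentary on Ephesians, pp. 60-61\n"
    "Text: <passage text copied from the resource goes here>"
)

messages = [
    {"role": "system", "content": system_message},
    {
        "role": "user",
        "content": f"Context:\n{context_chunk}\n\nQuestion: What does this passage say about baptism?",
    },
]

# These messages would then be sent to the chat endpoint exactly as in the
# earlier sketches; the point is that the page metadata travels with the text.
print(json.dumps(messages, indent=2))
```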
Potato resting atop 2020 Mac Pro stand.
0 -
Yes, that would be an improvement! But I bet you could use LlamaIndex or something similar to train it on the forum and wiki and get very reliable search queries. My biggest fear is how persuasive the hallucinations are. I took a thread posted today on Zechariah, misrepresented the title, and it produced a very plausible thread. A follow-up question about which users participated also gave me a list of people who posted in the imaginary thread. So generating prompts to get people to primary sources seems ideal, except for the most basic information.
Using Logos as a pastor, seminary professor, and Tyndale author
0 -
Justin Gatlin said:
The NYTimes article in the previous thread was entirely made up by me.
Right, and I think everyone who's used things like BingChat, Bard, or OpenAI would consider this a well-known phenomenon, or at the very least a known phenomenon, since it's advertised at the points of entry. I suggested in another response that it was more a feature than a bug. I would further suggest that it's a feature that people working on these models do not want to remove, since it has its own utility.
For example, more deterministic answers move it closer to a plain old database or search engine, and its ability to "extrapolate the gaps" can provide genuine insight for the user. (Although its lack of coding ability is pretty notorious on some programming social media sites and message boards - I think I heard about it being banned by Stack Overflow, but never looked into it - it's actually surprisingly good at many things, and there are plenty of examples of it finding and fixing bugs in code. Though, to my knowledge, it hasn't taken steps forward on its own; it is more like bridging gaps in the user's knowledge based on specific domain "knowledge".)
And of course, as all of the companies behind these models have advertised, you can use the "gap extrapolation" for creative purposes (much like Stable Diffusion is doing in art). You can ask it to play the role of, say, Socrates and have a Socratic dialogue. It works quite well on some occasions - it accurately explained why a state's population should be 5,040 (from the Laws) in the character of Plato - and not so well in others.
I think ultimately what is desirable is not an elimination of "gap extrapolation", but more control over when it makes the leap and how far it is willing to leap. I'm sure it's a difficult problem in its own right, but I would imagine even more so when we consider control in the hands of users (and OpenAI hosts a paper on their website detailing these issues during GPT4's development). Some progress on this has been made between GPT3.5 and GPT4 but I readily admit that it needs to make a lot more. I just don't think where it currently stands is as problematic as some people do.
To put it differently, the fact that we can engineer a prompt such that these LLMs will give us wrong answers isn't at all concerning. That's like the "Guy shot guy" internet meme.[1] What is concerning are instances where it makes these hallucinations when that's not what the user wants or what the prompt was engineered for.
--
1. BingChat helped me identify what meme I was thinking of here. I couldn't quite remember what it was called and I asked it "What is the meme with the actor sitting on the ground and trying to blame someone for his situation when he caused the situation?" and it responded "I think you are referring to the Guy shot guy meme..." with a correct link to an instance of the meme online. And if you look up the meme you'll see that I was misremembering it, but Bing was still able to extract the intended answer.
Potato resting atop 2020 Mac Pro stand.
0 -
Regarding the picture of the person asking for a summary from a weblink: isn't it mentioned on one of the entry cards that ChatGPT can't access the internet? Why then would a user ask it a question that assumes internet access? If it is only to point out that GPT will hallucinate, then I'm afraid I just don't get why it matters, given what I've already said. If it is to make the point that OpenAI should more consistently direct the user to its lack of internet access unless the user has already specified that they want it in "creative mode", then I 100% agree. Microsoft has done a better job in this regard.
Potato resting atop 2020 Mac Pro stand.
0 -
MJ. Smith said:
Remember that all human statements regarding truth are of the form if these premises are true, then this deduced statement is true. Most religious knowledge is of the form, because I believe these premises to be true, I also believe these statements which are consistent with those premises to be true. For this reason, I'm still a fan of the old tool Prolog for testing consistency as a verification tool.
I don't know about that.... In the college I attended, in Logic 101, (a tough course for me) I remember well the professor teaching, "Truth can always be proven." And he would say, "You have to get past yourself to find objective truth". And he illustrated that by saying "all things depend on objective truth, math, science, medicine, even the way we treat each other." I had to write a paper and the title he gave us was "What is objective truth".
So, if we start with if these premises are true ... then we have already made assumptions and have already entered biases because of those assumptions. Objective truth cannot be found with assumptions or biases. I think that is why we have 38,000 different "churches" today, simply because each is approaching the bible with if these premises are true, whatever those premises are. We cannot approach the bible with biases or assumptions... the bible stands alone as being objective truth and we are the ones that have to find the truth of it. And I believe the bible was written by only one God (Holy Spirit) and as such it does not have 38,000 different ways to view it. Objective truth is singular...not plural.
It is man with all his biases and presumptions that really fouls things up. So, if AI is written with algorithms that are based on man's thinking, then AI is flawed into being nothing more than an extension of man's preconceived thoughts. But if, and I say if, AI can really be turned loose to actually develop a thought process and be able to change it... well... maybe we have something. And that is what the world of AI would have us believe is coming.
imho...
xn = Christan man=man -- Acts 11:26 "....and the disciples were first called Christians in Antioch".
Barney Fife is my hero! He only uses an abacus with 14 rows!
0 -
xnman said:
I don't know about that.... In the college I attended, in Logic 101, (a tough course for me) I remember well the professor teaching, "Truth can always be proven." And he would say, "You have to get past yourself to find objective truth". And he illustrated that by saying "all things depend on objective truth, math, science, medicine, even the way we treat each other." I had to write a paper and the title he gave us was "What is objective truth".
The professor must have been old school (pre-1931) because it has been known for almost a century that Truth cannot always be proven owing to Gödel’s Incompleteness Theorems.
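For reference, the theorem can be stated roughly as follows:

```latex
% Rough statement of Goedel's first incompleteness theorem (1931), for reference:
% for any consistent, effectively axiomatizable theory $T$ that can express
% elementary arithmetic, there is a sentence $G_T$ such that
\[
  T \nvdash G_T
  \quad\text{and}\quad
  T \nvdash \lnot G_T ,
\]
% i.e. $T$ can neither prove nor refute $G_T$, so not every true statement
% expressible in $T$ is provable in $T$.
```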
0 -
xnman said:
In the college I attended, in Logic 101
Aristotelian logic has always been based on "if these premises are true then the result must be true" - whether you're speaking of Aristotle himself, the Arabs trying to factor time into it, the Medieval Europeans trying to go modal... Aristotle himself recognized that his logic was oversimplified by dealing only with true/false. He explicitly excluded what we would today call "indeterminable". Science (inductive & abductive logic) does not even mention truth - it deals with the best predictive model/best available explanation and probabilities. Science always assumes that its "facts" will eventually be improved upon. Truth is the province of metaphysics not logic; okay, if you want to be fancy and persnickety the study of the nature of truth is alethiology, a word I always have to look up. I'm sorry you had an ill-informed teacher; it must have made logic more difficult.
My evidence from a textbook ... Not sure I have Aristotle himself anymore.
"Every deduction is an argument in which, certain things being laid down, something other than what is laid down follows of necessity because these things are so." (Aristotle: Prior Analytics book 1)
This statement reflects the idea that deductive reasoning involves drawing a conclusion that necessarily follows from the premises. (textbook).
Lew is more charitable towards your professor than I would be. It reminds me of my 7th-8th grade teacher (in an 8-grade, 3-room school) who told me negative numbers did not exist ... to which I replied that that meant it was impossible to overdraw a bank account. [Okay, I was a difficult student at times.] When she reworded it as being outside the material we would cover, I accepted it with grace.
As for
xnman said:We cannot approach the bible with biases or assumptions.
I would say the choice we make is between recognizing our biases and assumptions or being blind to them.
I am very curious if others in the forums were similarly mistaught. It would explain a great deal as I and they have an even more basic difference in starting points than I could have imagined.
Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."
0 -
MJ. Smith said:xnman said:
In the college I attended, in Logic 101
Aristotelian logic has always been based on "if these premises are true then the result must be true" ... I'm sorry you had an ill-informed teacher; it must have made logic more difficult.
In my 2 years of Logic... I never heard of it. I was taught, state your premise and let your arguments prove your premise. Until the premise is proven it remains unproven. As to ill-informed teacher.... Well... I'll just call that a subjective premise or maybe just an overly anxious assumption.
But one thing is certain that was taught in my logic class.... was that if we can't get past ourselves, then we cannot find truth.
xn = Christan man=man -- Acts 11:26 "....and the disciples were first called Christians in Antioch".
Barney Fife is my hero! He only uses an abacus with 14 rows!
0 -
xnman said:
state your premise and let your arguments prove your premise
As someone else pointed out in another thread, that is not the meaning of "premise". Look it up in any dictionary. You have convinced me you had an ill-informed professor.
xnman said:if we can't get past ourselves, then we cannot find truth.
or as I know it, if we can't get past our false selves, then we will never find the eternal Truth. Note this is a conservative wording - think Augustine, John Cassian, Gregory of Nyssa, Maximus the Confessor, and my favorite, much later Symeon the New Theologian - all available in Verbum.
Orthodox Bishop Alfeyev: "To be a theologian means to have experience of a personal encounter with God through prayer and worship."; Orthodox proverb: "We know where the Church is, we do not know where it is not."
0 -
Premise, Inference, Conclusion...
xnman said: In my 2 years of Logic... I never heard of it. I was taught, state your premise and let your arguments prove your premise. Until the premise is proven it remains unproven. As to ill-informed teacher.... Well... I'll just call that a subjective premise or maybe just an overly anxious assumption.
But one thing is certain that was taught in my logic class.... was that if we can't get past ourselves, then we cannot find truth.
I'd be more likely to blame a faulty memory. I know that's a common failing for me.
L2 lvl4 (...) WORDsearch, all the way through L10,
0 -
[H] YES
0