{"id":14289,"date":"2023-04-25T14:33:08","date_gmt":"2023-04-25T18:33:08","guid":{"rendered":"https:\/\/jasonapollovoss.com\/web\/?p=14289"},"modified":"2025-09-05T15:32:34","modified_gmt":"2025-09-05T21:32:34","slug":"top-10-reasons-chatgpt-is-not-a-threat-to-d-a-t-a","status":"publish","type":"post","link":"https:\/\/jasonapollovoss.com\/web\/2023\/04\/25\/top-10-reasons-chatgpt-is-not-a-threat-to-d-a-t-a\/","title":{"rendered":"Top 10 Reasons ChatGPT is Not a Threat to D.A.T.A."},"content":{"rendered":"<p><em><span style=\"font-family: futural;\">By Jason A. Voss, CFA<\/span><\/em><\/p>\n<p><span style=\"font-family: futural;\">Deception And Truth Analysis (D.A.T.A.), Inc. has had pre-public access to OpenAI and ChatGPT for over a year and has conducted numerous tests and experiments with it. 
Furthermore, we continue to monitor it, along with other Large Language Models (LLMs) such as Alphabet\u2019s Bard, Meta\u2019s LLaMA, DeepMind\u2019s Chinchilla, and so on. We monitor LLMs closely to see whether people or organizations could rely upon them to replicate or replace our own deception and truth detection capabilities. In other words, is ChatGPT a threat to D.A.T.A.? We do not think so, and here are our Top 10 Reasons\u2026<\/span><\/p>\n<div>\n<h3 class=\"x-el x-el-h4 c2-6f c2-6g c2-v c2-w c2-40 c2-2c c2-2a c2-29 c2-2b c2-3 c2-z c2-42 c2-10 c2-43 c2-44 c2-45 c2-46\"><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">Top 10 Reasons ChatGPT is Not a Threat to D.A.T.A.<\/strong><\/span><\/h3>\n<\/div>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">10. D.A.T.A. is More Than Just Its IP<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\">We believe that D.A.T.A.\u2019s Intellectual Property is innovative and that it has overcome technical challenges faced by others who have attempted to develop similar technologies. That said, D.A.T.A. is much more than just its IP. It is also about a leadership team with over one hundred years of combined experience in the investment business. It is about a culture centered on Client needs and striving to provide white-glove service, and about the deep relationships this approach has fostered. Additionally, we believe we have the globe\u2019s only dataset, and commensurate set of insights, into the norms of deception and truth in finance and investing. 
That dataset grows every single day and would be difficult to replicate.<\/span><\/p>\n<p><span style=\"font-family: futural;\"><\/span><\/p>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">9. People Cannot Detect ChatGPT or Deception. Sounds Good to D.A.T.A.<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\">Researchers have demonstrated that\u00a0<a class=\"x-el x-el-a c2-2w c2-2x c2-67 c2-v c2-w c2-x c2-j c2-68 c2-3 c2-30 c2-31 c2-11 c2-32\" href=\"https:\/\/hai.stanford.edu\/news\/was-written-human-or-ai-tsu\" rel=\"\">ChatGPT can generate texts that people struggle to identify as computer-generated<\/a>. Specifically, people can detect AI-generated text with only about 50-52% accuracy. Incidentally, this accuracy range is remarkably similar to the accuracy of people attempting to detect deception of any kind, whether by audio, visual, or written means.\u00a0<a class=\"x-el x-el-a c2-2w c2-2x c2-67 c2-v c2-w c2-x c2-j c2-68 c2-3 c2-30 c2-31 c2-11 c2-32\" href=\"https:\/\/deceptionandtruthanalysis.com\/insights\/f\/key-scientific-paper-redux-%E2%80%93-accuracy-of-deception-judgments?blogcategory=Key+Scientific+Paper+Redux\" rel=\"\">For audiovisual cues, people\u2019s detection accuracy is just 54%<\/a>\u00a0and\u00a0<a class=\"x-el x-el-a c2-2w c2-2x c2-67 c2-v c2-w c2-x c2-j c2-68 c2-3 c2-30 c2-31 c2-11 c2-32\" href=\"https:\/\/deceptionandtruthanalysis.com\/insights\/f\/key-scientific-paper-redux-%E2%80%93-accuracy-of-deception-judgments?blogcategory=Key+Scientific+Paper+Redux\" rel=\"\">people can detect deception in written texts just 50% of the time<\/a>. 
So, if anything, ChatGPT\u2019s ability to generate bogus text, and people\u2019s inability to detect it, is not a threat to D.A.T.A. but an opportunity.<\/span><\/p>\n<p><span style=\"font-family: futural;\"><\/span><\/p>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">8. ChatGPT Does Not Use Its Users\u2019 Input to Improve Itself<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\">OpenAI\u2019s Terms of Use \u00a73(c) states that its users\u2019 input is not used \u201cto develop or improve our Services.\u201d In other words, there is no way for ChatGPT to utilize user input to create a deception detection algorithm. That is, unless one of its users inputs a deception detection algorithm themselves. In that instance, D.A.T.A. would be competing not with ChatGPT but with another company\u2019s deception detection algorithm. This is no different from the competition we face every single day.<\/span><\/p>\n<p><span style=\"font-family: futural;\"><\/span><\/p>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">7. ChatGPT\u2019s Own Terms of Service Seek to Fight Its Use to Deceive People<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\">OpenAI\u2019s own Terms of Use \u00a72(c)(v) forbids using ChatGPT to \u201crepresent that output from the Services was human-generated when it is not or otherwise violate our Usage Policies.\u201d This term specifically forbids users from using ChatGPT to spoof others. OpenAI and other LLM providers also actively monitor for improper uses of their technologies.<\/span><\/p>\n<p><span style=\"font-family: futural;\"><\/span><\/p>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">6. 
Infinity is an Inappropriate Mental Model for Understanding ChatGPT<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\">Most of us have had the experience of a new technology impressing us at a very deep level, whether that was our first MP3 player, video game console, or smartphone. For many people, their interactions with ChatGPT have done exactly this: impressed them at a deep level.<\/span><\/p>\n<p><span style=\"font-family: futural;\">Having studied behavioral economics and human psychology for more than two decades, D.A.T.A. can attest that a typical response in these moments is for people to try to comprehend new, unfamiliar ideas and information by relating them, via mental models or analogy, to familiar ideas and information they already understand. But what do people do when their fallback mental models and analogies fail to aid their comprehension?<\/span><\/p>\n<p><span style=\"font-family: futural;\">In these instances, it is tempting to liken things that inspire awe, like ChatGPT, to the Infinite. We see evidence of this in many of D.A.T.A.\u2019s conversations with investors and Clients: no matter how accurate, thoughtful, or sophisticated our response to their questions about ChatGPT, they respond with, \u201cYeah, that may be true of ChatGPT 3.5, but what about ChatGPT four iterations from now?\u201d This is the \u201cInfinite mental model\u201d at work. After all, each of us has had the experience of imagining the largest number we can think of and ending up with Infinity. But then comes the immediate follow-on thought of \u201cInfinity + 1,\u201d \u201cInfinity + 2,\u201d and of course, \u201cInfinity + Infinity,\u201d and so on.<\/span><\/p>\n<p><span style=\"font-family: futural;\">It is easy to be presented with new evidence, simply say \u201cInfinity + 1,\u201d and never be satisfied that anything can compete with that. But ChatGPT is not infinite, so Infinity is the wrong mental model for it. 
In fact, as we discuss below, one of the limitations of LLMs like ChatGPT is that they rely on two key things in order to work, both of which are finite: their training data and their computational power.<\/span><\/p>\n<p><span style=\"font-family: futural;\"><\/span><\/p>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">5. ChatGPT is Ignorant About Deception<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\">D.A.T.A. believes that ChatGPT is highly unlikely to be used to construct a world-class deception and truth detection algorithm. Why? Because it is ignorant of deception science and lacks the technical breakthroughs necessary to construct a commercial algorithm. We know this because\u00a0<a class=\"x-el x-el-a c2-2w c2-2x c2-67 c2-v c2-w c2-x c2-j c2-68 c2-3 c2-30 c2-31 c2-11 c2-32\" href=\"https:\/\/deceptionandtruthanalysis.com\/insights\/f\/chatgpt-and-deception-part-1\" rel=\"\">we asked ChatGPT what it knew about deception detection<\/a>\u00a0and its answers were far off, even including something that is a complete fiction, debunked many times by science.<\/span><\/p>\n<p><span style=\"font-family: futural;\">How could ChatGPT be so far off? ChatGPT relies on repackaging information contained on the Internet, and the innovative deception and truth technology housed within D.A.T.A. is not on the Internet.<\/span><\/p>\n<p><span style=\"font-family: futural;\"><\/span><\/p>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">4. ChatGPT Refuses to Deceive Us<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\"><a class=\"x-el x-el-a c2-2w c2-2x c2-67 c2-v c2-w c2-x c2-j c2-68 c2-3 c2-30 c2-31 c2-11 c2-32\" href=\"https:\/\/deceptionandtruthanalysis.com\/insights\/f\/chatgpt-and-deception-part-1\" rel=\"\">When we ask ChatGPT to deceive us, it refuses to do so<\/a>. When we try to trick it into deceiving us, it also refuses. 
Thus, we do not believe ChatGPT can be used to undermine our algorithm. Additionally, when we ask ChatGPT to author a deception and truth algorithm, it does not build one. Most importantly, when we ask ChatGPT to \u201cclean up\u201d deceptive texts so that they are less deceptive, it alters the scores only marginally and thus fails to spoof the D.A.T.A. algorithm.<\/span><\/p>\n<p><span style=\"font-family: futural;\"><\/span><\/p>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">3. LLM Limitations<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\"><a class=\"x-el x-el-a c2-2w c2-2x c2-67 c2-v c2-w c2-x c2-j c2-68 c2-3 c2-30 c2-31 c2-11 c2-32\" href=\"https:\/\/www.newyorker.com\/tech\/annals-of-technology\/chatgpt-is-a-blurry-jpeg-of-the-web\" rel=\"\">Large Language Models are similar to lossy compression, such as a Xerox photocopier<\/a>. To represent knowledge efficiently, they identify statistical regularities in text and store them in a specialized format. When large computational power is thrown at the task of knowledge retrieval, LLMs can identify extraordinarily nuanced statistical regularities. But just as a photocopy of a document that has already been photocopied many times is fuzzy, so too are the outputs of LLMs. Their primary shortcomings are:<\/span><\/p>\n<p><span style=\"font-family: futural;\">a.\u00a0<u class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-31 c2-64 c2-67\">Garbage In, Garbage Out<\/u>. LLMs can only answer questions based on the information available to them. In our own tests of ChatGPT\u2019s knowledge of deception detection, the model offered up as a technique something soundly and consistently debunked by science. 
While the correct information has been available on the web for over 20 years, ChatGPT seemed to favor the popularity of an answer over its accuracy.\u00a0<\/span><\/p>\n<p><span style=\"font-family: futural;\">b.\u00a0<u class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-31 c2-64 c2-67\">Facts Missing, Fabrication Okay<\/u>. LLMs seem so intent on answering questions that, like a Xerox machine asked to copy an image that is already blurry, they answer based on inferences that are pure fabrication. That is, LLMs extrapolate from the information already present, even when doing so results in fabrication. Horrifyingly, ChatGPT was\u00a0<a class=\"x-el x-el-a c2-2w c2-2x c2-67 c2-v c2-w c2-x c2-j c2-68 c2-3 c2-30 c2-31 c2-11 c2-32\" href=\"https:\/\/www.msn.com\/en-us\/news\/opinion\/chatgpt-invented-a-sexual-harassment-scandal-and-named-a-real-law-prof-as-the-accused\/ar-AA19vNsJ\" rel=\"\">caught manufacturing a Washington Post article<\/a>\u00a0in support of its claim that a professor was guilty of sexual harassment. In much the same way that it manufactured that Washington Post story to support its conclusions, ChatGPT leaned on a proven-false method of detecting deception: \u2018avoidance of eye contact.\u2019<\/span><\/p>\n<p><span style=\"font-family: futural;\"><\/span><\/p>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">2. LLM Text is Easy to Identify<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\">How easy, or how hard, is it to detect text generated by ChatGPT? Easy. Multiple researchers have been able to build\u00a0<a class=\"x-el x-el-a c2-2w c2-2x c2-67 c2-v c2-w c2-x c2-j c2-68 c2-3 c2-30 c2-31 c2-11 c2-32\" href=\"https:\/\/arxiv.org\/abs\/2301.11305\" rel=\"\">detectors with up to 95% accuracy in classifying texts as authored by LLMs<\/a> or other AI. In other words, this is not a difficult problem to solve. 
Additionally, as we saw above, makers of LLMs want it to be obvious when a document has been authored by AI.<\/span><\/p>\n<p><span style=\"font-family: futural;\"><\/span><\/p>\n<p><span style=\"font-family: futural;\"><strong class=\"x-el x-el-span c2-2w c2-2x c2-3 c2-63 c2-13 c2-3t c2-64\">1. Deception Detection is Not an Optimization Problem<\/strong><\/span><\/p>\n<p><span style=\"font-family: futural;\">LLMs, like all computational models, solve optimization problems. Yet identifying deception is not an optimization problem, because there is not just one type of person, or even consistent behavior from any one individual. Thus, you need a general solution that accounts for the variance in people, not a solution whose sole purpose is optimization.<\/span><\/p>\n<p><span style=\"font-family: futural;\">While LLMs are impressive, they rely on directions from their Users. In every single interaction with an LLM like ChatGPT, a User tells the model what needs to be optimized. Furthermore, many LLM researchers believe that better optimization will yield better results, as\u00a0<a class=\"x-el x-el-a c2-2w c2-2x c2-67 c2-v c2-w c2-x c2-j c2-68 c2-3 c2-30 c2-31 c2-11 c2-32\" href=\"https:\/\/direct.mit.edu\/dint\/pages\/special_issue_on_large_language_models\" rel=\"\">a recent call for new optimization ideas from MIT<\/a>\u00a0demonstrates.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Deception And Truth Analysis (D.A.T.A.), Inc. has had pre-public access to OpenAI and ChatGPT for over a year and has conducted numerous tests and experiments with it. Furthermore, we continue to monitor it, along with other Large Language Models (LLMs) such as Alphabet\u2019s Bard, Meta\u2019s LLaMA, DeepMind\u2019s Chinchilla, and so on. 
We monitor [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":14284,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[3,465],"tags":[458,462,461,459],"class_list":["post-14289","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-the-blog","category-d-a-t-a","tag-artificial-intelligence","tag-large-language-model","tag-llm","tag-machine-learning"],"_links":{"self":[{"href":"https:\/\/jasonapollovoss.com\/web\/wp-json\/wp\/v2\/posts\/14289","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jasonapollovoss.com\/web\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jasonapollovoss.com\/web\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jasonapollovoss.com\/web\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/jasonapollovoss.com\/web\/wp-json\/wp\/v2\/comments?post=14289"}],"version-history":[{"count":0,"href":"https:\/\/jasonapollovoss.com\/web\/wp-json\/wp\/v2\/posts\/14289\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/jasonapollovoss.com\/web\/wp-json\/wp\/v2\/media\/14284"}],"wp:attachment":[{"href":"https:\/\/jasonapollovoss.com\/web\/wp-json\/wp\/v2\/media?parent=14289"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jasonapollovoss.com\/web\/wp-
json\/wp\/v2\/categories?post=14289"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jasonapollovoss.com\/web\/wp-json\/wp\/v2\/tags?post=14289"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}