March 13, 2023

The Danger of Partial Knowledge: My ChatGPT Encounter

Philosophy, Society

ignorance, knowledge, programming, society, technology


The old piece of wisdom might be right: it’s better to know nothing about something than to know just a little. Of course, referring to partial knowledge is a sort of misnomer. Philosophically speaking, virtually all knowledge is partial – cogito ergo sum and all that. But socially speaking, the gradations are more intriguing.

The reason? Because of our old friend, the Dunning-Kruger effect. When it comes to partial knowledge – knowing a little of something – there is a peculiar paradox at play: When we know a little, we think we know a lot; when we know more (the threshold is subjective), we know that we only know a little.

As I said above, though there are philosophical dimensions to this topic, the focus of this post will be on society. I have always been interested in ignorance and the illusion of knowledge (as long-term readers of the blog will have noticed), but recently I had an experience whose repercussions intrigued me: I asked the famous ChatGPT AI model questions on a topic I know well enough to know how little I know: the Gothic. The responses I got were very revealing in our context of partial knowledge.

(Image: woman reading.) The only way out of the darkness of ignorance is through personal, active effort, paired with humility: “I only know that I know nothing”

What Does an AI Conversational Model Know About the Gothic?

I began by asking ChatGPT some simple questions, such as “Is Dracula real?” You might recall this (apparently silly) question from my post on the five questions people ask the most about Dracula. The responses to these simple questions were, as expected, accurate. ChatGPT replied that the character of Dracula isn’t real, but that he was based on Vlad Tepes. The answer was acceptable, though I would’ve liked to see a reference to literary vampires before Bram Stoker’s, such as “The Vampyre” by John William Polidori, or Carmilla by Sheridan Le Fanu.

But where things began to break down, badly, was when the questions began getting more complex. The first thing I noticed was that ChatGPT, in an attempt to keep up appearances – both conversationally and functionally – began offering responses that were ambiguous and could be interpreted in a variety of ways. I recognized it from my “horoscope” generator! For example, ChatGPT would say something like “X isn’t true, but some symbolic interpretations might see it differently”.
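For those who haven’t seen the horoscope generator post: the trick, roughly speaking, is to glue together randomly picked fragments that are each vague enough to fit any situation. Here’s a minimal sketch of that idea in plain JavaScript – the fragments are invented for illustration, not taken from the actual generator:

const openers = [
  "X isn't strictly true",
  "The evidence is mixed",
  "In a literal sense, no"
];

const hedges = [
  "but some symbolic interpretations might see it differently",
  "though the question remains open to debate",
  "yet the opposite reading also has its merits"
];

// pick a random element from an array
const pick = arr => arr[Math.floor(Math.random() * arr.length)];

// any combination sounds plausible precisely because it commits to nothing
const vagueAnswer = () => `${pick(openers)}, ${pick(hedges)}.`;

console.log(vagueAnswer());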

Thanks, Captain Obvious

The breaking point was when I posed a deliberately difficult question: “In Dracula, is Quincey Morris a vampire?” As expected, ChatGPT said that no, Quincey Morris is not a vampire, but one of the characters trying to kill Count Dracula. I tried to make things easier for it, and asked “Has not Franco Moretti argued Quincey Morris is a vampire?” I expected ChatGPT to confess its error then.

I was surprised when it insisted: No, it replied, Franco Moretti has written about Dracula, but has not argued Quincey Morris is a vampire. At that point, I decided to put the poor AI model out of its misery, so I asked “Has not Franco Moretti, in his work ‘The Dialectic of Fear’, argued that ‘The American, Morris, must die, because Morris is a vampire.’?”

At that point, ChatGPT finally conceded. It apologized for its previous error and admitted that yes, I was right.

Well, thanks, Captain Obvious. But the problem is this: what kind of picture of Dracula and the Gothic (in this example) would, say, a student researching an essay come away with?

Don’t Blame the Tool

AI-based search is a massive, massive leap when it comes to finding information. Based on my experience so far, the best applications of ChatGPT (for me) are programming help, autocompletion, and troubleshooting. I mean, I asked ChatGPT to “Generate a list of random street names and city names, and structure them as an array of objects”, and I got this, three seconds later:

[  {    "street": "Maple Street",    "city": "Eldoria"  },  {    "street": "Broadway",    "city": "Amberland"  },  {    "street": "Main Street",    "city": "Calindor"  },  {    "street": "Oak Street",    "city": "Zorathia"  },  {    "street": "Elm Street",    "city": "Galdorin"  },  {    "street": "Cedar Avenue",    "city": "Silverhaven"  },  {    "street": "Park Place",    "city": "Oceandale"  },  {    "street": "High Street",    "city": "Flintwood"  },  {    "street": "Pine Avenue",    "city": "Hazelwood"  },  {    "street": "River Road",    "city": "Ravenshire"  },  {    "street": "Birch Lane",    "city": "Emberland"  },  {    "street": "Cherry Street",    "city": "Mistwood"  },  {    "street": "Willow Lane",    "city": "Everdawn"  },  {    "street": "Church Street",    "city": "Mooncrest"  },  {    "street": "Chestnut Street",    "city": "Crystalwyn"  }]

Or how about asking ChatGPT “Using plain JavaScript, generate code that removes an object from an array of objects, based on object value”. Another three seconds later, I got the following code, plus detailed explanations:

const arr = [
  { id: 1, name: 'John' },
  { id: 2, name: 'Mary' },
  { id: 3, name: 'Bob' }
];

// remove object with id = 2
const idToRemove = 2;
const filteredArr = arr.filter(obj => obj.id !== idToRemove);

console.log(filteredArr); // [{ id: 1, name: 'John' }, { id: 3, name: 'Bob' }]

This is seriously impressive.
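A side note on a design choice in that snippet (my observation, not part of ChatGPT’s answer): filter() returns a new array and leaves the original untouched. If you wanted to remove the object in place instead, here’s a sketch continuing from the snippet above:

// mutating alternative: find the first matching object and splice it out
const index = arr.findIndex(obj => obj.id === idToRemove);
if (index !== -1) {
  arr.splice(index, 1); // modifies arr directly
}

console.log(arr); // [{ id: 1, name: 'John' }, { id: 3, name: 'Bob' }]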

Nonetheless, like any tool, ChatGPT is only as good as the intent and scope of its use. I shouldn’t blame a screwdriver if it does a poor job of mixing flour. In the same way, it’s probably not fair to criticize an AI model for not understanding what “Quincey Morris is a vampire” really means.

AIs like ChatGPT will be invaluable in crafting useful responses to questions like “Can I fit a 52″ TV in the trunk of a Ford Fiesta?” or “In PHP, how can you append an array to a text file?”, but asking it for help with an essay on, say, the differences between Todorov and Tolkien in Gothic ambiguity is probably a bad idea.

The thing is, most people don’t realize that.

Partial Knowledge Is Dangerous Because You Don’t Know It’s Partial

Judging by the way most people passively accept industrialized fiction and, as a result, see no problem with an AI “creating art”, I fear most people don’t realize that getting a knowledgeable-looking response isn’t the same as getting a knowledgeable response.

In the words of Aristotle, “knowledge of the fact differs from knowledge of the reason for the fact”.

Technology like ChatGPT, if used improperly, will facilitate a world where partial knowledge constitutes knowledge (if not – gasp! – expertise). I mean, for all the criticism it gets, Wikipedia is a pretty great source of information. I would never cite Wikipedia in an academic article, but I would certainly start from Wikipedia and find my way through its cited sources, which I could then assess. If an article based its claims on rightwingconspiracynut-dot-com, I would certainly not view it the same way I’d view one relying on, say, the Stanford Encyclopedia of Philosophy.


Partial Knowledge: How Do We Know We Don’t Know?

Socrates pushed the matter of partial knowledge to the extreme with this statement:

ἃ μὴ οἶδα οὐδὲ οἴομαι εἰδέναι

(what I do not know I do not think I know either)

Plato, Apology

The problem, though, is complex: it’s easy to claim ignorance on a philosophical level. Indeed, extreme skepticism follows this very line of thought. However, things are different on a social level.

To put it simply, in our everyday lives we are forced to “concede” that we know, rather than that we don’t. Otherwise we wouldn’t be able to go to school, manage employees, or even drive a car.

This all too often leads us to equate the ability to do something with knowledge of it. For instance, I can certainly operate a faucet, but I have no idea how to fix a broken one, because I don’t know how it works. Not everyone makes that admission, though. And whereas with leaky faucets the repercussions are limited to minor damage and a Donald Duck-style nervous breakdown, things are far more insidious with abstract concepts.

4 Comments

  1. There’s been so much in the media about ChatGPT lately that I was both intrigued, and a little hesitant, to read your article. I’m so glad I did! Knowing the limitations of any tool is key to using it properly. Or not using it at all. I’ll stick to the latter. 😀

    1. Chris

      As I said in the post, there are things it’s really good at, and things where it’s outright dangerous. Personally, I’m seriously impressed by its ability to understand what I want to do programmatically and deliver the necessary code – indeed, I’ll have a separate post on that at some point. But I wouldn’t go anywhere near it for anything having to do with research or creative work.
      Thanks for your comment!

  2. Heraclitóris

    The fact that most of us do not systematize implicit knowledge, thus rendering it explicit, excludes other people from social games. I was, and partially still am, a victim of that. Being excluded from a sick society could be considered healthy, and it is to some degree, but it also exacts a high price in other dimensions of social interaction. Thus, partial knowledge may also hurt those on the receiving end of the communication – those receiving partial knowledge and yet expected to act upon it.

    1. Chris

      An intriguing point. Quite by definition, mediocrity has the numbers on its side. The world is finely tuned to accommodate the lowest common denominator – for reasons having to do with, you guessed it, the market. It’s far easier to sell stuff to many people at once when they’re all conditioned to be a certain way and expect the same things.


