ChatGPT is reactive. It takes the input and looks for related replies. It's an excellent pattern matcher, which is why most people are so impressed. But ChatGPT is the world's greatest bullshitter. It's not actually thinking. Thinking requires uniting disparate ideas. Thinking is synthesis. Reciting is not thinking. Good writing is thinking - it forces the writer to reflect on what he knows and bring it to bear in new contexts.

ChatGPT inserts incorrect facts and gives conclusions that are obvious. It sounds like Malcolm Gladwell - impressive on the surface but, with repeated exposure or specific topic knowledge, tiresome.

For writing, ChatGPT is an engine of confirmation bias.

People are worried about ChatGPT being used to cheat. NYC public schools recently banned students from accessing ChatGPT. If teachers think that ChatGPT produces good writing, that speaks more to the quality of instruction than to the usefulness of ChatGPT.

There are uses for AI. AI has mastered chess and Go. The difference is that games are structured, with clear rules and win conditions.

The limitations of ChatGPT are the limitations of the training model. But how can you train for good writing? Or even for factuality?

This video of Richard Feynman demonstrates the difficulty of answering "why?":

The problem, you see, when you ask why something happens, how does a person answer why something happens? For example, Aunt Minnie is in the hospital. Why? Because she went out, slipped on the ice, and broke her hip. That satisfies people. It satisfies, but it wouldn't satisfy someone who came from another planet and who knew nothing about why when you break your hip do you go to the hospital. How do you get to the hospital when the hip is broken? Well, because her husband, seeing that her hip was broken, called the hospital up and sent somebody to get her. All that is understood by people. And when you explain a why, you have to be in some framework that you allow something to be true. Otherwise, you're perpetually asking why. - Richard Feynman

The language model must be biased in some way. And the connections the AI will make will likewise be biased.

I've heard that ChatGPT is helpful for programming. I'd love to hear how it works. Is it creative? Does it piece blocks of code together? I think coding has clearer win conditions than writing.

AI pattern matchers work for games because the rules and win conditions are clear. They can iterate within the rules until they find a pattern that works. The infinite variations look like creativity. Maybe this is actual creativity - brute force creativity, Monte Carlo creativity. Humans can't do this because of time constraints. Instead, humans have intuition: probability distributions built from experience. They can't simulate every iteration, but those distributions tell them what's worth trying.
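To make "Monte Carlo creativity" concrete, here is a toy sketch: picking a tic-tac-toe move purely by playing many random games from each option and keeping the one with the best win rate. This is an illustration of the brute-force idea, not how systems like AlphaZero actually work - those guide their rollouts with learned intuition rather than uniform randomness.

```python
import random

# The eight winning lines on a 3x3 board, indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    # Play uniformly random moves until someone wins or the board fills up.
    board = board[:]
    while True:
        w = winner(board)
        moves = [i for i, cell in enumerate(board) if cell is None]
        if w or not moves:
            return w  # "X", "O", or None for a draw
        board[random.choice(moves)] = player
        player = "O" if player == "X" else "X"

def best_move(board, player, rollouts=200):
    # Brute force: score each legal move by win rate over random playouts.
    moves = [i for i, cell in enumerate(board) if cell is None]
    def score(m):
        trial = board[:]
        trial[m] = player
        opponent = "O" if player == "X" else "X"
        wins = sum(random_playout(trial, opponent) == player
                   for _ in range(rollouts))
        return wins / rollouts
    return max(moves, key=score)

# X has two in a row on top; enough random playouts reveal the winning move.
board = ["X", "X", None, "O", "O", None, None, None, None]
print(best_move(board, "X"))
```

No rule about "complete the line" is ever written down; the winning move simply emerges from enough iterations. That only works because the win condition is checkable - exactly the property writing lacks.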

Brute force iterations work in structured domains. In ill-structured domains, there has to be more nuance.