5 Comments

Insightful post, Matt! I often read your writing. Good stuff. Keep going!


Do LLMs exhibit some of the same cognitive fallacies / biases / failure modes that occur in humans during S1 thinking?

I’m not sure how far the parallels to human thinking will go - we’re already seeing some failure modes come up much more frequently in these models than they do in humans (hallucinations, etc.). It’ll be interesting if a whole new class of biases arises from purely stochastic models.


"System 2 thinking is still reserved for humans. We might use LLMs to get a first draft, but we don't have the tools to do analytical thinking with LLMs (yet). "

Which do you think is the better option: using an LLM to get a first draft, or using it to find counterarguments, limitations, and other options/solutions after your first draft is ready? In my experience, you want to avoid being biased by what the LLM offers and then no longer thinking of other solutions/options yourself.


I like this distinction, thanks. Another thought I had is that problems that are easy for humans are easy for LLMs, and problems that are hard for humans are hard for LLMs. That contrasts with other ML models, like chess engines, and, more generally, with computer programs that can do things humans can't.


Not sure I follow. Breaking an article down and explaining things from first principles is one of the most important examples of System 2 thinking, and LLMs do that with ease, among the numerous other System 2 tasks they can handle.
