The distinction between fast and slow thinking, or System 1 and System 2 thinking, popularized by Daniel Kahneman's book *Thinking, Fast and Slow*, might be a helpful lens for viewing LLMs.
Do LLMs exhibit some of the same cognitive fallacies / biases / failure modes that occur in humans during S1 thinking?
I'm not sure how far the parallels to human thinking will go - we're already seeing some failure modes come up much more frequently in these models than they do in humans (hallucinations, etc.). It'll be interesting to see whether a whole new class of biases arises from purely stochastic models.
"System 2 thinking is still reserved for humans. We might use LLMs to get a first draft, but we don't have the tools to do analytical thinking with LLMs (yet). "
Which do you think is the better option: using an LLM to get a first draft, or using an LLM to find counterarguments, limitations, and alternative solutions after your own first draft is ready? In my experience, you want to avoid being biased by what the LLM offers and prematurely stopping your search for other solutions.
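For what it's worth, the "critique after drafting" workflow is easy to wire up. Here's a minimal sketch, assuming the OpenAI Python SDK; the model name and prompt wording are mine, purely illustrative, not anything from the post:

```python
# Minimal sketch: ask an LLM for critique only AFTER the human draft is done,
# so the model's suggestions don't anchor your own thinking.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "..."  # your finished first draft goes here

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": "You are a critical reviewer. Do not rewrite the draft.",
        },
        {
            "role": "user",
            "content": (
                "List the strongest counterarguments, limitations, and "
                f"alternative approaches for this draft:\n\n{draft}"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

The system prompt's "do not rewrite" constraint is the point: the model plays devil's advocate rather than handing you text you'll be tempted to adopt wholesale.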
I like this distinction, thanks. Another thought I had: problems that are easy for humans tend to be easy for LLMs, and problems that are hard for humans tend to be hard for LLMs. That contrasts with other ML models, like chess engines, and more generally with computer programs that can do things humans can't.
Not sure I follow. Breaking an article down and explaining it using first-principles thinking is one of the most important examples of System 2 thinking, and LLMs do that with ease, among the numerous other System 2 tasks they can handle.
Insightful post, Matt! I often read your writing. Good stuff. Keep going!