jonluca:

Do LLMs exhibit some of the same cognitive fallacies, biases, and failure modes that humans show during System 1 (S1) thinking?

I’m not sure how far the parallels to human thinking will go; we’re already seeing some failure modes (hallucinations, etc.) come up much more frequently in these models than they do in humans. It’ll be interesting to see whether a whole new class of biases arises from purely stochastic models.
