🤔 Why aren't bigger models *always* better? This post explores a curious paradox similar to one seen in educational attainment: gains don't always equal progress! #AI #LLMs #Paradox #Tokens #MachineLearning
- The article explores the disconnect between increasing token counts and the outcomes we expect from scale.
- It references a previous "micro-macro paradox" involving educational attainment.
- It suggests something is amiss in how we understand the relationship between scale and results.