The knowledge dividend of large language models

AI
LLMs
Cross-post: Starschema

Crosspost from the work blog: a pragmatic perspective on the knowledge dividend of large language models (what it may, or may not, tell us about what these models know, and what it can do for you).

Author: Chris von Csefalvay

Published: 2 October 2023

Over at the work blog, I’m discussing what knowledge means for large language models (LLMs), and the ways in which we can leverage this knowledge dividend for better inference.

As I’m writing this, the sun hasn’t risen over the Denver skyline in earnest. There’s still pink in the sky over the Front Range, and most of the world is still blissfully asleep. And so far, a small, moderately fine-tuned LLM trained on $500 worth of free credits has explained to me just how bad the Broncos’ recent 20–70 embarrassment against the Miami Dolphins is (very), made some useful suggestions for a Caddoan language to learn if I wanted to help with language preservation (Pawnee) and created a fairly acceptable recipe to salvage whatever is left in my fridge (spicy tomato and cheese omelet with a chia side salad). Not too shabby for something that has absolutely no understanding of language preservation, omelets or American football (then again, neither do I, as far as the last one is concerned).

And therein lies one of the pervasive paradoxes of LLMs: they generate very confident, very credible and very often correct answers to questions on subjects they really don’t know all that much about.

Read the full post here.