What are "true names" in the context of AI alignment?
True names are precise mathematical formulations of intuitive concepts, ones that capture all of the properties we care about in those concepts. “True names” is a term introduced by alignment researcher John Wentworth, possibly inspired by the idea from folklore that knowing a thing's “true name” grants you power over it.
Wentworth gives many examples of true names. Concepts like “force”, “pressure”, “charge”, and “current” were all once poorly understood, based on vague intuitions about the physical world, but have now been robustly formalized mathematically.
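As one concrete illustration of what such a formalization looks like (the equation itself is not quoted in the original discussion), the once-vague notion of “force” is now pinned down by Newton's second law, which identifies force with the rate of change of momentum:

$$\vec{F} = \frac{d\vec{p}}{dt} = m\vec{a} \quad \text{(for constant mass } m\text{)}$$

This formulation captures the intuitive notion precisely enough that it generalizes well beyond the situations it was first abstracted from.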
To put it another way, a “true name” can be thought of as a mathematical formulation that robustly generalizes as intended. An important property of true names is that they are not susceptible to failure via Goodhart's law:
“When a measure becomes a target, it ceases to be a good measure.”
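To make that failure mode concrete, here is a minimal sketch (a toy example, not code from the original article; the names `true_objective` and `proxy_metric` are illustrative) of Goodhart's law in action: a proxy that tracks the true objective over ordinary inputs stops being a good measure as soon as an optimizer targets it directly.

```python
import numpy as np

# Toy illustration of Goodhart's law (hypothetical example).
# true_objective is what we actually care about; proxy_metric is a
# measurement that tracks it over ordinary values of x but keeps
# rewarding larger x long after the true objective has turned downward.

rng = np.random.default_rng(seed=0)

def true_objective(x):
    # What we really want: best at x = 2, worse the further we stray.
    return -(x - 2.0) ** 2

def proxy_metric(x):
    # A measure that agrees with "bigger x is better" near x = 0..2,
    # but never stops rewarding more x.
    return x

# Naive optimization: sample candidates and pick the highest proxy score.
candidates = rng.uniform(0, 10, size=1_000)
chosen_by_proxy = candidates[np.argmax(proxy_metric(candidates))]
chosen_by_true = candidates[np.argmax(true_objective(candidates))]

print(f"Optimizing the proxy:   x = {chosen_by_proxy:.2f}, "
      f"true value = {true_objective(chosen_by_proxy):.2f}")
print(f"Optimizing the target:  x = {chosen_by_true:.2f}, "
      f"true value = {true_objective(chosen_by_true):.2f}")
# Selecting for the proxy drives x toward 10, where the true objective
# is far worse than at the x ≈ 2 optimum: once the measure became the
# target, it stopped being a good measure.
```

A true name for the objective would, by definition, close exactly this gap: optimizing the formalization would just be optimizing the thing we care about.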
Many alignment researchers care especially about finding a true name for human values. It would be a huge boon for AI alignment if human values could be formalized precisely enough that an AI optimizing that formalization would reliably produce outcomes we actually want, rather than a Goodharted proxy of them.
In addition to human values, alignment researchers seek true names for components of agency such as optimization, goals, world models, abstraction, counterfactuals, and embeddedness.