Developer Velocity
I've often written about developer velocity, but haven't formally defined it. In general, velocity is speed with direction. So I think of it like this:
Developer velocity is a measure of productivity related to the rate of software changes.
Developer velocity isn't the whole of developer productivity. I think of developer velocity as the post-commit workflow. Once a feature or change set is ready, how much "red tape" is there to get those changes out to customers? But with software, "red tape" isn't just bureaucratic.
How fast can changes go from development to production? Most organizations don't have a continuous or automated pipeline, so a related measure is how often changes are deployed.
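As a back-of-the-envelope illustration, here's one way to compute those two numbers from deploy records. This is a minimal sketch, not a real tool: the `deploys` list and its fields are hypothetical stand-ins for data you'd pull from your CI/CD system.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deploy records; in practice you'd pull commit and
# production-deploy timestamps from your CI/CD system.
deploys = [
    {"committed": datetime(2023, 5, 1, 9, 0), "deployed": datetime(2023, 5, 1, 14, 30)},
    {"committed": datetime(2023, 5, 2, 11, 0), "deployed": datetime(2023, 5, 4, 10, 0)},
    {"committed": datetime(2023, 5, 8, 16, 0), "deployed": datetime(2023, 5, 9, 9, 15)},
]

# Lead time: how fast changes go from development to production.
lead_times = [(d["deployed"] - d["committed"]) / timedelta(hours=1) for d in deploys]
print(f"Median lead time: {median(lead_times):.1f} hours")

# Deployment frequency: how often changes reach production.
span = max(d["deployed"] for d in deploys) - min(d["deployed"] for d in deploys)
print(f"Deploys per week: {len(deploys) / (span / timedelta(weeks=1)):.1f}")
```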
How often do changes fail? There's never true parity between development and production. That's why, even in organizations with the most advanced tooling, you still see outages from tough-to-test changes like BGP routing (see Meta's 2021 outage). Flaky tests, flaky deploys, and bad deploys all create friction that works against developer velocity.
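Change failure rate is the simplest of these to sketch, once you decide what "failed" means for a deploy. The `statuses` list below is made up for illustration.

```python
# Hypothetical deploy outcomes; "failed" could mean rolled back,
# hotfixed, or tied to an incident; pick a definition and stick to it.
statuses = ["ok", "ok", "failed", "ok", "failed", "ok", "ok", "ok"]

failure_rate = statuses.count("failed") / len(statuses)
print(f"Change failure rate: {failure_rate:.0%}")  # -> 25%
```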
How long does it take to reach a desired state? Rolling back changes, patching security vulnerabilities, recreating environments? Even in automated systems, full build and deployment times can vary wildly. A build that takes an hour (like recompiling a Linux kernel) can make a deployment cycle frustratingly long. Longer loops mean less feedback for developers.
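The same kind of sketch works for time to reach a desired state: record when a bad state was detected and when the desired state was restored. The incident records below are hypothetical.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incidents: when a bad state was detected, and when the
# desired state was restored (rollback finished, patch deployed, etc.).
incidents = [
    (datetime(2023, 5, 3, 10, 0), datetime(2023, 5, 3, 10, 45)),    # rollback
    (datetime(2023, 5, 10, 14, 0), datetime(2023, 5, 10, 18, 30)),  # security patch
]

restore_minutes = [(restored - detected) / timedelta(minutes=1)
                   for detected, restored in incidents]
print(f"Mean time to restore: {mean(restore_minutes):.0f} minutes")
```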