Earlier this year, an employee at tech giant Meta built a system to track how much each staff member was using artificial intelligence (AI).
Named "Claudeonomics" after the Claude chatbot, the system created a leaderboard ranked by the number of tokens each user was exchanging with AI models, with leaders given titles such as "Token Legend". (Tokens are tiny chunks of text, each around four characters long, that language models use for processing.)
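As a rough illustration of that four-characters-per-token rule of thumb, here is a minimal sketch in Python. The function name and default are our own; real tokenizers split text differently depending on the model and language, so this is only a back-of-the-envelope estimate.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token heuristic.

    Actual tokenizers (e.g. those used by Claude or GPT models) produce
    different counts; this is only an order-of-magnitude guide.
    """
    return round(len(text) / chars_per_token)

# A 31-character sentence comes out at roughly 8 tokens.
print(estimate_tokens("Tokens are tiny chunks of text."))  # prints 8
```

At this rate, "billions of tokens in a week" corresponds to billions of words of text flowing through a model on one employee's behalf.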
Meta is not alone in its fascination with "tokenmaxxing": AI labs OpenAI and Anthropic, e-commerce company Shopify, and tech investment firm Sequoia Capital are all reportedly monitoring AI usage and rewarding heavy users, some of whom burn billions of tokens in a week.
Reducing a person's performance to a single metric can be appealing for management in large corporations. But the choice of what to measure isn't a neutral one - and if we're not careful, it can start to rewrite our vision of what we actually value.
The score keeps the score
One of the more full-throated advocates of tokenmaxxing is Jensen Huang, chief executive of chipmaker Nvidia, who envisions a future in which tech employees negotiate high token budgets and consume tokens at rates commensurate with their salaries. Around 80% of those tokens are currently processed on Nvidia's chips, so Huang's enthusiasm makes sense.
But is token consumption a helpful metric for those of us who do not profit directly from AI processing volume?
In a recent book, The Score, philosopher C. Thi Nguyen analyses the rise of metrics throughout modern society and offers some helpful insights.
As Nguyen emphasises, what we measure shapes our goals. We develop metrics as tools of convenience; they standardise our measurement of values so we can compare large numbers of otherwise disparate things.
This standardisation comes at the expense of variation and distinctiveness, Nguyen argues. In business, it can make workers seem interchangeable.
Determining which employees in a large organisation are consuming the most tokens in a week is fairly straightforward. But it tells us nothing about the quality or impact of their work.
Bad metrics, bad results
In the past, questionable metrics have contributed to dramatically bad outcomes.
Prior to the 2008 global financial crisis, for example, many financial institutions had sophisticated systems of measures designed to incentivise selling as many loans as possible, as quickly as possible. Perhaps unsurprisingly, many of those loans turned out to be far riskier than anyone realised.
Nguyen emphasises that these types of metrics can tempt us into thinking they are unavoidable. But one of the central lessons of moral philosophy is that we ought to pause at moments like these and ask a couple of basic questions: what is a good life, and what values are actually worth chasing?
Huang and others usually don't present tokenmaxxing as an answer to these questions. But that's how it functions. What is worth devoting your professional and creative energy to? Simple: grinding through tokens.
A new vision of the good life?
Silicon Valley has, of late, produced a striking number of manifestos and quasi-constitutions.
Consider Claude's Constitution, published by Anthropic in January 2026, which sets out the company's aspirations for its model's values and speech. Or look at venture capitalist Marc Andreessen's Techno-Optimist Manifesto, which makes the case for ambitiously accelerating technological advancement in the service of promoting human flourishing.
Some of the most influential texts in the history of moral and political philosophy take this form. Thomas Jefferson wrote one - the US Declaration of Independence. Karl Marx and Friedrich Engels wrote another - The Communist Manifesto.
One way to view these Silicon Valley proclamations, and trends like tokenmaxxing, is as repackaging familiar commonplaces of corporate life - recasting mission statements and key performance indicators in a loftier register. But another is to see them as attempts to do something far more ambitious: sketch the outlines of a new and far-reaching vision of the good life.
On that view, the metrics used to measure progress against the vision matter. Tokenmaxxing, for example, is already creeping beyond the bounds of the tech industry - one report from the Wharton School at the University of Pennsylvania suggests many organisations are prioritising staff AI usage and spending as metrics.
Metrics can be useful - if we're careful
Metrics do have their place in an ordered and complex society. There are many instances in which we might happily defer to the scores produced by simple metrics, trading nuance for convenience. Aggregate ratings on product or restaurant review sites, for example, can simplify our decision-making, even if they aren't tailored to our specific preferences.
The problem is what Nguyen calls "value capture" - when we uncritically allow external metrics to determine our own goals and behaviour. Resisting this process involves questioning what is being measured and reframing it.
Instead of counting tokens, for example, we might use an equivalent metric such as energy consumption. Energymaxxing might sound more like conspicuous wastage than improved performance.
Counting tokens is one measure of AI activity, which is itself only a proxy for productivity, which in turn leaves aside the question of what is actually being produced. Not only is tokenmaxxing a dubious metric in itself, but it may also distort our vision of what matters.
Victoria Lorrimar receives funding from the John Templeton Foundation.
Tim Smartt does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.