Ted Evans Lecture, Brisbane

Australian Treasury

The Ted Evans legacy

It is an honour to deliver the Ted Evans Lecture this evening. Ted Evans belonged to a generation of economic policymakers who understood that ideas matter most when they are tested against reality. He combined analytical discipline with institutional seriousness, and he approached economic reform as a practical responsibility to improve the functioning of the nation. Those are demanding standards. They are also standards that continue to shape how many of us think about the craft of economics.

Ted's life reminds us that intellectual authority does not always begin in predictable places. He grew up in modest circumstances - his father was a fitter and turner - and attended Ipswich High School before leaving at 15 to work as a linesman in the Postmaster‑General's Department. The trajectory from teenage linesman to Secretary of the Treasury illustrates a quality that economists sometimes underweight: the capacity of institutions to recognise talent that does not arrive with conventional signals.

One of the pivotal moments in his early career reportedly occurred over a drink with the late great Max Corden. Evans had been considering a PhD in New Zealand. Corden persuaded him instead to join Treasury and learn economics from inside government. In retrospect, that advice shaped the intellectual architecture of Australian economic policy for decades. It is a reminder that careers, like economies, are path dependent.

Evans served as Treasury Secretary from 1993 to 2001, working with treasurers John Dawkins, Ralph Willis and Peter Costello, during a period in which the reform momentum of the Hawke-Keating era continued to reverberate through the policy system. The reforms associated with that period transformed Australia from what Paul Keating memorably described as a 'sclerotic command and control economy' into one that was more competitive and flexible. Large reductions in marginal tax rates, the introduction of capital gains and fringe benefits taxation, dividend imputation, and changes to the taxation of superannuation were structural shifts that altered incentives across the economy.

Keating later observed that Evans believed in policy innovation and was always brave in breaking formerly forbidden policy ground, adding that his convictions were held in a vice‑like grip and deployed in powerful advocacy. It is a beautifully Keatingesque description of what intellectual courage looks like in a public institution. Economics advances partly through journal articles, but it also advances when policymakers are willing to act on ideas whose consequences cannot be known with certainty.

Yet to describe Ted Evans solely in terms of reform risks missing something essential. Those who worked closely with him emphasised not only his intellect but his character. His friend David Morgan recalled a man who was intellectually honest, possessed a brutally honest view of current reality, and never tugged his forelock to anyone.

Morgan also left us with a story that reveals another side of Evans. Evans was the best man at Morgan's wedding to Ros Kelly, and one of his jobs was to drive the groom to the altar. Partway there, Evans pulled the car over, opened 2 cans of Victoria Bitter, and asked, 'David, are you sure you want to go through with this?' Evans actually thought Ros Kelly was terrific. But like a good economist, he figured that it was worth double‑checking before the marital contract was concluded.

How does Ted Evans' career justify my focus on artificial intelligence? Because Evans represents a tradition within economics: the belief that when technology changes, our theories need to change too. In 1776, Adam Smith published The Wealth of Nations the day after James Watt's first profitable dual‑cylinder steam engine entered the market. Our discipline's development has always been intertwined with technological development.

Introduction: AI as an informational shock

Tonight, I want to argue that artificial intelligence requires updating our theories in several ways.

Much of the public discussion treats AI primarily as a productivity story. That framing is not mistaken. General‑purpose technologies have historically driven long‑run growth, and there are good reasons to expect AI to expand the production possibility frontier. But for economists, there's more at stake.

Recent estimates suggest that large language models now serve roughly 800 million to one billion weekly users, who collectively generate billions of prompts each day across professional and personal domains. Diffusion at this scale is unusual. Electricity required the reconfiguration of factories before it dramatically boosted productivity. In this case, the technology is being placed directly into the hands of individuals, who are using it in unexpected ways.

When a technology penetrates tasks associated with production, learning, contracting, and choice, economists should ask not only how much output rises, but whether the informational assumptions underlying our models remain reliable.

The claim I want to defend this evening is that artificial intelligence is both a productivity shock and an informational shock. It increases what economies can produce while also weakening many of the signals economists rely upon to interpret behaviour.

Output has long served as a proxy for effort. Credentials have signalled skill. Prices have conveyed scarcity. Choices have revealed preferences.

If those signals become noisier, manipulable, or partially synthetic, then some of our most familiar theories might need updating. Artificial intelligence may challenge standard assumptions and shift constraints.

This is not the first time economists have faced such a moment. The marginal revolution, the formalisation of national income accounting, the integration of information into microeconomics, and the rise of endogenous growth theory each required the discipline to rethink what it took for granted. Ted Evans spent his career translating some of these intellectual shifts into institutional reality.

Tonight, I will discuss 8 major assumptions of modern economic analysis and consider how artificial intelligence may reshape them. In each case, I will outline the canonical idea, explain the source of the tension, and suggest a research question that might reward further inquiry.

Economics progresses when we update our models to reflect new realities. AI is the biggest general‑purpose technology to come along during the twenty‑first century. It's only right to consider what it means for economic theory.

Let me begin with the production function.

1. Technology in the production function: from scalar TFP to task‑specific productivity

The standard representation of technology in economics is simple. In neoclassical growth theory, technology enters as a scalar shift parameter (A) in the production function (Y = A F(K,L)), raising output in a way that permits aggregation across tasks and workers. Even endogenous growth models typically preserve this abstraction: technology improves productivity broadly rather than reshaping the internal structure of production.

Task‑based models complicate this picture by allowing technologies to substitute for or complement particular activities, yet they often retain 2 implicit assumptions: that task boundaries are relatively stable, and that technological effects are monotonic.

Generative AI challenges both assumptions. While earlier technologies have also reshaped tasks and organisational structure, AI is a substantial shock, targeting specific cognitive tasks rather than factors and producing effects that are frequently non‑monotonic - simultaneously substituting for some activities, complementing others, and reorganising workflows within the same occupation. Moreover, AI often operates as a flow input rather than an embodied capital good: prompts and orchestration matter alongside ownership.
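
To fix notation before turning to specific models, here is a stylised way to write the contrast (my own rendering, not taken from any particular paper):

```latex
% Scalar benchmark: technology is a single shift parameter
Y = A \, F(K, L)

% Task-based alternative: technology is a vector of task productivities
Y = G\bigl( q_1(a_1), \, q_2(a_2), \, \dots, \, q_n(a_n) \bigr)
```

Here each task quality q_i depends on its own technology level a_i, and the aggregator G carries the complementarities across tasks. The economics moves from the single parameter A into the structure of G.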

Recent work on task‑based growth is relevant here. In 'weak links' frameworks, production depends on the successful completion of multiple essential tasks, and total output is constrained by the bottleneck - the task that is hardest to automate or perform (Jones 2026; Aghion, Jones and Jones 2019). Even infinite supply of an 'easy' task leaves output bounded by the remaining hard task.

Gans and Goldfarb (2026) formalise this idea by modelling production as an O‑ring process in which task qualities multiply. Under such complementarities, automating one task does more than replace labour; it reallocates worker time toward the remaining manual tasks, increasing their quality through what they term a 'focus' mechanism. Task‑by‑task substitution logic is therefore incomplete. Automation changes the return to automating other tasks, adoption may occur in discrete bundles, and labour income can rise under partial automation because the value of remaining bottlenecks increases.
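
To see the focus mechanism at work, here is a minimal toy simulation (my own construction, assuming a multiplicative O‑ring technology and an arbitrary diminishing‑returns quality function, not the Gans‑Goldfarb specification):

```python
import numpy as np

# Toy O-ring sketch: output is the product of task qualities, and a
# worker's one unit of time is split across the tasks that remain manual.

def task_quality(effort):
    return 1 - 0.5 * np.exp(-3 * effort)   # diminishing returns to attention

def output(qualities):
    return float(np.prod(qualities))

n_tasks = 4

# Baseline: all 4 tasks manual, time split evenly
manual_q = task_quality(np.full(n_tasks, 1 / n_tasks))
print(f"all manual:       Y = {output(manual_q):.3f}")

# Automate task 0 at quality 0.99; the freed time concentrates on tasks 1-3
focused_q = np.concatenate(
    [[0.99], task_quality(np.full(n_tasks - 1, 1 / (n_tasks - 1)))]
)
print(f"task 0 automated: Y = {output(focused_q):.3f}")
```

Output rises twice over: the automated task improves directly, and the 'focus' effect lifts the quality of the remaining manual tasks. Yet those bottleneck tasks still cap output well below its frontier, which is why task‑by‑task substitution logic understates the interactions.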

There are 2 implications of this. First, aggregation is no longer innocuous. If output is governed by bottlenecks, the macroeconomic effects of AI depend disproportionately on which tasks remain scarce rather than on average productivity gains.

Second, automation should be understood partly as organisational redesign. Technologies that alter task complementarities can flatten hierarchies and redraw task boundaries - much as electrification forced factories to reorganise around new production possibilities (Agrawal, Gans and Goldfarb 2022).

Few people expect the reorganisation of knowledge work in response to AI to take as long as the reorganisation of factory work took in response to electricity. There are also good reasons to believe that automating intelligence itself could have consequences broader than earlier general‑purpose technologies (Jones 2026). But when thinking about AI and economic theory, technology may need to be modelled less as a scalar and more as a high‑dimensional vector of task productivities with endogenous interactions.

This raises a potential research question: In bottlenecked production systems, does AI accelerate growth by eliminating the binding constraints - or does it merely shift scarcity to new tasks? In such an environment, aggregation becomes a theoretical outcome rather than a maintained assumption. The implication is that scarcity shifts from average productivity toward the tasks that remain hardest to automate. Production theory has always been concerned with constraints; AI may alter where those constraints bind.

The next assumption moves from production to labour markets. What happens to skill‑biased technical change when cognition itself is no longer the scarce input?

2. Skill‑biased technical change: when cognition is no longer the scarce input

Skill‑biased technical change (SBTC) has long provided a powerful organising framework for understanding wage inequality. The idea is that new technologies complement skilled labour while substituting for less‑skilled labour, raising the relative demand for education and cognitive ability. Cognitive skill is therefore treated as the scarce factor that commands a wage premium.

The empirical success of this framework is considerable. The 'race between education and technology' model explains much of the long‑run evolution of wage structure by emphasising the interaction between the relative supply of educated workers and the relative demand for skills driven by technological change (Katz and Murphy 1992; Goldin and Katz 2008; Autor, Goldin and Katz 2020). Rising educational attainment alone is estimated to account for roughly 45 per cent of worldwide income growth since 1980, with about one‑third of education's contribution reflecting skill‑biased technological change that increased returns to schooling (Gethin 2025; Katz 2025).

Formally, the canonical model treats output as produced by imperfectly substitutable skilled and unskilled workers within a CES production function. Skill‑biased technical change can operate along 2 margins: an intensive margin that raises the relative productivity of skilled workers, and an extensive margin that reallocates tasks toward them through automation (Katz 2025). The college wage premium then depends on a race between relative skill supply and technology‑driven demand shifts.

Yet the same framework that performed well in explaining late twentieth‑century inequality now fits less comfortably. In the US case, while education returns explain most of the rise in inequality between 1980 and 2000, a growing share of inequality since 2000 has occurred within education groups, particularly among college graduates (Katz 2025). This suggests that a simple 'college versus non‑college' model explained a lot in the 1980s and 1990s, but does a poor job of explaining contemporary labour‑market dynamics.

Artificial intelligence may have different impacts from previous technologies. Classical computerisation largely automated routine tasks while complementing professionals whose work relied on tacit knowledge, judgement and interpersonal skill (Autor, Levy and Murnane 2003). AI differs in that it begins to automate aspects of non‑routine cognitive work - drafting text, writing code, synthesising information and recognising patterns - activities once regarded as the preserve of highly educated workers. Such skills were scarce in the old SBTC model, because you could only get them through education. Now, AI makes them abundant.

Early evidence suggests that AI assistance can raise productivity disproportionately for lower‑performing workers by supplying procedural knowledge they previously lacked. Rather than amplifying innate ability, the technology often narrows performance gaps.

But before we celebrate the potential of AI to reduce earnings inequality, it is worth considering the general equilibrium implications. Acemoglu (2024) shows that even when AI boosts the productivity of lower‑skill workers, inequality can still rise. This can occur because of an increase in inequality between labour and capital, or because of 'ripple effects' in which impacted demographic groups compete for new jobs, to the detriment of lower‑skilled workers. The result is that narrowing productivity differences in some tasks may coexist with widening wage inequality.

Acemoglu's research emphasises the importance of thinking about the balance between capital and labour, and the ways that labour markets can shake out following a technology shock: reshaping task allocation across the entire production system. He also emphasises the distinction between 'easy‑to‑learn' and 'hard‑to‑learn' tasks. Automating the former may generate modest productivity gains while displacing workers, whereas sustained wage growth and broadly shared prosperity are more likely when technological change creates new tasks - particularly for middle‑ and lower‑paid workers. Whether AI ultimately proves equalising or polarising may therefore hinge less on automation itself than on the economy's capacity for task creation.

Katz similarly argues that the traditional race‑between‑education‑and‑technology framework remains useful but requires modification for an era in which AI can learn rather than merely execute instructions. Unlike classical computing, which excelled at codifiable routines, machine learning expands the frontier of tasks that can be automated, including some that rely on tacit knowledge. For example, software coders are increasingly managing AI agents, while marketing professionals are increasingly moving from writing to editing. As this becomes more widespread across the economy, it will change the nature of occupations, with possible implications for SBTC models (Katz 2025).

Rather than formal skill categories, such as school completion, vocational qualifications and university degrees, the more relevant distinction in the future might be the type of cognition performed: judgement versus execution, oversight versus production, or conceptual reasoning versus procedural cognition.

As Acemoglu, Autor and Johnson (2026) point out, some technologies reduce the value of human expertise by making it easier to replicate, while other technologies increase its value by creating new things for people to do. They describe the latter as 'pro‑worker AI': systems that expand human capability rather than substitute for it. Over the long run, sustained wage growth has depended less on automating existing tasks than on creating new forms of expertise that were previously neither needed nor possible. The question is which path artificial intelligence will follow.

If AI lowers returns to routine cognitive skill while increasing the value of meta‑skills - framing problems, identifying errors, allocating attention, and bearing responsibility - then inequality may have less to do with years of education and more to do with whether or not someone occupies a judgement‑intensive role.

This does not immediately render SBTC irrelevant. The SBTC model still focuses on the interaction between technology and the supply of human capability. But if AI commodifies routine cognition - allowing tasks once reserved for the highly educated to be performed on demand - then formal schooling may no longer be as predictive of productivity. Scarcity may instead migrate toward capacities that machines struggle to replicate: judgement under uncertainty, problem framing, error detection, and the willingness to bear responsibility. This raises a new research question: As AI weakens the link between education and productivity, does judgement emerge as the new scarce factor organising wage inequality - playing the role that schooling occupied in earlier eras of skill‑biased technical change?

The next assumption moves from labour markets to human capital. If education is no longer the primary proxy for scarce cognition, how should economists think about the formation of productive capability in an age of intelligent machines?

3. Human capital theory: when learning and doing decouple

Human capital theory rests on a tight linkage: skills are costly to acquire, internalised through education and experience, and embodied in workers. Productivity reflects accumulated knowledge, and credentials serve as imperfect signals of that stock. Learning and doing are therefore mutually reinforcing. To perform a task is to have learned it.

Artificial intelligence upsets that premise by allowing individuals to execute complex tasks without fully internalising the underlying capabilities.

Universities provide the clearest early test of this tension because they are the institutions tasked with converting learning into credible signals of capability. Historically, educators could treat student output as evidence of accumulated knowledge. When AI mediates that output, the inference becomes less reliable.

Experimental evidence suggests that students who use AI for writing produce work that is less original and less accurate (Niloy et al. 2024). As New York University's Clay Shirky writes, 'Learning is a change in long‑term memory… Now that most mental effort tied to writing is optional, we need new ways to require the work necessary for learning' (Shirky 2025). Any student can copy an essay question into a large language model and copy the resulting answer into a Word document. Most students can then edit the essay so as to avoid being caught by AI detection algorithms such as Turnitin, GPTZero and Pangram. In such a scenario, the resulting essay provides no signal of how well the student understands the subject matter.

AI has killed take‑home assessment. Many universities are returning to in‑class writing, oral examinations, and other supervised forms of assessment designed to ensure that students demonstrate what they have internalised rather than what they can generate with tools. Ironically, this new technology is forcing institutions back to old methods of evaluating students.

For economists, one issue is measurement. Human capital has never been directly observable; it has been inferred from proxies such as grades, test scores, completed coursework, and degrees. These proxies worked tolerably well because producing academic work generally required the underlying skill. AI weakens that relationship by lowering the effort required to generate polished output.

This introduces a measurement problem. If 2 students submit equally sophisticated essays, but one relied heavily on AI while the other did not, what does the observed performance tell us about their respective stocks of human capital? The old assumption that output quality reflects capability fails. Copy‑paste with ChatGPT is as close to true tertiary learning as watching a marathon on television is to actually running one.
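
A toy simulation makes the inference problem concrete (entirely illustrative; the functional form and parameters are my assumptions, not estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
skill = rng.normal(0, 1, n)                  # latent human capital

def corr_with_skill(ai_share):
    # Observed output blends the student's own skill with a common
    # AI-polished baseline; toy signal-extraction model only.
    ai_baseline = 1.0                        # polished output for everyone
    noise = rng.normal(0, 0.3, n)
    output = (1 - ai_share) * skill + ai_share * ai_baseline + noise
    return np.corrcoef(output, skill)[0, 1]

for share in [0.0, 0.5, 0.9]:
    print(f"AI share {share:.0%}: corr(output, skill) = {corr_with_skill(share):.2f}")
```

As the AI share rises, observed output converges toward a common polished baseline and its correlation with underlying skill collapses - precisely the degradation of the proxy that matters for empirical work.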

The implications extend to the signalling role of education. Degrees function partly because they certify that students have cleared cognitively demanding hurdles. If those hurdles become easier to surmount with assistance, institutions may need new ways to distinguish between mastery and orchestration. Greater use of monitored assessment is an attempt to produce a valid signal in an AI age.

This is also a question about the objective function of educational institutions. Is the objective to assess what students can recall unaided, what they can produce with tools, or their judgement in deciding when and how to rely on those tools? Each is a different conception of human capital.

Human capital theory has always relied on an implicit equivalence: doing implies knowing. AI dissolves that equivalence within the very institutions responsible for certifying skill. The use of large language models in educational institutions isn't just a question for those administering assessment. It is also an issue that matters for economic analysis, since the empirical measurement of human capital begins in the classroom.

This leads to a new research question: If doing no longer implies knowing, how should we measure human capital in education?

The next section moves from education to organisational structure. If capability can be augmented on demand, what happens to the boundaries of the firm and the nature of expertise within it?

4. Contract theory: delegation without humans

Contract theory begins from a basic premise: when one party delegates decisions to another, incentives must be aligned to manage agency problems. Contracts exist because effort is costly to observe, preferences may diverge, and opportunism is always possible. Much of the modern theory of the firm, from Coase through Holmström, can be read as an effort to understand how contractual arrangements economise on transaction costs while controlling these risks.

AI agents challenge this framework because they introduce a new type of delegate: tireless, computationally scalable, instantaneous and capable of acting autonomously in market environments.

An AI agent can 'perceive, reason, and act in digital environments to achieve goals on behalf of human principals,' searching, negotiating, and transacting directly with counterparties (Shahidi et al. 2026). This means that many activities that constitute transaction costs (for example, learning prices, drafting contracts, monitoring compliance) are precisely the kinds of tasks that agents can execute at extremely low marginal cost.

Seen through a Coasean lens, this matters since transaction costs help determine the boundary between firms and markets. When those costs fall, the traditional make‑or‑buy calculus shifts with them. Activities once internalised within firms may migrate outward to markets, while entirely new forms of exchange become feasible. Lower transaction costs change the contracting decision.

Individuals deploy AI agents because they expect better decisions or strategic advantages such as anonymity in bargaining. This delegation raises an old question in a new form: what does alignment mean when the agent is software?

The design challenge has 2 parts: eliciting preferences accurately and ensuring that the agent executes them faithfully. Even advanced systems cannot capture every contingency that a principal cares about, particularly when preferences are high‑dimensional. A homeowner selling a property may focus primarily on price and timing; a buyer may weigh dozens of attributes simultaneously. If the agent makes errors about the principal's preferences, the result can be that the buyer ends up with the wrong home or the seller accepts an early lowball offer.

When the agent was human, much of the principal‑agent problem concerned whether the agent would supply enough effort when that effort was unobservable. With software agents, the principal‑agent problem is less about shirking and more about the accurate execution of the principal's preferences.
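
One way to see how preference mis‑specification replaces shirking as the core risk is a small simulation (my own toy construction; the linear utility and error structure are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def wrong_pick_rate(weight_error, n_attrs=10, n_options=50, trials=2000):
    """Share of trials where the agent's choice differs from the
    principal's true optimum, given noisy preference elicitation."""
    misses = 0
    for _ in range(trials):
        true_w = rng.normal(0, 1, n_attrs)                        # principal's true weights
        agent_w = true_w + rng.normal(0, weight_error, n_attrs)   # elicited weights
        options = rng.normal(0, 1, (n_options, n_attrs))          # e.g. houses on the market
        if np.argmax(options @ true_w) != np.argmax(options @ agent_w):
            misses += 1
    return misses / trials

for err in [0.05, 0.2, 0.5]:
    print(f"elicitation error {err}: wrong option chosen {wrong_pick_rate(err):.0%} of the time")
```

The agent never shirks - it optimises flawlessly - but it optimises a slightly wrong objective, and the rate of mistaken choices rises steadily with the elicitation error.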

Another complication arises from what might be termed meta‑rationality: agents must determine when to act autonomously and when to defer to their principals. Excessive autonomy invites costly mistakes; excessive deference erodes efficiency. Working out when to act and when to defer is likely to become a central design problem for both economists and engineers.

Where will agent‑mediated contracting be adopted? Agents are especially valuable in markets characterised by high stakes, large pools of counterparties, substantial evaluation effort, information asymmetries, or experience asymmetries. Unsurprisingly, these are settings where human intermediaries already play a significant role.

Consider real estate or job search, where agents can analyse vast documentation without fatigue and conduct due diligence at near‑zero marginal cost. In markets with enormous counterparty spaces, such as freelance hiring or rental platforms, agents can evaluate thousands of options simultaneously. Where evaluation is costly, agents can read every review and compare every metric. Where information is asymmetric, they can cross‑reference sources continuously. And where experience is uneven - buying a house once a decade versus selling daily - agents can grant each user the negotiating leverage of a frequent transactor.

AI agents are not merely low‑priced digital versions of human agents. They may also take away some of the informational advantages that professional agents currently rely upon.

New frictions accompany these efficiencies. Agents may create congestion - for example, by flooding employers with applications once the marginal cost of applying approaches zero. They may also enable new forms of price obfuscation or strategic behaviour as firms adapt to algorithmic counterparties - for example by using AIs to conduct first‑round job interviews. Efficiency gains therefore do not automatically translate into welfare gains. For example, in the area of housing approvals, AI agents are already being used to reduce the cost of preparing a development approval, and to reduce the cost of preparing objections to a development approval.

One possible implication is that agents expand the feasible set of market designs. By lowering the cost of eliciting preferences, enforcing commitments, and verifying identity, mechanisms that once seemed theoretically elegant but practically unattainable may become operational at scale.

Equilibrium outcomes remain uncertain. Even when it is individually rational for consumers and firms to deploy agents, the resulting market may diverge from the social optimum - particularly when externalities arise across interacting agents or when alignment is imperfect.

This has implications for contract theory. Historically, the problem centred on writing better contracts between humans - parties with preferences, incentives, and the capacity for opportunism. Increasingly, the challenge may lie in designing environments in which software agents contract with one another on our behalf.

This changes the foundations on which much of contract theory rests. Incentives discipline human agents because effort is costly and interests diverge. Software agents do not shirk. Instead, the central risks arise from mis‑specified objectives and incomplete preference capture. The main design problem isn't motivating behaviour; it's specifying goals.

If the twentieth century forced economists to grapple with incomplete contracts, the twenty‑first may require us to confront contracts executed at machine speed - precise in form, yet only as faithful as the preferences they encode. This suggests a new research question: When delegation shifts from humans to AI agents, what replaces incentives as the central problem of contract design?

The next assumption moves from delegation to information. What happens when the signals on which markets rely are increasingly generated, filtered, and negotiated by machines?

5. Expertise and professional markets: when advice becomes abundant

Traditional economics treats expertise as scarce because it bundles 3 elements: specialised information, trained judgement and accountability. Professional markets evolved institutions such as licensing and reputation precisely to allocate this scarce resource. These mechanisms help clients identify competence in environments where quality is difficult to observe and mistakes are costly. Scarcity led to high returns to expertise. Scarcity also helped develop today's professional regulatory structures.

Artificial intelligence challenges this foundation by unbundling the components of expertise. The informational layer - once costly to acquire and slow to transmit - now scales almost without friction. Systems can synthesise research, generate differential diagnoses, draft legal submissions, and analyse financial positions in seconds. Advice, long constrained by human attention, becomes abundant. The informational component of expertise is no longer scarce.

Yet the remaining elements of the bundle behave differently. Professional decision‑making still occurs under uncertainty: client needs are imperfectly observed and even well‑judged interventions can produce adverse results. These conditions help explain why professions rely on ethical codes and peer oversight rather than simple performance metrics.

It is important to distinguish between prediction and judgement. Many expert decisions proceed from data to prediction to action. AI strengthens the predictive stage by processing vast information sets quickly and consistently. But mere prediction does not resolve the decision. Because forecasts are probabilistic, someone must still exercise judgement, determining how risks should be weighed and whose welfare takes priority.

Early evidence suggests that algorithmic tools compress variation in performance rather than redefining the frontier. In fields such as radiology, AI often improves the outcomes of weaker practitioners more than those of top experts, pulling the lower tail toward the mean while leaving the frontier largely human (MacLeod 2025). The informational component of expertise diffuses; the judgement component remains unevenly distributed.

AI changes what is scarce. Information becomes plentiful, but responsibility remains scarce. A person must still sign the audit, approve the treatment plan, sign off on the report or issue the ruling. These acts carry legal liability and reputational stakes that software does not absorb.

Professional norms shape this responsibility. Judges are expected to uphold impartiality; physicians are bound to place patient welfare above self‑interest. Because outcomes are noisy, performance is often evaluated by fellow professionals rather than by crude output measures. Trust functions as an institutional response to uncertainty, and its value may rise as informational abundance makes it harder to distinguish reliable judgement from confident error.

Economically, this can be thought of as the partial unbundling of expertise. As informational rents erode, value shifts toward roles that combine judgement with institutional authority. Routine advisory tasks are likely to migrate toward low‑cost automated systems, while humans concentrate in positions where the cost of error is high and accountability unavoidable. The partner who signs the opinion, the surgeon who leads the procedure, the engineer who approves the plan, the judge who authors the decision - these roles anchor trust in ways that scalable intelligence cannot easily replicate.

Professional services may therefore bifurcate. At one end sits commodified advice: inexpensive and machine‑generated. At the other lies judgement under uncertainty, where contextual reasoning and ethical responsibility dominate. Consumers may rely on AI for routine guidance yet continue to seek human expertise when stakes rise.

Seen in this light, AI does not eliminate expertise; it changes it. Scarcity shifts away from information and toward judgement and accountability. Across domains, a similar pattern is emerging: artificial intelligence expands supply in some dimensions while concentrating scarcity in others. Licensing and reputation may come to signal something subtly different - not who knows the most, but who is prepared to bear responsibility when knowledge runs out. This suggests a research question: If AI makes advice abundant, what becomes the scarce input in expertise - information, judgement or the willingness to bear responsibility?

The next assumption moves from expertise to welfare. What happens when the preferences economists seek to measure are increasingly shaped by the same systems that advise us?

6. Welfare economics and revealed preference: when preferences are co‑produced

Welfare economics rests on the proposition that preferences are stable and exogenous, and choices reveal welfare. From this foundation follows the machinery of consumer surplus, cost-benefit analysis, decision‑making under uncertainty and much of modern policy evaluation. If individuals choose option A over option B, economists infer that A yields higher utility. Revealed preference becomes a sufficient statistic for welfare.

Artificial intelligence complicates this by intervening upstream of the choice itself. Digital markets have long attempted to infer preferences from behaviour, but conventional recommendation systems operate within relatively fixed domains. AI agents differ in operating through open‑ended language and interaction, allowing any statement or request to serve as input. They invite extended engagement, and are increasingly being used as therapists, counsellors, friends and lovers. A decade ago, customers might have read reviews on Amazon or Google Shopping to decide what brand of sneakers to buy. Today, customers might have a conversation with a large language model about whether they should buy sneakers or splurge on a night out with friends.

The result is a shift toward preference construction. When an agent filters information or frames alternatives, the eventual choice reflects a joint process between human and machine. Preferences no longer precede market interaction; they also emerge through dialogue with AI systems. Many people are turning to chatbots for advice before engaging in market transactions. In a survey of customers in the Asia‑Pacific region, 74 per cent reported using AI‑powered tools to discover, track or learn about products (AAP 2026).

Economists are familiar with the idea that information design can shape behaviour. The literature on Bayesian persuasion shows how a sender can structure signals to influence the receiver's actions while remaining formally consistent with rational choice (Kamenica and Gentzkow 2011). What distinguishes AI is the sheer magnitude of this influence. Rather than a discrete intervention, persuasion becomes personalised and embedded in everyday decision environments.

Autor et al. (2025) highlight an additional complication: AI systems themselves face incentives. Machine learners perform 2 distinct tasks: choosing how to classify and learning what is worth knowing in the first place. Alignment strategies that make sense from a welfare perspective can therefore backfire. Training models with utility‑weighted loss functions embeds human preferences directly into predictions, yet this adjustment can weaken incentives for learning by flattening the value of additional information.

This matters not only for model design. If the informational environment is shaped by systems whose learning incentives are distorted, the preferences that users express - and the options they encounter - may already reflect those distortions. Welfare analysis then rests on behaviour generated within an engineered choice architecture.

Take the example of AI for medical diagnosis. Because failing to detect pneumonia is far costlier than a false alarm, engineers often penalise false negatives more heavily during training. Intuition suggests that such weighting aligns the model with human objectives. Yet Autor et al. (2025) show that it can produce systematically worse outcomes than training an unbiased predictor and adjusting decisions afterward, because the weighted objective discourages the acquisition of useful information. Alignment, in this sense, must address incentives for learning as well as incentives for choice.
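
As a sketch of the 2 pipelines (using scikit-learn and synthetic data; this illustrates the design choice only, and does not reproduce the paper's result):

```python
# (a) bake asymmetric error costs into training via class weights;
# (b) train an unbiased probability model, then move the decision
#     threshold to the cost-minimising point.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

cost_fn, cost_fp = 10.0, 1.0   # a missed pneumonia is 10x worse than a false alarm

# (a) utility-weighted training
weighted = LogisticRegression(class_weight={0: cost_fp, 1: cost_fn},
                              max_iter=1000).fit(X_tr, y_tr)

# (b) unbiased training + cost-based threshold:
#     flag when p * cost_fn > (1 - p) * cost_fp
unbiased = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
threshold = cost_fp / (cost_fp + cost_fn)
flags = unbiased.predict_proba(X_te)[:, 1] > threshold
```

Pipeline (b) keeps the asymmetric costs out of the learning stage and pushes them into the final decision rule, preserving the model's incentive to learn calibrated probabilities - the separation Autor et al. (2025) argue for.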

The difficulty is compounded by the structure of human preferences themselves. In some domains they can be simplified into a few dimensions (such as price‑sensitivity or environmental sustainability). In other domains, preferences are sprawling and internally inconsistent. When preferences are high‑dimensional, even small reporting errors can propagate through the decision process, steering outcomes in unintended directions. Individuals may struggle to articulate what they want, while agents may misinterpret what is expressed.

This matters when decisions are delegated. For AI agents to act effectively, they must know the principal's preferences and remain aligned with them: a modern variant of the principal-agent problem (Bostrom 2014; Christian 2020; Shahidi et al. 2026). Consumers will demand systems that are capable, knowledgeable about their goals, and faithful in pursuing them, while benchmarks and reputational signals are likely to emerge to guide adoption.

Yet even perfectly aligned agents may alter welfare analysis. An algorithm that searches thousands of options without opportunity cost, reads every review and evaluates attributes humans would ignore expands the effective choice set. Expanded choice typically signals welfare improvement. But it is more difficult to be sure of this result if the process simultaneously shapes the preferences themselves.

Welfare economics has traditionally focused on preference satisfaction. An economy shaped by AI may require economists to pay more attention to preference formation. In domains where intelligent systems actively participate in shaping desires - from consumption bundles to career paths - revealed preference becomes a less reliable guide to wellbeing.

This does not invalidate the revealed‑preference framework, but it narrows the conditions under which it can be applied without qualification. When preferences are endogenous to the decision environment, policy evaluation that relies solely on observed choices risks conflating influence with welfare.

Historically, economists worried about paternalism: when is it justified to override individual choice? The emerging challenge is related, but different. When choices are co‑produced by adaptive systems whose own learning incentives may be imperfectly aligned, whose preferences are we observing? This raises a new question for welfare economics: When preferences are co‑produced by humans and AI systems, under what conditions does revealed preference remain a valid guide to welfare?

There is a further consequence worth noting. If AI systems increasingly shape what people want, they may also shape what innovators attempt to build. The direction of technological change has always responded to demand signals. But when those signals are themselves partially generated by algorithmic systems, there is no longer a clear boundary between discovery and selection. That leads to the question of how AI will reshape the innovation process itself.

7. Innovation and the direction of technological change: from scarcity of ideas to scarcity of judgement

Innovation theory has traditionally begun from the premise that valuable ideas are scarce. From Romer onward, growth models have treated technological progress as the outcome of purposeful research effort, with skilled labour expanding the stock of non‑rival knowledge (Romer 1990). Whether framed through recombinant innovation or creative destruction, the central constraint has typically been the difficulty of discovering economically valuable ideas. Artificial intelligence forces us to reconsider that constraint.

A useful starting point is the observation that modern AI systems are fundamentally prediction technologies. By interpolating across existing knowledge, they reduce uncertainty in decision‑making and allow agents to act effectively even when a question has not been directly answered (Agrawal, Gans and Goldfarb 2025). This capability ties the value of new knowledge to the structure of what is already known: dense clusters of knowledge boost predictive accuracy, while large gaps reduce it. It is no accident that the first AI innovation to win a Nobel Prize was AlphaFold2, which won the chemistry prize in 2024 for predicting the 3D structure of proteins: a field characterised by a dense knowledge cluster. It will take longer for an AI model to win a Nobel Prize in peace or literature, fields where gaps between knowledge clusters are wider.

This creates a tension for AI‑driven innovation. On one hand, better prediction raises the payoff to incremental research that fills nearby gaps. On the other, by lowering the penalty associated with unanswered questions, AI can encourage more exploratory projects that span wider intellectual territory. The direction of technological change may therefore depend on AI capability. When interpolation is modest, researchers are pulled toward densifying the knowledge frontier; when it becomes powerful, incentives tilt toward novelty.

Even before the AI breakthroughs of the early‑2020s, economists had been concerned that ideas are becoming harder to find. Bloom et al. (2020) document declining research productivity across multiple sectors, suggesting that maintaining exponential growth requires ever‑greater research effort. AI has the potential to offset that headwind by improving decision quality and enabling researchers to exploit existing knowledge more effectively. But the general equilibrium effect is ambiguous. If highly capable AI shifts effort toward distant projects whose marginal contribution to growth is lower, overall growth could slow despite direct productivity gains.

Market structure matters too. Because AI adoption shapes research incentives, the organisation of AI provision can influence long‑run knowledge creation. Restricting adoption may, counterintuitively, preserve knowledge density and improve welfare by limiting excessive exploratory research, whereas universal adoption can amplify distortions in research direction (Gans 2025). This is a reminder that competition policy and innovation policy may become increasingly intertwined in an AI‑intensive economy.

AI may alter the allocation of inventive effort itself. If generative systems can perform portions of the research process - generating hypotheses, drafting designs, building digital prototypes, testing permutations - the effective supply of problem‑solving capacity rises. Rather than replacing human ingenuity, such systems may reallocate it toward higher‑level judgement: framing questions and integrating disparate insights. In that sense, AI resembles earlier general‑purpose technologies (such as electricity, computers and the internet) that allowed more experimentation while increasing the premium on discernment.

The result may be that scarcity lies not in generating ideas, but in exercising judgement over which ideas are worth pursuing. Scientific attention, venture funding, regulatory scrutiny, and managerial bandwidth all serve as selection mechanisms. Their performance will matter more when the flow of potential innovations becomes a torrent.

The new technology might also lead to a feedback loop. Because AI systems derive power from existing data, the trajectory of innovation may become more path dependent. Researchers could be nudged toward domains where training data are rich and predictive performance is high, potentially underweighting areas where knowledge is sparse but breakthroughs might be transformative. The result would not be technological stagnation, but a reorientation of inventive effort toward areas that are prevalent in the training data. To take a trivial example, the training data for current AI models includes millions of romance novels, yet health care records are scarce (for good reasons of privacy). The result may be an overproduction of AI‑generated romance novels and an underproduction of rare disease diagnostics. If these AI‑generated outputs are then fed into the training data for the next generation of models, the problem may worsen.

At the same time, lower barriers to experimentation may broaden participation in the innovation process. Smaller firms and geographically dispersed researchers can access capabilities once confined to large laboratories. If realised, this democratisation could increase variance in outcomes - more flops alongside more breakthrough discoveries - echoing earlier periods of technological upheaval. Some evidence supporting this theory comes from Reimers and Waldfogel (2026), who assess the impact of large language models on the quality of new books. They find that large language models reduced the average quality of new books, while increasing the quality of top‑tier books. In this case, AI has led to more slop, and more innovation.

The conceptual adjustment for growth theory is therefore not that AI guarantees faster innovation, but that it moves the bottleneck from discovery to selection. The production function for ideas begins to depend not only on research inputs, but on how predictive systems interact with the existing knowledge base. This suggests a fresh question for growth theory: If AI makes ideas abundant, does the central constraint on innovation shift from discovery to selection?

This brings us to the final assumption. If AI alters both the rate and the direction of innovation, it may also reshape where in the global economy cognitive work is performed - raising the question of whether our existing trade frameworks are equipped to describe a world in which cognition itself can cross borders.

8. Trade and comparative advantage: a Heckscher-Ohlin world with tradable cognition

The Heckscher-Ohlin framework explains trade patterns through differences in factor endowments. Countries export goods that intensively use their abundant factors - historically land, capital, or labour, and more recently human capital. Skilled workers have therefore functioned as a geographically anchored source of comparative advantage. High‑value cognitive work tended to cluster where education systems were strong and talent pools deep.

Artificial intelligence challenges that assumption. When cognitive capabilities can be accessed through software rather than embodied exclusively in workers, portions of high‑skill labour become effectively tradable without migration. Tasks such as drafting documents, writing code, analysing data, or building financial models can increasingly be performed anywhere connectivity exists.

A useful way to see this shift is to compare AI usage with the global distribution of economic output. As of February 2026, India accounts for 16.5 per cent of ChatGPT visitors while producing only about 3-4 per cent of global GDP (First Page Sage 2026). Brazil shows a similar pattern, contributing 5.8 per cent of traffic against roughly 2 per cent of world output. Several middle‑income economies, including Mexico, Colombia, and the Philippines, also appear substantially overrepresented relative to economic size.

By contrast, some of the world's largest advanced economies appear underrepresented once usage is normalised by GDP. The United States accounts for 17.1 per cent of ChatGPT visitors despite producing roughly a quarter of global output. Germany generates about 4-5 per cent of world GDP but only 2.4 per cent of platform traffic. Japan produces around 4 per cent of global output, yet accounts for less than 1 per cent of ChatGPT usage.
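
A simple way to express this normalisation, using the approximate shares quoted above (midpoints where a range was given, and 'less than 1 per cent' for Japan treated as roughly 1):

```python
# Representation ratio = share of ChatGPT traffic / share of world GDP.
# A ratio above 1 means usage outruns economic weight.

figures = {                   # (traffic share %, GDP share %)
    "India":         (16.5, 3.5),
    "Brazil":        (5.8, 2.0),
    "United States": (17.1, 25.0),
    "Germany":       (2.4, 4.5),
    "Japan":         (1.0, 4.0),
}

for country, (traffic, gdp) in figures.items():
    print(f"{country:>14}: {traffic / gdp:.1f}x")
```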

Normalised in this way, the data suggest that access to frontier cognitive tools is diffusing more broadly than per‑capita income levels would predict. Algorithmic capability appears to be weakening the traditional link between national prosperity and effective knowledge work.

This matters because comparative advantage has historically rested on accumulated human capital. If a lower‑income country can augment its workforce with high‑quality machine intelligence, the productivity gap separating it from richer economies may narrow in tasks where cognition is the primary input. AI does not equalise infrastructure, institutions, human capital or managerial quality, but it can compress one of the most persistent sources of cross‑country inequality: access to expertise.

Seen through a trade lens, this looks less like factor convergence and more like factor substitution. Instead of waiting decades to build deep skill bases, economies may import cognition directly through software. The classical prediction that high‑skill activities cluster in high‑skill countries may not hold if 'skill' can be acquired simply by logging onto a large language model.

At the same time, the relative underrepresentation of countries such as the United States, Germany and Japan is worth noting. These countries all have high rates of internet penetration and significant advanced manufacturing sectors, implying that technological frontier status does not automatically translate into the fastest adoption. Organisational practices, regulatory caution, language environments, demographic structure, and workplace norms may all shape how quickly AI integrates into production. Comparative advantage may therefore depend not only on access to the technology, but on a country's willingness to reorganise around it.

This points to an interesting question in international trade. Perhaps the question is shifting from 'Who has the most human capital?' to 'Who combines human capability with artificial cognition most effectively?' Economies that adapt quickly may capture disproportionate gains even without the deepest domestic talent pools. Because AI models happily work in many languages, speaking English may become less of an advantage than in the past.

There is also a strategic dimension. If middle‑income countries adopt AI intensively, they may enter segments of the global services market previously dominated by advanced economies. Activities such as software development, design, marketing and technical analysis could become more geographically contestable. Comparative advantage in high‑end global services may no longer depend on a country having a high‑quality tertiary education sector.

Another implication is that factor endowments may no longer predict trade patterns with the same precision. Access to scalable intelligence begins to function as a new factor of production: globally available, yet unevenly combined with local complements. For trade theory, comparative advantage may increasingly reflect complementarities among AI systems, data availability, governance quality and organisational readiness rather than raw human skill.

In this environment, it would not make much sense to treat knowledge as nationally bounded. Scarcity still exists, but new models of trade in services may have to place less emphasis on human capital as a limiting factor. This raises a new question for trade theory: When cognition becomes partially tradable, does comparative advantage continue to reflect national factor endowments - or the capacity to combine human and machine intelligence?

Conclusion: signals, scarcity, and the adaptive discipline

Across the domains we have considered, a common diagnosis emerges. Artificial intelligence is not just increasing what economies can produce; it is altering the informational environment in which economic activity is interpreted. Many of the proxies that economists have long relied upon are becoming less straightforward.

We have often inferred latent objects from observable ones. Credentials have proxied for mastered knowledge. Advice has bundled information with judgement. Choices have been treated as evidence about preferences. R&D effort has proxied for the flow of new ideas.

When those links weaken - when performance, advice, and choice can be augmented, filtered, or partially generated by machines - some of the discipline's most familiar inferences require rethinking.

A pattern runs through each of the assumptions we have revisited. Scarcity is being relocated (see Table 1). Where production theory once treated technology as a scalar, attention shifts toward bottlenecks. Where labour economics focused on education, the locus of scarcity moves toward judgement. Where expertise rested on privileged access to information, value increasingly attaches to accountability. Where welfare analysis relied on exogenous preferences, economists must now reckon with environments in which preferences are shaped through interaction. Where growth theory emphasised the difficulty of generating ideas, the constraint may become our capacity to select among them. And where trade theory located comparative advantage in nationally bounded skill, cognition itself begins to move across borders.

Economic frameworks rarely collapse; more often, they are extended, recombined, and put to work under new conditions. The task is not to jettison these intellectual achievements, but to recognise when their assumptions no longer match the structure of the real economy. If there is a single implication I hope you take from this lecture, it is that the discipline may need to devote less energy to models that treat key signals as given, and more to models that explain how signals are generated, distorted, restored and transmitted.

Ted Evans belonged to a tradition that understood this instinctively. He recognised that economics is not a static catalogue of results, but a living framework - one that must evolve when the structure of the economy changes. The reforms he helped shape were grounded in theory, yet responsive to evidence. That balance between analytical discipline and practical adaptation remains a demanding standard.

Artificial intelligence presents us with a similar test, challenging us to examine our assumptions, and update our theories. Moments like this are clarifying for a discipline. They remind us that economics advances not only through technical refinement, but through a willingness to revisit first principles. The marginal revolution forced economists to rethink value. Information economics reshaped our understanding of markets. Endogenous growth theory redirected attention toward ideas.

AI may prove to be another such moment - less a single research agenda than a chance to reconsider how we model knowledge, decision‑making, agency and coordination in an economy increasingly co‑produced with intelligent machines.

For researchers, this offers abundant opportunities for fresh discovery. Each tension is also a research frontier. How should production be modelled when technology operates at the level of tasks? What becomes the scarce factor in labour markets when cognition is partially commodified? How should we measure human capital when doing no longer implies knowing? What replaces incentives when agents are software? When advice is abundant, what anchors trust? Under what conditions does revealed preference remain a guide to welfare? If ideas proliferate, what governs selection? And when cognition becomes tradable, what anchors comparative advantage? These questions sit close to the foundations of our discipline.

This exercise should fill economists with optimism. Economics has always been at its best when it held its assumptions lightly. The models we inherit are tools - powerful ones - but their durability depends on our willingness to adapt them to changing reality. If economics is the study of how societies manage scarcity, then an economy in which scarcity shifts demands a discipline willing to follow it.

Ted Evans understood that good economics isn't just descriptive. It is constructive. It helps societies interpret change without being overwhelmed by it.

Artificial intelligence will test many institutions: firms, universities, governments, and labour markets. It will test economics as well. But if the history of the discipline is any guide, periods of structural change are also periods of theoretical renewal.

The foundations of economics are strong. AI asks us to build upon them - carefully, empirically, modestly and with the same intellectual seriousness that figures such as Ted Evans brought to the craft.

Note: My thanks to Ann Evans, Joshua Gans, Richard Holden and other experts for valuable insights in preparing this talk. All errors are mine.

References

AAP (2026), '74 per cent of Asia Pacific Consumers Already Use AI to Shop, But Trust and Transparency Hold the Key to Checkout: Visa Survey', AAP, 10 February.

Acemoglu, D. (2024), 'The Simple Macroeconomics of AI', NBER Working Paper No. 32487, NBER, Cambridge, MA.

Acemoglu, D., Autor, D. and Johnson, S. (2026), 'Building Pro‑Worker Artificial Intelligence', NBER Working Paper No. 34854, NBER, Cambridge, MA.

Aghion, P., Jones, B.F. and Jones, C.I. (2019), 'Artificial Intelligence and Economic Growth', in Agrawal, A., Gans, J. and Goldfarb, A. (eds), The Economics of Artificial Intelligence: An Agenda, University of Chicago Press, Chicago.

Agrawal, A., Gans, J. and Goldfarb, A. (2022), Power and Prediction: The Disruptive Economics of Artificial Intelligence, Harvard Business Review Press, Boston.

Agrawal, A., Gans, J. and Goldfarb, A. (2025), Genius on Demand: The Value of Transformative Artificial Intelligence, unpublished paper, University of Toronto.

Autor, D., Caplin, A., Martin, D.J. and Marx, P. (2025), 'Misaligned by Design: Incentive Failures in Machine Learning', NBER Working Paper No. 34504.

Autor, D., Goldin, C. and Katz, L.F. (2020), 'Extending the Race between Education and Technology', AEA Papers and Proceedings, 110, 347-351.

Autor, D.H., Levy, F. and Murnane, R.J. (2003), 'The Skill Content of Recent Technological Change: An Empirical Exploration', Quarterly Journal of Economics, 118(4), 1279-1333.

Bloom, N., Jones, C.I., Van Reenen, J. and Webb, M. (2020), 'Are Ideas Getting Harder to Find?', American Economic Review, 110(4), 1104-1144.

Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford University Press, Oxford.

Christian, B. (2020), The Alignment Problem: Machine Learning and Human Values, W. W. Norton & Company, New York.

First Page Sage (2026), 'ChatGPT Usage Statistics: February 2026', SEO Blog, updated 5 February.

Gans, J.S. (2025), The Microeconomics of Artificial Intelligence, MIT Press, Cambridge, MA.

Gans, J.S. and Goldfarb, A. (2026), 'O‑Ring Automation', NBER Working Paper No. 34639.

Gethin, A. (2025), 'Distributional Growth Accounting: Education and the Reduction of Global Poverty, 1980-2019', Quarterly Journal of Economics, 140(4), 2571-2618.

Goldin, C. and Katz, L.F. (2008), The Race Between Education and Technology, Harvard University Press, Cambridge, MA.

Kamenica, E. and Gentzkow, M. (2011), 'Bayesian Persuasion', American Economic Review, 101(6), 2590-2615.

Katz, L.F. (2025), 'Beyond the Race Between Education and Technology', paper prepared for the Federal Reserve Bank of Kansas City Economic Policy Symposium, Jackson Hole.

Katz, L.F. and Murphy, K.M. (1992), 'Changes in Relative Wages, 1963-1987: Supply and Demand Factors', Quarterly Journal of Economics, 107(1), 35-78.

MacLeod, W.B. (2025), 'The Economics of Professional Decision‑Making: Can Artificial Intelligence Reduce Decision Uncertainty?', unpublished manuscript, Yale University.

Niloy, A.C., Akter, S., Sultana, N., Sultana, J. and Rahman, S.I.U. (2024), 'Is ChatGPT a Menace for Creative Writing Ability? An Experiment', Journal of Computer Assisted Learning, 40(2), 919-930.

Reimers, I. and Waldfogel, J. (2026), 'AI and the Quantity and Quality of Creative Products: Have LLMs Boosted Creation of Valuable Books?', NBER Working Paper No. 34777, Cambridge, MA.

Romer, P.M. (1990), 'Endogenous Technological Change', Journal of Political Economy, 98(5, Part 2), S71-S102.

Shahidi, P., Rusak, G., Manning, B.S., Fradkin, A. and Horton, J.J. (2026), 'The Coasean Singularity? Demand, Supply, and Market Design with AI Agents', unpublished paper, MIT and Harvard.

Shirky, C. (2025), 'Students Hate Them. Universities Need Them: The Only Real Solution to the AI Cheating Crisis', New York Times, 26 August.

Table 1: Rethinking core economic assumptions in an AI economy
| Domain | Canonical Assumption | AI‑Induced Tension | Likely New Scarcity |
|---|---|---|---|
| Production | Technology raises average productivity (scalar TFP). | AI operates at the task level and reorganises complementarities. | Bottleneck tasks that resist automation. |
| Labour markets | Education proxies scarce cognition and SBTC shapes wage inequality. | Routine cognition becomes partially commodified. | Judgement under uncertainty. |
| Human capital | Output reveals internalised skill; doing implies knowing. | AI weakens the link between performance and learning. | Credible measurement of capability. |
| Contracts | Incentives discipline human agents. | Delegation shifts from humans to software agents. | Goal specification and alignment. |
| Expertise | Information is scarce and bundled with judgement. | Advice scales cheaply; information diffuses. | Accountability and responsibility. |
| Welfare | Preferences are exogenous and revealed by choice. | Preferences are shaped through human-AI interaction. | Reliable inference about welfare. |
| Innovation | Valuable ideas are scarce. | AI expands idea generation. | Selection and evaluative capacity. |
| Trade | Comparative advantage reflects national factor endowments. | Cognition becomes partially tradable via software. | Institutional capacity to combine human and machine intelligence. |