Sunday 17 March 2024

The Economics of the AI Revolution

In part two of this three-part series on so-called Artificial Intelligence (AI), our guest poster Per Bylund acknowledges that although AI is arguably not an intelligence—at least not in the sci-fi sense—that does not mean it is unimportant or without implications. The technological advance it represents is nothing short of revolutionary and will have far-reaching consequences for both the economy and society.

The Economics of the AI Revolution

by Per Bylund

In a recent article, we briefly summarised what it is that we today call artificial intelligence (AI). While these technologies are certainly impressive and may even pass the Turing test, they are not beings and have no consciousness. Thus, this is neither the time nor the place to discuss philosophical issues of how to define a true or full AI—an artificial general intelligence—or whether we should recognise AI software legally as a person (after all, corporations are).

Economically speaking, AI as technology, whether used for entertainment or in production, is a good. As Carl Menger taught, what makes something a good is that it has the ability to satisfy a human need, that it is recognised as such, and that a person—the consumer—has or can gain command over it to satisfy that need. In other words, it must be scarce (there is less of it than we could use to satisfy wants) and understood as valuable (because we believe it can satisfy those wants). AI certainly fits these criteria.

[Figure: the economic system's stages of production form a 'production structure', with goods of increasingly higher orders (capital goods) producing consumer goods over time.]

AI as a Consumption Good

When people entertain themselves by "chatting" with an AI (try, for example, Windows Copilot) or generating quirky images using DALL-E, it is a good of the lowest order—a consumption good. As such, the economic consequences are limited to the effect this has on consumer behaviour. But this may in turn have a significant impact on production.

Some consumption goods revolutionise the economy and society. Examples include the automobile (from the introduction of Ford's Model T) and the smartphone (starting with Apple's iPhone). The former disrupted transportation and infrastructure and facilitated just-in-time manufacturing and urban sprawl, to mention just a few effects. The latter changed everything from how we bank to how we travel.

The point here is that as consumer behaviour changes, the production structure follows along. For example, with the broad adoption of the smartphone, paper map production has all but disappeared, while digital location services and intelligent logistics have seen enormous growth and development. And change leads to more change, because entrepreneurs build on, add to, and challenge the new discoveries.

AI has the potential to change consumer behaviour well beyond its designed functionality. Exactly how, and in what ways, remains to be seen. (Then again, many goods have had the potential to disrupt but never left a mark.) For example, we may see people produce their own stories, songs, images, and even movies. So perhaps, instead of relying on television or Netflix and Hollywood producers, we'll make movie night into a make-a-movie night, watching content we have generated ourselves and that fits us perfectly.

AI as a Higher-Order Good

As a tool, and thus a good of a higher order, AI has already had an effect and promises to disrupt several trades. Because it is very effective at producing and presenting content, including translating and editing texts, content-related professions are threatened by AI. This includes journalists and copyeditors, since AI programs can write and edit faster than humans can. After all, anyone can ask an AI to produce or edit a text. Students already use AI to spice up or improve their papers—or let it write them from scratch.

AI is similarly affecting photographers and illustrators. It takes only a minute to have DALL-E produce a new image exactly as directed, or to have an AI algorithm remove or add elements in a picture you snapped. Commissioning an illustrator, by contrast, takes far longer (not to mention the cost).

Programmers and system developers are also seeing the effects of AI, which has no problem either generating new code or checking already written code (though the output still needs review). Legacy software written in dated and inefficient programming languages can be run through an AI to make the code more efficient—or even converted into a modern language.

AI is also affecting academia. Why have an instructor tell students about some subject when AI can do it? After all, AI can easily present content in whatever way the student prefers—for example, making a movie that explains, say, biology or chemistry in an entertaining way. And it can answer all kinds of questions without ever getting bothered or cranky—and it has nowhere else to be. In research, AI can analyse data more effectively and run thousands of different regressions on the data to find something that looks significant and important (so-called HARKing—hypothesising after the results are known—which is very poor research practice, but who will know?). It can write up the paper too, citations and all, in just seconds.
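
To see why that regression-fishing so reliably "finds" results, here is a minimal sketch (my own illustration, not from the article) in which every candidate predictor is pure random noise, yet a search across a thousand of them still turns up dozens of "significant" correlations:

```python
import math
import random

# Illustrative sketch: every candidate predictor below is pure noise,
# yet at the conventional 5% level roughly 5% of them will still look
# "significant" by chance alone.
random.seed(42)

n_obs = 30            # observations per variable
n_predictors = 1000   # candidate explanatory variables, all random noise
outcome = [random.gauss(0, 1) for _ in range(n_obs)]

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Critical |r| for a two-sided test at p < 0.05 with n = 30:
# t_crit ~= 2.048 with 28 degrees of freedom.
t_crit = 2.048
r_crit = t_crit / math.sqrt(t_crit**2 + (n_obs - 2))

false_positives = sum(
    1
    for _ in range(n_predictors)
    if abs(pearson_r([random.gauss(0, 1) for _ in range(n_obs)], outcome)) > r_crit
)

# Expect on the order of 50 "discoveries" out of 1000 pure-noise predictors.
print(f"'Significant' noise predictors: {false_positives} of {n_predictors}")
```

At a 5% significance threshold, roughly one in twenty pure-noise predictors clears the bar by chance, and that is exactly what a motivated search will "discover".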

AI as Production Capital

All of this means AI can and will be used in production. In fact, it already is, and we have only started to see the effects.

AI is best categorised as capital, which is used to make labour more productive (more value output per hour of labour invested) by facilitating more roundabout (but more effective) production structures. Capital goods in general serve one (or both) of two functions: they make existing production processes more effective by increasing productivity, or they make possible types of production that were not previously possible. AI checks both boxes.

We have already seen how people working in several types of content-based professions can easily be made more productive or replaced entirely by AI. It can also do things that people may have been unable to do—or never thought of doing. This of course can cause so-called technological unemployment as people lose their jobs because AI can do them better (and cheaper). But this is a dystopian way of describing something quite normal and highly useful: that we relieve people, with all their ingenuity, from comparatively simple tasks so that they can create much more value elsewhere.

It is of course problematic for any person losing their source of income, but it is highly beneficial to consumers (and therefore to society at large) that these (and other) professions are "creatively destroyed." The economic point of employment is not to provide people with an income so they can pay taxes (although politicians seem to think so) but to produce goods that can satisfy consumer wants—to make our lives better. Just as there have been very few stable boys or buggy-whip makers since the automobile revolution, the future will see fewer people doing news reporting, copyediting, or coding.

Note also that this revolution is not nearly as sudden and disruptive as it may at first seem: the news media, for example, have for many years reduced the number of journalists doing original reporting (most outlets nowadays merely republish wire articles from AP or Reuters). And software development already uses increasingly effective development environments that correct and predict commands, allow WYSIWYG and drag-and-drop development, and can debug code and suggest solutions to bugs.

AI is only another step in this process. But the threat is greatly exaggerated: as Amara's law has it, we tend to overestimate the impact of technology in the short term but underestimate it in the long term.

Limitations to Overcome

There is a problem, however, and it has to do with how large language models work and what responses they generate. When used in a strictly rules-based setting, such as computer programming, an AI's "understanding" of code can greatly improve the productivity of coders (or replace them). And because code must follow formal rules and can be tested against them, errors are comparatively easy to catch—though AI can still introduce bugs, particularly where specifications are incomplete or contradictory.

The same is true of AI's language generation: it draws on large troves of text data and has a good "understanding" of how humans use language. But there is no rules-based way for it to distinguish fact from fiction. Instead, the AI produces what is statistically most likely to sound like a human response. For this reason, it can produce content that is entirely wrong.
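
The point about statistical likelihood can be seen in miniature with a toy bigram model (my own sketch, with made-up training text, and far simpler than a real LLM): it completes a prompt with whatever most often followed those words in its data, regardless of whether the completion is true.

```python
from collections import Counter, defaultdict

# Toy sketch: a model that emits the statistically most likely next word
# has no notion of truth, only of what usually follows what in its data.
corpus = (
    "economics books discuss equilibrium theory . "
    "economics books discuss perfect competition . "
    "economics books discuss mathematical equations . "
    "this primer discusses sound money ."
).split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_continuation(word, length=4):
    """Greedily append the most frequent follower of each word."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# Asked about *any* economics text, the model confidently reproduces the
# most common pattern in its data -- whether or not it is true of that text.
print(most_likely_continuation("books"))
```

Because "books discuss equilibrium ..." is the dominant pattern in the training text, the model asserts it about every book, including one that says no such thing.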

For example, I asked an AI to summarise the content of my 2022 economics primer, How to Think about the Economy. [A highly recommended free book - Ed.] Since it has access to the text, it did a pretty good job of summarising what is in the book. But it also commented on content that is typical of economics books but absent from the primer (such as equilibrium theory, perfect competition, and mathematical equations). The AI is right that economics books typically discuss such things, and thus it is statistically probable that my primer would too. But it doesn't.

There is a difference between statistical probability and truth. We will look at this problem and the potential threat that AI poses to human society in the next article.

=> CONTINUED IN PART THREE: 'Separating Information from Disinformation'
PART ONE: 'Understanding the AI Revolution'
Per Bylund is the Associate Professor of Entrepreneurship and Johnny D. Pope Chair in the School of Entrepreneurship in the Spears School of Business at Oklahoma State University.
He is the author of three full-length books: How to Think about the Economy: A Primer; The Seen, the Unseen, and the Unrealized: How Regulations Affect Our Everyday Lives; and The Problem of Production: A New Theory of the Firm. He has edited The Modern Guide to Austrian Economics and The Next Generation of Austrian Economics: Essays in Honor of Joseph T. Salerno.
His article first appeared at the Mises Institute blog.

1 comment:

Duncan Bayne said...

AI _can_ improve the productivity of coders, but at a cost: it increases code churn and complexity.

It also most certainly _does_ introduce bugs; in extreme cases, hallucinating non-existent libraries and standard library functions. I've experienced the former myself, with an LLM coding tool writing code to consume a Lisp MQTT library that simply didn't exist.

There's also the interesting question of whether an LLM trained on code constitutes a derivative work of that code. If the courts rule that it does, many of the existing LLMs will need re-work, because they've been trained on code released under licenses that constrain derivative works.

The author is correct that LLMs will change many lines of work. But it's a massively over-hyped technology, in a similar way that autonomous cars were overhyped.

In fact, autonomous cars are a great example. Cruise had to employ ~ 1.5 human supervisors per "autonomous" car they produced, to the extent that there's a joke in Indian IT circles that AI stands for "Absent Indians". As with creative LLM tools, they are a great _aid_ to humans, but still require direct human oversight as they're not general AIs.