The behavioral economics of strong AI
The latest EconTalk features Robin Hanson on the singularity and AI. The latter part of the discussion focuses on strong AI, achieved through whole brain emulation, and the subsequent collapse in the value of labor. In this scenario, the wage is equal to the cost of renting a machine to do the same task, i.e., very low. Consequently, anyone who does not own capital lives at subsistence as the number of laborer-equivalents increases into the trillions.
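To make the arbitrage behind that claim explicit (a minimal sketch in my own notation, not Hanson's): if an em can be copied at negligible cost, competition among copies pins the wage for any task to the marginal cost of running one more copy,

$$ w \;=\; r_{\text{machine}} \;\approx\; c_{\text{hardware}} + c_{\text{energy}}, $$

so as computing costs fall, the wage $w$ falls with them, no matter how productive the em is.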
I think that Robin gets the straightforward economics of AI exactly right: as certain kinds of intelligence get cheaper, they are worth less on the market. Nevertheless, I think there are behavioral elements of the economics of AI that may complicate this scenario.
A common story in the 1980s and '90s was that, in the future, your travel agent would be replaced by an artificial intelligence. Instead of calling up your human travel agent and discussing your trip with him or her, you would call up a robot and discuss your trip with it. It would make suggestions; you would mull them over, make a decision, and then hang up the phone.
In the last decade, the travel reservations industry has undergone a significant transformation. Instead of calling a travel agent, most people now make their travel reservations through a web browser. This is not just a change in the interface of travel agencies; it is also a change in the type of intelligence behind the interface. Instead of interfacing with a general intelligence (a human travel agent), we are today interfacing with a specialized intelligence (an algorithm for displaying flights).
General intelligence or strong AI is worth much more, ceteris paribus, than specialized intelligence or narrow/weak/applied AI. But can the ceteris paribus condition ever hold? I am skeptical. Take Robin's suggested approach to the development of strong AI, whole brain emulation. If we reproduce a particular human brain in software, each instance of the virtual brain (let's call each emulated instance an "em") will feel like a human being (even after you convince it that it is a copy of some original). Further, its feelings of humanness will be entirely valid, at least to the extent that our feelings of humanness are also valid. The em is just a copy in silicon of what we are in carbon.
The humanness of ems raises costly practical and moral issues relating to their treatment. Practically, we will have to relate to ems in commerce the way we relate to other humans in commerce, which is to say highly inefficiently. One of the problems with a human travel agent is that we have to engage in small talk, be polite, and more generally, to borrow another of Robin’s ideas, “show that we care.” This is necessary if we want the travel agent to do a good job for us. In contrast, if we are dealing with a weak-AI specialized algorithm for selecting flights, we don’t have to show that we care. The algorithm can’t be offended or discouraged.
The human sentience of ems also creates problems of management. The principal-agent problem is not often thought of in behavioral terms, but it is more behavioral than it might seem. In the simplest theory, the principal-agent problem is soluble through perfect monitoring. In a modern office context, you install cameras in every office to ensure that employees aren't wasting time on Facebook. The problem with this solution is that workers would feel bad if you did it. People feel bad when they think they are constantly being evaluated. So would ems. But a narrow AI algorithm does not feel bad when constantly evaluated or managed.
Morally, the rest of us might feel bad if we created a trillion human-like entities, each with hopes and aspirations, and then crushed those aspirations by paying them only subsistence wages. We would feel that ems are entitled to the same or similar moral consideration to which humans are entitled. We might not only need to show that we care; we might actually care.
What a pain it would be to actually care about the welfare of a trillion ems in our employ. As much as we might say how good it is for more morally considerable beings to exist (even if they have lives worth living), there is a part of us that admits, L'enfer, c'est les autres: hell is other people. The burden of their moral claims on us might be worth it if labor were costly or if the ems were also our close friends, but labor will be cheap even if only weak AI is widely used, and there will be too many ems in Robin's scenario for them all to be close friends with all of us.
It’s hard to make firm predictions about future markets when we don’t know how technology will develop, but if very advanced, non-sentient weak AI tools are widely available, I don’t think the demand for strong AI will be as high as Robin thinks. We prefer our commerce to be impersonal; it’s more efficient that way. I don’t want to talk to a travel agent, whether human or robotic. I just want my tickets as cheaply and conveniently as possible.
So while I agree with Robin that the value of labor is likely to decrease substantially in the future, I think this will occur alongside large increases in per capita income, even if “per capita” is broadened to include morally considerable non-biological beings.