Bob Gordon released a provocative working paper (ungated) back in August that made quite a splash on the blogs. It is an extreme, more pessimistic version of Tyler Cowen’s The Great Stagnation. Gordon argues—rightly, in my opinion—that economic growth is not automatic. There is no _a priori_ reason to believe that real per capita GDP will grow at 2 percent in the future when it has grown at a rate closer to 0 for most of human history. Maybe the current period is unique—and coming to an end.
The question is worth considering, but in the details of his analysis, there is much that Gordon gets wrong. For example, Gordon looks at growth in the “frontier” economy, the economy that is most advanced in each period. This means the UK from 1300 to 1906 and the US from 1906 to 2007 (where he stops his story to abstract from the financial crisis). When looking at a single wealthy economy, global factor-price equalization that results in lower middle-class wages seems like a bad thing. But of course, these lower wages are the result of higher wages elsewhere—they are wages for poor people who can increasingly contribute to the frontier of innovation as they get wealthier. Limiting the analysis to a frontier national economy seems inappropriate when one of the major global trends is a reduction in the discreteness of national economies.
I have a lot of other complaints—for instance, I wanted to refer Gordon to Noah Smith on global warming—but for the rest of this post, I am going to focus only on one particular issue. Gordon divides our progress over the past 250 years into not one, but three Industrial Revolutions. IR #1 was from 1750 to 1830 and gave us steam power and railroads. IR #2 ran from 1870 to 1900 and yielded electricity, internal combustion, running water, indoor toilets, communications, entertainment, chemicals, and petroleum. IR #3 started in 1960 and gave us computers, the Internet, and mobile phones.
Gordon takes the view—entirely defensible—that IR #2 is the one that is the most important, and that it took about 100 years for its “full effects to percolate through the economy.” But in both his definition and his discussion, he gives short shrift to IR #3.
Gordon writes:

> The computer and Internet revolution (IR #3) began around 1960 and reached its climax in the dot.com era of the late 1990s, but its main impact on productivity has withered away in the past eight years. Many of the inventions that replaced tedious and repetitive clerical labor by computers happened a long time ago, in the 1970s and 1980s. Invention since 2000 has centered on entertainment and communication devices that are smaller, smarter, and more capable, but do not fundamentally change labor productivity or the standard of living in the way that electric light, motor cars, or indoor plumbing changed it.
Later in the paper, he writes,
> Attention in the past decade has focused not on labor-saving innovation, but rather on a succession of entertainment and communication devices that do the same things as we could do before, but now in smaller and more convenient packages. The iPod replaced the CD Walkman; the smartphone replaced the garden-variety “dumb” cellphone with functions that in part replaced desktop and laptop computers; and the iPad provided further competition with traditional personal computers. These innovations were enthusiastically adopted, but they provided new opportunities for consumption on the job and in leisure hours rather than a continuation of the historical tradition of replacing human labor with machines.
I can see how if you’re comparing the advancements of the past few decades to the benefits of indoor plumbing you might come away a little disappointed, and I’m not trying to play IRs 2 and 3 against each other. But I think that Gordon unfairly or unwittingly understates the magnitude of IR #3, because IR #3 has only just begun.
What is IR #3 and where is it going?
Again, Gordon defines IR #3 as the arrival of computers, the Internet, mobile phones, etc. But rather than focusing on the products, let’s _focus on the processes and innovations_ that got us here—computation, miniaturization, packet switching, and so on. These ideas feature prominently in the products that Gordon uses to define IR #3, but they also have much wider conceivable applicability than just those products.
I think we are on the cusp of an important transition _within_ IR #3. So far, we have used these innovations to make ever faster, smaller, and more useful computers, including mobile phones. We have created, as Gordon notes, a whole lot of dot-coms and online services. But we’re already starting to see engineers and companies dabble with new kinds of products. Rather than merely accepting, transforming, relaying, and displaying information, some new computer-based products have more of a physical—really, a kinetic—effect on the world.
The most obvious example of this new kind of kinetic computing is the autonomous car. Rather than simply gathering information and displaying it to the driver, like a GPS mapping system, we are empowering an onboard computer to make decisions about driving. These decisions have consequences, and it is difficult to program a computer to get them right—much harder than, say, inventing Facebook. But despite the difficulty of the problem, we have made a lot of progress in the last decade, and most of us can look forward to one day owning a robotic car or ordering a robotic taxi to come pick us up.
The point is that computing innovation is going to shift, and is already starting to shift, from the virtual to the physical world. The products that IR #3 has brought us so far are great fun, but because they only really _display_ information to us, they leave a lot for us to do. The main benefit of IR #3 is going to arrive when new innovations _make_ and _do_ things for us.
Golden Krishna wrote an excellent blog post recently entitled “The best interface is no interface.” Read the whole thing. The point of the post is that we have not yet done a good job of replacing early computer interface paradigms like WIMP—window, icon, menu, pointer—with natural, unobtrusive, adaptive paradigms. Instead we slap a display on everything and call it progress.
Krishna provides some great examples of the alternative vision, what he calls “No UI,” which include the Auto Tab feature of Pay with Square and the Nest thermostat. What these products and services have in common is that users empower them to make decisions without direct supervision. They require a little human interaction to set up, but from then on, unless something goes wrong, there is no need to _do_ anything to use the product. The product adapts to you, it gets out of the way, and it feels natural.
We are only just now getting to the point where products like these are becoming possible. So far in IR #3, we have mainly trusted computers with information, not with decisions about the physical world. But as computing improves, we are going to automate more.
In _What Technology Wants_, Kevin Kelly writes about the “home motors” you could buy a century ago. The idea was that you would buy a single motor and use it interchangeably in a sewing machine, a mixer, a fan, or an egg beater.
> One hundred years later, the electric motor has seeped into ubiquity and invisibility. There is no longer one home motor in a household; there are dozens of them, and each is nearly invisible. No longer stand-alone devices, motors are now integral parts of many appliances. They actuate our gadgets, acting as the muscles for our artificial selves. They are everywhere. I made an informal census of all the embedded motors I could find in the room I am sitting in while I write:
>
> That’s 20 home motors in one room of my home. A modern factory or office building has thousands. We don’t think about motors. We are unconscious of them, even though we depend on their work. They rarely fail, but they have changed our lives. We aren’t aware of roads and electricity either because they are ubiquitous and usually work. We don’t think of paper and cotton clothing as technology because their reliable presences are everywhere.
Once computer chips become as ubiquitous and invisible as motors, and we get competent enough at using them to empower them to make decisions for us without direct supervision, the result will be something like ambient intelligence. It’s hard to predict what people will use AmI for, but it certainly feels to me like a much bigger advance than Angry Birds and Facebook. We’re probably a decade or two away from high-quality ambient intelligence, but given its reliance on the innovations generated in IR #3, AmI should be counted as an IR #3 innovation when it arrives.
Gordon writes:

> The audacious idea that economic growth was a one-time-only event has no better illustration than transport speed. Until 1830 the speed of passenger and freight traffic was limited by that of “the hoof and the sail” and increased steadily until the introduction of the Boeing 707 in 1958. Since then there has been no change in speed at all and in fact airplanes fly slower now than in 1958 because of the need to conserve fuel.
Gordon is right that travel speeds have not increased much in recent decades. If you had told me in the 1980s that by 2012 I would still never have traveled faster than sound (relative to the Earth), I would have been very disappointed. And while some interesting technologies are in the pipeline—scramjets, spaceplanes, and so on—it will be a while before these are commercialized.
But in the meantime, the _efficiency_ of transporting people and goods could explode in the near future. Gordon is well aware of autonomous cars, so I won’t belabor the point, but it seems obvious to me that a morning commute during which I am able to productively get started on my day is almost like no commute at all. An evening commute during which I am able to relax and unwind is almost like no commute at all. If we calculate _effective_ speed by dividing travel distance by _wasted_ time, then technologies like autonomous vehicles and to a lesser extent in-flight Wi-Fi are starting to make up for some of the stagnation in proper transport speed.
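The effective-speed idea above can be sketched with a toy calculation (the commute distances and productive-time figures below are hypothetical, chosen only for illustration):

```python
def effective_speed(distance_km, travel_hours, productive_hours):
    """Effective speed = distance divided by *wasted* (non-productive) time."""
    wasted_hours = travel_hours - productive_hours
    if wasted_hours <= 0:
        # No wasted time at all: the commute is effectively "free."
        return float("inf")
    return distance_km / wasted_hours

# A hypothetical 30 km commute that takes one hour:
print(effective_speed(30, 1.0, 0.0))   # ordinary driving, nothing productive: 30 km/h
print(effective_speed(30, 1.0, 0.75))  # autonomous car, 45 productive minutes: 120 km/h
```

The proper speed never changes, but reclaiming three quarters of the hour quadruples the effective speed, which is the sense in which autonomous vehicles and in-flight Wi-Fi compensate for stagnant travel speeds.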
I have already written about how revolutionary commercial drones are likely to be. Local deliveries will be made by robotic quadrocopters instead of by humans, and FedEx will switch to blended-wing unmanned cargo freighters that will reduce the cost of long-range goods transport by a factor of five, making air transport competitive with (only about twice as expensive as) ocean transport. A key point about the quadrocopter revolution is that it needed the iPhone market to get started.
Commercial drones face some regulatory hurdles, but assuming these can be overcome, they will be an important contribution of IR #3.
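The freight-cost claim above implies a simple ratio: if a factor-of-five reduction leaves air transport at roughly twice the cost of ocean transport, air freight must cost roughly ten times ocean freight today. A back-of-the-envelope sketch (the starting ratio is inferred from the text, not a measured figure):

```python
# Illustrative ratios only, inferred from the text rather than measured data.
post_reduction_ratio = 2.0                # air ~2x ocean after the change (from the text)
cost_reduction_factor = 5.0               # the cited factor-of-five reduction
implied_air_vs_ocean_today = post_reduction_ratio * cost_reduction_factor

print(implied_air_vs_ocean_today)                          # 10.0: air ~10x ocean today
print(implied_air_vs_ocean_today / cost_reduction_factor)  # 2.0: air ~2x ocean afterwards
```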
Traditional printers have a kinetic effect on the world—they put ink to paper—but only incidentally. We value them for the informational content of the printed page, not for the physical structure of the object that comes out of the printer. 3D printing is not _that_ different from traditional printing, but its impact is likely to be much larger. It is another element of IR #3 that is still in development.
When I got a chance to see a 3D printer in person earlier this year, I was underwhelmed. There is still very little that consumer 3D printers can produce that I would actually want. But future generations of printers will almost certainly be much more useful as they become able to print in a wider array of materials.
In particular, I am excited to see chemical printers. People will be able to make their own drugs—both medical and recreational. This may sound dangerous, and perhaps it will be. But with the adoption of quantum computing we will be able to simulate chemical reactions in advance, something that we still cannot do efficiently with classical computers. Such simulation will greatly improve the feasibility of moving quickly to human trials on new drugs, including self-experimentation. The combination of quantum simulation and chemical printing could lead to a golden age of pharmaceutical discovery.
Relatedly, synthetic biology is another area where we seem to be observing rapid progress. I am woefully ignorant about synthetic biology—I am ashamed of this and will remedy it soon—so I should probably not be making very strong claims. But it seems important to mention that few if any of the advances in this field would have been possible without computers or prior research that has made heavy use of computers. Consequently, these advances are attributable to IR #3.
Total educational spending in the United States is something like 7 percent of GDP (5.5% of GDP is public expenditure, I believe around 1.5% or so is private expenditure). And the quality of education for anybody but the best or richest students is not especially good—the US routinely posts middling scores in international comparisons for primary and secondary education. Even at the college level, where the US excels, a lot of students are being underserved, often because they need remedial help.
We are still using a medieval technology, the lecture, to educate our students. But increasingly entrepreneurs—both for- and non-profit—are looking for better ways of teaching. Many of the new crop of online educational institutions, such as Khan Academy, Udacity, and Marginal Revolution University, are completely free.
People are still experimenting with educational models (and business models), but education that leverages new technologies has several advantages over the old classroom model. For example, in what is known as “flipping the classroom,” students can watch lectures for homework, and do problem sets in class, where they can get help from teachers. The quality of teaching can be higher because _everybody_ can be taught by the very best teachers. And separating the teaching component of school from the coaching and supervision component of school means lower costs and greater specialization, including jobs for people who are not good at _teaching_ but who are nevertheless good at working with kids. At least until the robots can do that too.
Gordon argues that we got a one-time economic boost from educating more people, but now educational achievement has plateaued and we can no longer rely on more education as a source of economic growth. But this seems like a narrow perspective to me. The quality of education certainly has a lot of room for improvement, as does the cost. If we let computers help us teach, we can improve on both of these margins.
While it remains to be seen what the ultimately successful models of online education will be, it would be surprising to me if there is not a major change in the educational industry in the next couple decades. And when that change comes, I bet it will be due to IR #3.
A new phase of IR #3
I’ve tried to review a number of emerging technologies that are likely to transform our daily lives: how we transport people and goods, how we make things, how we care for our health, and how we educate. Obviously this is an incomplete list; see Wikipedia for more.
There is still a lot of oomph in IR #3. All of the technologies that I have described are in development, and all of them owe their existence to digital computing. Some of them may founder, and some different technologies may turn out to be more important. But it is a big mistake to think that the world of computing can remain separate from the rest of the world for long. Computing started out set apart because it is safer that way: if your browser crashes or your web server goes down, the external consequences are small.
Experience and practice in the safe virtual world are leading to a greater desire and capability to extend these technologies to the physical world. It has taken 50 years, but we are now on the cusp of these changes. The remaining question is whether we will welcome them or try to smother them with regulations and arguments over the transitional gains. The best way out of the Great Stagnation is to eagerly embrace and support the new technologies. But they may be coming whether we want them or not, and that is why I am a long-run growth optimist.