Dear Amazon: Convert My Dead-Tree Library to Kindle Books

I have too many books. It’s a first-world problem, I know, and I should probably accept that I am not going to re-read many of them and sell or give them away to a good home. But I am unlikely to do this. In the meantime, my floor-to-ceiling bookshelves overflow, taking up valuable square footage in my modest townhouse.

Amazon, you can solve this problem. Here’s how.

You already have a strong partnership with UPS, which you use for shipping. Make another deal with them. There are UPS Stores all around the country. If I bring a dead-tree book to any UPS Store, they should recycle the book for me and give me a credit for the Kindle version of that same book. The cost of handling or recycling the book can be split between me and Amazon.

This is a win-win-win-win proposition.

I win because I have fewer physical books cluttering up my house, while retaining access to my library.

Amazon wins because more consumers will have large Kindle libraries. This will create an incentive to make future purchases in the Kindle ecosystem.

Book publishers win because when used books are recycled, the market for used books shrinks. Physical books are durable and resalable; converting them to Kindle books solves publishers' durable goods problem and raises their profits, because more new copies get sold.

UPS wins because they get a small fee-per-book that comes out of the gains to the other parties.

When I talk about this idea, I find that the main objection I get is an emotional one: “Isn’t it wasteful to destroy used books?” people ask. And the answer is not really. No information is destroyed by recycling the book, because Kindle books are a pretty good substitute. And if the book were not destroyed, then the publishers would never go for the deal, and we would be stuck in a more wasteful situation, one in which a significant fraction of real estate goes toward book storage.

Amazon, you started the ebook revolution. Now take it to the next level by helping everyone complete the transition.

Price Discrimination Enables New Products and Services to Exist

A common sentiment that I encounter in the tech policy world is a visceral opposition to price discrimination. This is odd to me, because as an economist, I know that price discrimination often leads to more efficient outcomes. One particular element of this added efficiency is that when fixed costs are present, price discrimination allows products and services to be profitable that would not be profitable under standard pricing. This means that if we were to ban price discrimination, we would not get these products at all.

The tech world is filled with lots of smart people who understand math, so for this post, I am going to try to make the case with algebra and a wee bit of calculus. If you can follow along, great; if not, I'll catch you in some other post. Let's assume that demand for some product is given by:

$Q = 1 - P$

$Q$ is quantity and $P$ is price. The results of this exercise translate directly to any linear demand function, and the qualitative conclusions carry over to demand functions more generally, so why not make it easy on ourselves?

Let’s assume that firms have a fixed cost $F$ and a marginal cost $C$. Firms’ total costs are:

$F + QC$

Total revenue for the firm is just price times quantity, so it is equal to $QP$. If we are concerned that this market might not be served at all, then it is useful to look at the monopoly case, where market $P$ and $Q$ equal firm $P$ and $Q$. In this context, we can substitute $1 - Q$ for $P$; therefore, total revenue is equal to $Q(1-Q)$.

Total profit is simply total revenue minus total costs. Therefore:

$\pi = Q(1-Q) - F - QC$

What prices and quantities maximize profit? To calculate this, we can take a partial derivative of profit with respect to $Q$ and set it to zero. This condition will hold where profit is maximized.

$\dfrac{\partial\pi}{\partial Q} = 0 = (1 - Q) - Q - C = 1 - 2Q - C$

Solving for $Q$,

$Q = \dfrac{(1-C)}{2}$

Plugging this expression for $Q$ into the demand function lets us solve for $P$:

$P = 1 - \dfrac{(1-C)}{2}$

$P = \dfrac{(1+C)}{2}$

This is the profit-maximizing $P$ and $Q$ for a monopolist in this market in terms of $C$. Note that $F$ drops out. The profit-maximizing values don’t depend on $F$.
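As a quick sanity check on the derivation (my addition, not part of the original post), a brute-force grid search in Python recovers the same optimum, including the fact that $F$ drops out:

```python
# Numerical check of the monopoly optimum under linear demand P = 1 - Q.
# Profit: pi(Q) = Q*(1 - Q) - F - Q*C.
# The closed form above says Q* = (1 - C)/2 and P* = (1 + C)/2, independent of F.

def profit(q, c, f):
    """Monopoly profit with linear demand P = 1 - Q."""
    return q * (1 - q) - f - q * c

def argmax_q(c, f, steps=100_000):
    """Brute-force the profit-maximizing quantity on a fine grid over [0, 1]."""
    best_q, best_pi = 0.0, float("-inf")
    for i in range(steps + 1):
        q = i / steps
        pi = profit(q, c, f)
        if pi > best_pi:
            best_q, best_pi = q, pi
    return best_q

for c in (0.0, 0.2, 0.5):
    q_star = argmax_q(c, f=0.05)
    p_star = 1 - q_star
    # Grid argmax matches (1 - C)/2; implied price matches (1 + C)/2.
    print(c, round(q_star, 3), round(p_star, 3))
```

Changing `f` shifts the level of profit but, as the algebra predicts, never moves the argmax.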

What does depend on $F$, however, is whether the firm is earning enough at these values of $P$ and $Q$ to stay in business. In particular, profit needs to be zero or positive for the firm not to shut down.

$\pi = Q(1-Q) - F - QC \geq 0$

Substituting $\frac{(1-C)}{2}$ for $Q$:

$\frac{(1-C)}{2}(1-\frac{(1-C)}{2}) - F - \frac{(1-C)}{2}C \geq 0$

Gathering terms:

$\frac{(1-C)}{2}(1 - \frac{(1-C)}{2} - C) \geq F$

Simplifying:

$\dfrac{(1-C)}{2}\cdot\dfrac{(1-C)}{2} \geq F$

$\dfrac{(1-C)^2}{4} \geq F$

So without price discrimination, this market will be served if and only if $F \leq \frac{(1-C)^2}{4}$. If we want to plug in some numbers, assume that $C = 0$; in this case the market will be served only if $F \leq 0.25$.

Want to try it with price discrimination now?

With marginal cost equal to $C$, a monopolist would produce $1 - C$ units. Assuming that the monopolist is able to charge each consumer the maximum they are willing to pay, then profit can be expressed like this:

$\pi = \int^{1-C}_0 (1 - Q - C) dQ - F$

Computing the integral:

$\pi = [Q - \dfrac{Q^2}{2} - CQ]_0^{1-C} - F$

This is equal to:

$\pi = (1 - C) - \dfrac{(1 - C)^2}{2} - C(1 - C) - F$

$\pi = \dfrac{(1 - C)^2}{2} - F$

Since profits must be non-negative for the firm to stay in business:

$\dfrac{(1 - C)^2}{2} - F \geq 0$

or

$\dfrac{(1 - C)^2}{2} \geq F$

So this market, with perfect price discrimination, will be served if $F \leq \frac{(1 - C)^2}{2}$. This means that the fixed cost can be twice as high (with linear demand) and the product or service will still be provided. If we want to plug in $C = 0$, then the market will be served as long as $F \leq 0.5$.
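The factor-of-two result is easy to double-check numerically. Here is a sketch of mine (not from the post) that approximates the perfect-discrimination profit with a Riemann sum and compares it to the uniform-price maximum:

```python
# Break-even fixed costs with and without price discrimination,
# under linear demand P = 1 - Q and constant marginal cost c.

def uniform_max_gross_profit(c):
    """Best uniform-price gross profit (before fixed cost): (1 - c)^2 / 4."""
    return (1 - c) ** 2 / 4

def discrim_gross_profit(c, steps=100_000):
    """Perfect discrimination captures all surplus above marginal cost:
    the integral of (1 - q - c) dq from 0 to 1 - c, here approximated
    by a midpoint Riemann sum (exact for a linear integrand)."""
    total, width = 0.0, (1 - c) / steps
    for i in range(steps):
        q = (i + 0.5) * width
        total += (1 - q - c) * width
    return total

for c in (0.0, 0.3):
    f_uniform = uniform_max_gross_profit(c)
    f_discrim = discrim_gross_profit(c)
    # The largest viable fixed cost doubles under perfect discrimination.
    print(c, round(f_uniform, 4), round(f_discrim, 4), round(f_discrim / f_uniform, 3))
```

With $C = 0$ this reproduces the thresholds in the text: $F \leq 0.25$ without discrimination, $F \leq 0.5$ with it.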

Why does this matter in the tech world? Because a lot of tech products and services have very high fixed costs. Building out wired and wireless broadband networks, for instance, is extremely costly. Marginal costs are often relatively low.

If we want to reap the benefits of new and innovative tech products, we must be prepared to accept price discrimination at least some of the time. There are products that are viable with price discrimination that are not viable without it—and if we ban price discrimination, as some people thoughtlessly advocate, we won't get them.

The Third Industrial Revolution Has Only Just Begun

Bob Gordon released a provocative working paper (ungated) back in August that made quite a splash on the blogs. It is an extreme, more pessimistic version of Tyler Cowen’s The Great Stagnation. Gordon argues—rightly, in my opinion—that economic growth is not automatic. There is no a priori reason to believe that real per capita GDP will grow at 2 percent in the future when it has grown at a rate closer to 0 for most of human history. Maybe the current period is unique—and coming to an end.

The question is worth considering, but in the details of his analysis, there is much that Gordon gets wrong. For example, Gordon looks at growth in the “frontier” economy, the economy that is most advanced in each period. This means the UK from 1300 to 1906 and the US from 1906 to 2007 (where he stops his story to abstract from the financial crisis). When looking at a single wealthy economy, global factor-price equalization that results in lower middle-class wages seems like a bad thing. But of course, these lower wages are the result of higher wages elsewhere—they are wages for poor people who can increasingly contribute to the frontier of innovation as they get wealthier. Limiting the analysis to a frontier national economy seems inappropriate when one of the major global trends is a reduction in the discreteness of national economies.

I have a lot of other complaints—for instance, I wanted to refer Gordon to Noah Smith on global warming—but for the rest of this post, I am going to focus only on one particular issue. Gordon divides our progress over the past 250 years into not one, but three Industrial Revolutions. IR #1 was from 1750 to 1830 and gave us steam power and railroads. IR #2 ran from 1870 to 1900 and yielded electricity, internal combustion, running water, indoor toilets, communications, entertainment, chemicals, and petroleum. IR #3 started in 1960 and gave us computers, the Internet, and mobile phones.

Gordon takes the view—entirely defensible—that IR #2 is the one that is the most important, and that it took about 100 years for its “full effects to percolate through the economy.” But both in his definition and discussion, he gives short shrift to IR #3.

The computer and Internet revolution (IR #3) began around 1960 and reached its climax in the dot.com era of the late 1990s, but its main impact on productivity has withered away in the past eight years. Many of the inventions that replaced tedious and repetitive clerical labor by computers happened a long time ago, in the 1970s and 1980s. Invention since 2000 has centered on entertainment and communication devices that are smaller, smarter, and more capable, but do not fundamentally change labor productivity or the standard of living in the way that electric light, motor cars, or indoor plumbing changed it.

Later in the paper, he writes,

Attention in the past decade has focused not on labor-saving innovation, but rather on a succession of entertainment and communication devices that do the same things as we could do before, but now in smaller and more convenient packages. The iPod replaced the CD Walkman; the smartphone replaced the garden-variety “dumb” cellphone with functions that in part replaced desktop and laptop computers; and the iPad provided further competition with traditional personal computers. These innovations were enthusiastically adopted, but they provided new opportunities for consumption on the job and in leisure hours rather than a continuation of the historical tradition of replacing human labor with machines.

I can see how if you’re comparing the advancements of the past few decades to the benefits of indoor plumbing you might come away a little disappointed, and I’m not trying to play IRs 2 and 3 against each other. But I think that Gordon unfairly or unwittingly understates the magnitude of IR #3, because IR #3 has only just begun.

What is IR #3 and where is it going?

Again, Gordon defines IR #3 as the arrival of computers, the Internet, mobile phones, etc. But rather than focusing on the products, let’s focus on the processes and innovations that got us here—computation, miniaturization, packet switching, and so on. These ideas feature prominently in the products that Gordon uses to define IR #3, but they also have much wider conceivable applicability than just those products.

I think we are on the cusp of an important transition within IR #3. So far, we have used these innovations to make ever faster, smaller, and more useful computers, including mobile phones. We have created, as Gordon notes, a whole lot of dot-coms and online services. But we’re already starting to see engineers and companies dabble with new kinds of products. Rather than merely accepting, transforming, relaying, and displaying information, some new computer-based products have more of a physical—really, a kinetic—effect on the world.

The most obvious example of this new kind of kinetic computing is the autonomous car. Rather than simply gathering information and displaying it to the driver, like a GPS mapping system, we are empowering an onboard computer to make decisions about driving. These decisions have consequences, and it is difficult to program a computer to get them right—much harder than, say, inventing Facebook. But despite the difficulty of the problem, we have made a lot of progress in the last decade, and most of us can look forward to one day owning a robotic car or ordering a robotic taxi to come pick us up.

The point is that computing innovation is going to shift, and is already starting to shift, from the virtual to the physical world. The products that IR #3 has brought us so far are great fun, but because they only really display information to us, they leave a lot for us to do. The main benefit of IR #3 is going to arrive when new innovations make and do things for us.

Ambient computing

Golden Krishna wrote an excellent blog post recently entitled “The best interface is no interface.” Read the whole thing. The point of the post is that we have not yet done a good job of replacing early computer interface paradigms like WIMP—window, icon, menu, pointer—with natural, unobtrusive, adaptive paradigms. Instead we slap a display on everything and call it progress.

Krishna provides some great examples of the alternative vision, what he calls “No UI,” which include the Auto Tab feature of Pay with Square and Nest. What these products and services have in common is that users empower them to make decisions without direct supervision. They require a little human interaction to set up, but from then on, unless something goes wrong, there is no need to do anything to use the product. The product adapts to you, it gets out of the way, and it feels natural.

We are only just now getting to the point where products like these are becoming possible. So far in IR #3, we have mainly trusted computers with information, not with decisions about the physical world. But as computing improves, we are going to automate more.

In What Technology Wants, Kevin Kelly writes about the “home motors” you could buy a century ago. The idea was that you would buy a single motor for interchangeable use in a sewing machine, a mixer, a fan, or an egg beater.

One hundred years later, the electric motor has seeped into ubiquity and invisibility. There is no longer one home motor in a household; there are dozens of them, and each is nearly invisible. No longer stand-alone devices, motors are now integral parts of many appliances. They actuate our gadgets, acting as the muscles for our artificial selves. They are everywhere. I made an informal census of all the embedded motors I could find in the room I am sitting in while I write:

[...]

That’s 20 home motors in one room of my home. A modern factory or office building has thousands. We don’t think about motors. We are unconscious of them, even though we depend on their work. They rarely fail, but they have changed our lives. We aren’t aware of roads and electricity either because they are ubiquitous and usually work. We don’t think of paper and cotton clothing as technology because their reliable presences are everywhere.

Once computer chips become as ubiquitous and invisible as motors, and we get competent enough at using them to empower them to make decisions for us without direct supervision, the result will be something like ambient intelligence. It’s hard to predict what people will use AmI for, but it certainly feels to me like a much bigger advance than Angry Birds and Facebook. We’re probably a decade or two away from high-quality ambient intelligence, but given its reliance on the innovations generated by IR #3, AmI should be counted as an IR #3 innovation when it arrives.

Transport efficiency

The audacious idea that economic growth was a one-time-only event has no better illustration than transport speed. Until 1830 the speed of passenger and freight traffic was limited by that of “the hoof and the sail” and increased steadily until the introduction of the Boeing 707 in 1958. Since then there has been no change in speed at all and in fact airplanes fly slower now than in 1958 because of the need to conserve fuel.

Gordon is right that travel speeds have not increased much in recent decades. If you had told me in the 1980s that by 2012 I would still never have traveled faster than sound (relative to the Earth), I would have been very disappointed. And while some interesting technologies are in the pipeline—scramjets, spaceplanes, and so on—it will be a while before these are commercialized.

But in the meantime, the efficiency of transporting people and goods could explode in the near future. Gordon is well aware of autonomous cars, so I won’t belabor the point, but it seems obvious to me that a morning commute during which I am able to productively get started on my day is almost like no commute at all. An evening commute during which I am able to relax and unwind is almost like no commute at all. If we calculate effective speed by dividing travel distance by wasted time, then technologies like autonomous vehicles and to a lesser extent in-flight Wi-Fi are starting to make up for some of the stagnation in proper transport speed.
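To make the effective-speed idea concrete, here is a toy calculation of my own, with invented numbers (a 30-mile, hour-long commute); none of these figures come from the post:

```python
# "Effective speed" = travel distance divided by time actually wasted on travel.

def effective_speed_mph(distance_miles, travel_minutes, productive_minutes):
    """Speed computed against time lost to travel, not total travel time."""
    wasted_hours = (travel_minutes - productive_minutes) / 60
    if wasted_hours <= 0:
        # A fully productive commute wastes no time: effectively no commute.
        return float("inf")
    return distance_miles / wasted_hours

# Ordinary car: all 60 minutes wasted -> effective speed equals actual speed, 30 mph.
print(effective_speed_mph(30, 60, 0))
# Autonomous car where 45 of 60 minutes are productive -> effective speed 120 mph.
print(effective_speed_mph(30, 60, 45))
```

On this measure, an autonomous vehicle that makes most of the commute productive quadruples effective speed without the car moving any faster.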

I have already written about how revolutionary commercial drones are likely to be. Local deliveries will be made by robotic quadrocopters instead of by humans, and FedEx will switch to blended-wing unmanned cargo freighters that will reduce the cost of long-range goods transport by a factor of five, making air transport competitive with (only about twice as expensive as) ocean transport. A key point about the quadrocopter revolution is that it needed the iPhone market to get started:

As Dan Shapiro notes, “A single high-quality gyro used to go for a thousand bucks.  Now, you can get 3 gyros, 3 accelerometers, and a nice CPU to manage the whole thing for under a sawbuck.”

Commercial drones face some regulatory hurdles, but assuming these can be overcome, they will be an important contribution of IR #3.

Matter compilers

Traditional printers technically have a kinetic effect on the world—they put ink to paper—but we value them for the informational quality of the printed product, not for the physical structure of the object that comes out of the printer. 3D printing is not that different from traditional printing, but its impact is likely to be much larger. It is another element of IR #3 that is still in development.

When I got a chance to see a 3D printer in person earlier this year, I was underwhelmed. There is still very little that consumer 3D printers can produce that I would actually want. But future generations of printers will almost certainly be much more useful as they become able to print in a wider array of materials.

In particular, I am excited to see chemical printers. People will be able to make their own drugs—both medical and recreational. This may sound dangerous, and perhaps it will be. But with the adoption of quantum computing we will be able to simulate chemical reactions in advance, something that we still cannot do efficiently with classical computers. Such simulation will greatly improve the feasibility of moving quickly to human trials on new drugs, including self-experimentation. The combination of quantum simulation and chemical printing could lead to a golden age of pharmaceutical discovery.

Synthetic biology

Relatedly, synthetic biology is another area where we seem to be observing rapid progress. I am woefully ignorant about synthetic biology—I am ashamed of this and will remedy it soon—so I should probably not be making very strong claims. But it seems important to mention that few if any of the advances in this field would have been possible without computers or prior research that has made heavy use of computers. Consequently, these advances are attributable to IR #3.

Online education

Total educational spending in the United States is something like 7 percent of GDP (about 5.5% of GDP is public expenditure and, I believe, around 1.5% is private). And the quality of education for anybody but the best or richest students is not especially good—the US routinely posts middling scores in international comparisons for primary and secondary education. Even at the college level, where the US excels, a lot of students are being underserved, often because they need remedial help.

We are still using a medieval technology, the lecture, to educate our students. But increasingly entrepreneurs—both for- and non-profit—are looking for better ways of teaching. Many of the new crop of online educational institutions, such as Khan Academy, Udacity, and Marginal Revolution University, are completely free.

People are still experimenting with educational models (and business models), but education that leverages new technologies has several advantages over the old classroom model. For example, in what is known as “flipping the classroom,” students can watch lectures for homework, and do problem sets in class, where they can get help from teachers. The quality of teaching can be higher because everybody can be taught by the very best teachers. And separating the teaching component of school from the coaching and supervision component of school means lower costs and greater specialization, including jobs for people who are not good at teaching but who are nevertheless good at working with kids. At least until the robots can do that too.

Gordon argues that we got a one-time economic boost from educating more people, but now educational achievement has plateaued and we can no longer rely on more education as a source of economic growth. But this seems like a narrow perspective to me. The quality of education certainly has a lot of room for improvement, as does the cost. If we let computers help us teach, we can improve on both of these margins.

While it remains to be seen what the ultimately successful models of online education will be, it would be surprising to me if there is not a major change in the educational industry in the next couple decades. And when that change comes, I bet it will be due to IR #3.

A new phase of IR #3

I’ve tried to review a number of emerging technologies that are likely to transform our daily lives, how we transport people and goods, how we make stuff, our health, and our educational system. Obviously this is an incomplete list; see Wikipedia for more.

There is still a lot of oomph in IR #3. All of the technologies that I have described are in development, and all of them owe their existence to digital computing. Some of them may founder, and some different technologies may turn out to be more important. But it is a big mistake to think that the world of computing can remain separate from the rest of the world for long. Computing started out set apart because it is safer that way—if your browser crashed or your web server goes down, there are not very large external consequences.

Experience and practice in the safe virtual world are leading to a greater desire and capability to extend these technologies to the physical world. It has taken 50 years, but we are now on the cusp of these changes. The remaining question is whether we will welcome them or try to smother them with regulations and arguments over the transitional gains. The best way out of the Great Stagnation is to eagerly embrace and support the new technologies. But they may be coming whether we want them or not, and that is why I am a long-run growth optimist.

Replies to My Critics

Last week, I argued that the short run is short—that there is good reason to believe that we’re now past the point where monetary stimulus can do much to help the economy. Again, I am broadly friendly to market monetarism and not especially hawkish on inflation. I am not so much against QE3 as skeptical that it will work. I think that the broad facts and a lot of mainstream macro theory back me up.

My post garnered a fair bit of criticism around the blogosphere. Let me make one quick empirical point to get everyone on the same page, and then I will try to respond to my critics point by point.

The empirical point is summed up in the graph below. NGDP grew around 5 percent per year until around 2008, and then it fell, and then it grew at around 5 percent—or slightly less—per year again beginning in mid 2009. These facts are well known, but I bring them up here because they do constrain the kind of stories we can tell about the economy. Any story you tell has to contain a one-time shock that ended years ago, and it has to be consistent with NGDP that has grown at about the same rate over the last 3 years as it did before the shock arrived.

OK, now with that out of the way, let’s take the criticisms one by one.

Bryan cites Akerlof, Dickens, and Perry on long-run unemployment as a reason why QE3 might boost employment in spite of the fact that we are out of what we would conventionally call the short run. The ADP model assumes heterogeneous firms and workers with money illusion. At any given time, some firms need to cut real wages, and since nominal cuts hurt morale, higher inflation helps those particular firms cut wages instead of jobs. Consequently, in a low-inflation environment, monetary stimulus can help lubricate the employment market.

This argument is a good one as far as it goes. Unfortunately, I don’t think it goes very far given the stylized facts. As I noted above, NGDP is growing at a rate of 4-5 percent per year, not that different from before the crash. So any long-run ADP-style unemployment should be about the same now as it was before the crash unless there was a structural change in the economy. You can’t have it both ways—if we’re in a low-inflation environment for ADP purposes now, then we were in a low-inflation environment for ADP purposes before the crash as well.

Furthermore, assuming QE3 is a temporary policy, then if unemployment is long-term ADP unemployment, the effect of QE3 on unemployment will be temporary. I would regard a temporary dip in unemployment as a result of QE3 as good but underwhelming, given the claims of many market monetarists. There may of course be interactions between short-run unemployment and ADP unemployment, and for that reason, the dip in unemployment may not literally be temporary, but I would be surprised if QE3 could fix the economy through this channel.

Bryan makes an interesting linkage between my views on the ZMP hypothesis and ADP unemployment. If there is a decreasing secular trend of low-skill labor productivity, then ADP unemployment will become more serious over time. I think this is a good point, and it pushes me at the margin to favor a higher long-run NGDP target than I otherwise would. I was previously inclined to believe that the exact value of the target doesn’t matter once you get to levels of around 3 percent, but now I see more merit in a higher target.

Insider-outsider models

Bryan and some of the commenters at MR say that it is a mistake to focus on the wage demands of the unemployed. Rather, it is the wage demands of the employed that are especially sticky. The failure of insider wages to adjust downward to long-run levels means that there’s no ability to hire outsiders at below long-run levels, either because companies can’t afford to do it or because they are afraid of hurting insider morale.

The problem is that even if this story is true, we are probably, again, out of the short run. NGDP is almost 10 percent higher now than it was at the pre-crash peak. The number of people employed, even with population growth, is still below the pre-crash peak. Even assuming that insider nominal wages are totally inflexible, nominal output per worker has grown fast enough that insider real wages have probably adjusted. Furthermore, in five years, a non-trivial fraction of insiders retire or change jobs.

More generally, I’ve never been a fan of insider-outsider models, at least not for the United States in recent times. Maybe it makes sense as a model of Europe or Detroit in the union heyday. But today in the US, “labor” is less homogeneous than ever, private sector unions have declined, and fewer workers have an expectation of lifetime employment. Yet the past three recoveries have been increasingly jobless! How do you square the fact that labor hoarding has basically ended and labor market adjustment has become more difficult at a time when the insider-outsider distinction is weaker than ever? I do it by assuming that the insider-outsider mechanism does not play that big of a role.

But again, even if the insider-outsider story was true at the beginning of the recession, there is little reason to believe that it is still true.

Ryan Avent and corporate profits

At the Economist, Ryan Avent focuses on my point that corporate profits are at record highs.

Firms could be enjoying high profits simply because revenues have stabilised while costs are low, perhaps because low expectations for future nominal spending growth have discouraged investment.

First, note that in the series I cited, corporate profits are adjusted for inventory valuation and capital consumption. The purpose of these adjustments is to make the series less responsive to exactly the kind of behavior Ryan posits. If firms decide not to invest in production and simply sell out of inventory instead, that can increase profits, but it doesn’t increase profits adjusted for inventory valuation. Likewise, a firm can temporarily increase profits by making inefficient use of existing equipment, which could lead to faster depreciation. Are these adjustments perfect? No. But they do offset some of Ryan’s concerns. Corporate profits are high even when you subtract some of the temporary gains firms get from not investing. The unadjusted series is here, by the way; I avoided it because I anticipated Ryan’s argument.

Second, whatever firms’ expectations were, as I’ve said repeatedly, nominal spending growth has not been especially low in the last three years. A better story, if you are trying to resist structural theories, might be that firms are wary of investing due to fears of shocks from Europe or Asia, which monetary easing now does little to help. It would be great if the Fed would commit now to keeping NGDP growing at 4-5 percent when those shocks do hit, but in the meantime, I am not expecting a lot out of QE3.

Ryan also makes a couple of other points, but none of them cut to the heart of my critique of QE3 optimism. He gestures to the New Keynesian literature, but of course even the New Keynesians don’t argue that the short run lasts forever. And Mankiw, who is one of the authors Ryan cites, is a well-known proponent of the unit root hypothesis. I do not read Mankiw as expecting a return to trend, no matter what monetary policy is, although I of course do not speak for him and am happy to be corrected. Ryan also quotes Weitzman on how increasing returns create unemployment, which is true, but tautologous: if there were no increasing returns, anyone who was unemployed could start his own firm and be just as productive as when he was employed.

Bill Woolsey

Bill Woolsey cordially welcomes me, despite my heterodoxy, to the market monetarist club. I am glad to make the cut.

I think that I failed to make myself clear in my original post. Bill says, “Dourado’s version of how shifts in nominal GDP impact real output and employment is based upon an assumption of market clearing.” This is not what I intended to convey. I think that part of the effect of nominal shocks propagates through market-clearing monetary misperceptions (Lucas islands), and the rest through non-market-clearing nominal rigidities, or as I wrote in the original post, “because some wages, prices, and contracts don’t adjust instantaneously.” I am not as New Classical as Bill seems to think. I like some elements of the New Classical school, but in the end I think the correct theory of macro for now is pluralism.

In the long run, I do think that markets mostly clear. And I think that Bill must agree, for he writes at the end of his post:

On the other hand, most of us do believe that firms eventually cut prices and wages in the face of persistent surpluses of output and labor. Most of us remain puzzled by the slow adjustment.

This is my point. If our problems were purely cyclical, “eventually” would have happened already, so our problems must not be purely cyclical. Time to start looking at structural explanations.

Scott Sumner and cutting-edge research

I was pleased to get a reply from the high priest of market monetarism himself, Scott Sumner.

I addressed the plausibility of sticky wages here, and in numerous other posts in reply to Tyler Cowen and George Selgin. I’d also point out that there is lots of cutting-edge research that tells us that the “common sense” approach to the wage stickiness hypothesis is not reliable. By common sense I mean: “Come on, wouldn’t the unemployed have cut their wage demands by now.” Yes, they would have, but that doesn’t solve the problem. This is partly (but not exclusively) for reasons discussed in this recent Ryan Avent post.

Well, OK, I followed the first link, which gives the usual argument and then ends with the line, “Until we get a more plausible theory of unemployment, I’m sticking with stickiness.” This is honest, and it certainly is a common view, but I don’t think it’s a good idea to rely so heavily on a theory just because we don’t yet understand competing theories well. Macro of the gaps, I call it.

We have a long way to go in macro, so I’m glad that Scott brings up the issue of cutting-edge research. If he has particular examples of recent work that undermines the common sense approach, he should write about them at greater length. I assume that when he says “cutting-edge” he is not referring to the papers cited in Ryan’s post, since those are both from the 1980s.

Speaking of cutting-edge research, let me point everyone to a paper, “Countercyclical Restructuring and Jobless Recoveries,” by David Berger, a new PhD from Yale and now a professor at Northwestern. Berger builds a model in which firms grow fat during expansions and respond to recessions by laying off their least productive workers. His model generates jobless recoveries and matches the new stylized facts about business cycles (they have changed since the 1980s) pretty well.

One thing that I like about the Berger paper is that it shows why some nominal shocks, if not addressed immediately, are not easily reversible by monetary authorities. Once a firm has fired its least productive workers, it is not going back. If the monetary authority wants to prevent a recession, at least post-1984, it needs to act before firms lay off their workers. This perspective actually bolsters the case for NGDP targeting, because it means that the Fed should have an apparatus in place now so that the economy will be automatically stabilized when the next shock hits. Here is Tyler on Berger.
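The mechanism as I read it can be sketched in a toy simulation. This is my own simplification with made-up numbers, not Berger’s actual model: a firm hires workers of mixed productivity during the boom, sheds the least productive when demand falls, and then meets the recovery in demand with a leaner, more productive workforce.

```python
# Toy sketch of a Berger-style jobless recovery (my simplification,
# with invented numbers -- not the actual model in the paper).
import random

random.seed(0)

# Boom: the firm hires 100 workers with heterogeneous productivity.
workers = [random.uniform(0.5, 1.5) for _ in range(100)]
boom_output = sum(workers)

# Recession: demand falls 20%; the firm keeps only its most productive
# workers, just enough to meet the lower demand.
target = 0.8 * boom_output
workers.sort(reverse=True)
kept = []
for w in workers:
    if sum(kept) >= target:
        break
    kept.append(w)

# Recovery: demand returns to its boom level, but the surviving workforce
# is more productive on average, so fewer rehires are needed.
avg_productivity = sum(kept) / len(kept)
rehires_needed = (boom_output - sum(kept)) / avg_productivity

print(f"employment in recession: {len(kept)} of 100")
print(f"rehires needed to restore boom output: {rehires_needed:.0f}")
print(f"post-recovery employment: {len(kept) + rehires_needed:.0f} of 100")
```

Output recovers fully while employment does not, because the fired low-productivity workers are replaced by a smaller number of average-productivity hires.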

My question for Scott, since he’s so interested in cutting-edge research, is: “What do you think about Berger’s paper?” I assume that Scott is familiar with the changes in business cycles that Berger documents. Does he not think that Berger’s model accounts for some significant fraction of our current unemployment better than simply sticky wages forever?

The bottom line

None of my critics seem willing to make any sort of broadly falsifiable claim about how long the short run lasts. (I should say that, in the bulk of his post, Bryan is not arguing that we are necessarily in the short run.) There is a lot of assuming trend stationarity, talk about output gaps, and pointing to literature I am well aware of—in short, a lot of question begging.

I would like to see a greater emphasis in the blogosphere on understanding stylized facts about recessions, a greater willingness to explore micro phenomena (even if we are not using fully microfounded models), and more macro-ecumenicism. No one school of macro has it all figured out, and that includes market monetarism. There is enough ambiguity in our current situation that reasonable people can disagree about what is going on. But I don’t think that reasonable people can be totally certain that all we need is more nominal stimulus.

The Short Run is Short

I’m a fan of Scott Sumner, NGDP level targeting, and many of the ideas of market monetarism in general. However, unlike many of those who support these ideas, I am pessimistic that QE3 will fix the economy, and I worry that too much celebration by market monetarists over the structure of easing will only serve to undermine what remains good in market monetarism if and when the economy fails to recover quickly. In particular, I think that many commentators fail to appreciate the mainstream macroeconomic distinction between short run and long run analysis, and that many economists overestimate how long the short run lasts.

The case for stimulus is based on monetary non-neutrality. If we double the money supply, the real productive capacity of the economy does not increase—real productive capacity has nothing to do with monetary factors. However, because people are tricked, and because some wages, prices, and contracts don’t adjust instantaneously, output may go up briefly. Business owners see an increase in nominal demand for their products and mistakenly assume that it is an increase in real demand. They see this as a profit opportunity, so they expand production. As prices, wages, and contracts adjust to the new money supply and their assumption is revealed to be false, they cut back on production to where they were before.

If we view the recession as a purely nominal shock, then monetary stimulus only does any good during the period in which the economy is adjusting to the shock. At some point during a recession, people’s expectations about nominal flows get updated, and prices, wages, and contracts adjust. After this point, monetary stimulus doesn’t help.

Obviously, there is no signal that is fired to let everyone know that the short run is over, so reasonable people can disagree about how long the short run lasts. But I think there is good reason to think that the short run is over—it is short, after all.

My first bit of evidence is corporate profits. They are at an all-time high, around two and a half times as high in nominal terms as they were during the late 1990s, our last real boom.

If you think that unemployment is high because demand is low and therefore business isn’t profitable, you are empirically mistaken. Business is very profitable, but it has learned to get by without as much labor.

A second data point is the duration of unemployment. Around 40 percent of the unemployed have been unemployed for six months or longer. And the mean duration of unemployment is even longer, around 40 weeks, which means that the distribution has a high-duration tail.
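The tail claim follows from simple arithmetic: when the mean duration sits well above the median, a minority of very long spells must be dragging it up. A quick illustration with made-up numbers (not actual BLS data):

```python
# Toy illustration (invented durations, not BLS data): a mean duration
# well above the median implies a long right tail of unemployment spells.
import statistics

# Hypothetical sample of unemployment spells in weeks: most are short,
# but a minority of very long spells pulls the mean far above the median.
durations = [4, 5, 6, 8, 10, 12, 14, 16, 20, 26, 40, 60, 90, 120, 130]

mean = statistics.mean(durations)
median = statistics.median(durations)
long_term_share = sum(d >= 26 for d in durations) / len(durations)

print(f"mean: {mean:.1f} weeks, median: {median:.1f} weeks")
print(f"share unemployed 26+ weeks: {long_term_share:.0%}")
```

In this sample the mean is more than twice the median, mirroring the skew in the actual duration data.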

Now, do you mean to tell me that four years into the recession, for people who have been unemployed for six months, a year, or even longer, wage demands are still sticky? This seems implausible.

A third argument I’ve heard a lot is that mortgage obligations have remained high (sticky contracts) while income has gone down. Garett Jones endorses this as a theory of monetary non-neutrality, and I agree. In fact, I beat him to it. But just because debt can make money non-neutral in the short run does not mean that we are still in the short run.

In fact, there is good evidence that here too we are out of the short run. Household debt service payments as a percent of disposable personal income are lower than they have been at any point in the last 15 years.

Yes, this graph includes mortgage payments.

So what is the evidence that we are still in the short run? I think a lot of people assume that because unemployment remains above 8 percent, we must be in the short run. But this is just assuming the conclusion. There are structural hypotheses for higher unemployment, but even if unemployment is cyclical, it doesn’t mean that monetary adjustment has failed to occur—real sector recalculation may just take longer than monetary recalculation.

Again, I favor NGDP targeting, but it is most effective when it is done simultaneously with the nominal shock. Evan Soltas points to the case of Israel, and indeed, the Israelis did it right. But it seems like wishful thinking to assume that four to five years after a nominal shock, you can fix the economy with monetary stimulus.

I would be delighted to be wrong. And I wouldn’t be surprised to see a slight decrease in unemployment as the result of QE3. But I would be surprised if we experience a plummeting of unemployment in the next two years down to what we previously thought of as “normal” levels of around 5 percent. Yes, it is good that the Fed is now using the expectations channel, but it did it four to five years too late, and there is little theory or evidence that this failure can be easily reversed.

UPDATE: I reply to my critics here.

Debt is Worse than You Think

Toban Wiebe, a brilliant young economist of my acquaintance, writes In Praise of Consumer Debt:

Debt is a wonderful thing. But many intelligent and responsible people have debt aversion, believing that the optimal level of debt is zero. They proudly brag when they’ve paid off their mortgages that they are debt-free. This is flat out stupid.

Toban makes excellent points about consumption smoothing and debt-savings equivalence, points that I agree with. Nevertheless, I am not so cheerful about debt, either as a matter of personal finance or in terms of macroeconomics.

Think about personal finance in terms of the Capital Asset Pricing Model (CAPM). What CAPM says is that differences in expected rates of return are driven by differences in risk premia. Arbitrage across different asset classes drives out any excess returns. This just means that there is no free lunch in tradable investments—if you expect a higher reward, it’s just because you are taking on more risk.

So let’s look at a person who expects “a large increase in future income (with a high degree of certainty),” Toban’s subject. Let’s assume that because of the expected large increase in future income, this person is highly unlikely to go bankrupt, and that he is smart enough not to buy a house at the top of a bubble, so he is highly unlikely to default on a mortgage. And then let’s look at some scenarios:

1. The person takes out a mortgage at 4% to pay for a house. He pays off the house per the mortgage agreement, as slowly as possible. He accumulates additional assets which he puts into Treasurys that pay out 1%.
2. The person takes out a mortgage at 4% to pay for a house. He pays off the house per the mortgage agreement, as slowly as possible. He accumulates additional assets which he puts into a well-diversified stock portfolio.
3. The person takes out a mortgage at 4% to pay for a house. He pays off the house as quickly as possible. As he accumulates additional assets, he plows them into the mortgage to reduce his liabilities.

I think that Toban would agree that given my assumptions, the person in scenario 1 is “flat out stupid.” This person could earn an excess return of 3% by putting additional assets into the mortgage, as is done in scenario 3. But what CAPM claims is that there is a kind of equivalence between scenario 1 and scenario 2. The higher expected return to a well-diversified stock portfolio is merely compensating the asset holder for additional risk. That means that there is still an excess return of 3% to paying off the mortgage in comparison to holding stocks. This means that scenario 3 dominates the other two as an investment strategy.
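The arithmetic behind the dominance claim can be sketched with assumed numbers (a 4% mortgage rate and a 1% Treasury yield, compounded annually; an illustration, not financial advice):

```python
# Toy comparison of scenarios 1 and 3: allocating $10,000 of extra savings
# to Treasurys versus prepaying the mortgage. All rates are assumptions.
mortgage_rate = 0.04   # assumed fixed mortgage rate
treasury_rate = 0.01   # assumed risk-free Treasury yield
years = 10
principal = 10_000.0   # extra savings to allocate

# Scenario 1: hold Treasurys while paying the mortgage as slowly as possible.
treasury_value = principal * (1 + treasury_rate) ** years

# Scenario 3: prepay the mortgage; each prepaid dollar avoids compounding
# 4% interest -- a risk-free return equal to the mortgage rate.
prepay_value = principal * (1 + mortgage_rate) ** years

print(f"Treasurys after {years} years:       ${treasury_value:,.2f}")
print(f"Prepayment after {years} years:      ${prepay_value:,.2f}")
print(f"Risk-free excess from prepaying: ${prepay_value - treasury_value:,.2f}")
```

On CAPM logic, scenario 2’s higher expected stock return is merely compensation for risk, so the risk-adjusted comparison is the risk-free one above, and prepayment wins by the rate spread.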

There are a lot of caveats to this: maybe you underestimate the probability that you will default on your mortgage, maybe there is an equity premium (does anyone still believe this?), or maybe risk tolerance is all you have to sell (3% seems like a bad price, though). But the bottom line is that paying off your personal debts as quickly as possible is often a very good investment strategy. This is not to contradict Toban’s point about consumption smoothing—such smoothing often makes sense, and I have nothing against mortgages or debt per se. Just be smart about comparing the rate of return on your liabilities to the rate of return on your risk-equivalent assets.

The debt-averse attitude that Toban is combating is a widely held folk belief, and it finds its strongest expression in traditional religions, especially Islam. Say what you will about the truth of folk beliefs, but even if they are false, they are often socially useful and approximately efficient. Even if it’s not “morally wrong” to take on debt, it may be efficient for people to believe that it is.

In fact, it does seem that it would at least be efficient if people were a little more squeamish about debt. Our current macroeconomic problems are pretty severe—but would they have been so bad if people were not so levered up? I think the answer is pretty clearly not. Nominal shocks are amplified by fixed-value nominal claims. Hail Hyman Minsky!

A debt-averse society is a more robust society. If this robustness has value to Toban, then it is rational for him to encourage debt aversion, not discourage it. It is morally wrong and foolhardy to take on too much debt, Toban should claim. Those of us who understand the economics of debt should be more Straussian.

5 Reasons TacoCopters will be More Important than Hoverbikes

In Forbes, the excellent Adam Ozimek agrees with me that TacoCopters—commercial drones that deliver goods—will be an important economic advance. However, he thinks that they will also cause some economic drawbacks, and that on balance, hoverbikes—like this one designed by Aerofex, but also other flying human transporters more generally—will be more important.

I’m not convinced. Here are a few reasons.

1. Most humans have legs

Adam leads off with an interesting point: “humans are just a kind of stuff, and there is no reason to think that quadrotors won’t move us around in the future too.” But there is an important difference between humans and stuff. Most humans have the ability and will to navigate the last few steps after you drop them off. This means that the margin of error for navigation (though not the margin of error for safety) is higher for human transport, whereas non-human cargo must be delivered literally to the doorstep.

A big part of the problem that TacoCopters are solving is this “last few steps” problem. There’s not an equivalence between solving it for stuff and for humans, because most humans don’t have a pressing need for it to be solved. I will concede that for disabled people for whom the last few steps is a challenge, wheelchaircopters would represent an important advance.

2. The marginal benefit of flying cars over regular autonomous cars is not that high

Let’s say you already have an autonomous car and you use it to commute to work. On your commute, you spend your time reading, catching up on Twitter, applying makeup, etc.—not driving. An autonomous flying car might save you a few minutes on your commute. But it won’t save you any time on net, because you will still need or want to read, catch up on Twitter, or apply makeup before or after you get to work. Because you were not wasting your time driving in the first place, a faster commute saves you almost no time.

Furthermore, even if we assume that the use of your commute time was suboptimal, regular autonomous cars will get us places faster than human-driven cars today. That is because they will be able to use vehicle-to-vehicle communication to drive together more closely, to coordinate intersections automatically, and to notify each other of any remaining traffic incidents. Flying—and especially hovering—simply won’t create that much of a gain for getting about town.

I think we are much more likely to use human-transporting drones in long-distance travel than in daily driving. There are a lot of moderately wealthy people who could afford a small private jet, but could not also afford to employ a pilot full time. Drone technology will bring private jets into the realm of possibility for a higher percentage of the population. They will also be used in commercial air travel, but there, pilot salaries are not an enormous fraction of the cost.

3. TacoCopters will create way more employment opportunities than they destroy

Adam and I are both interested in the ZMP hypothesis—that a non-trivial fraction of unemployment is caused by the fact that some of the unemployed literally cannot be profitably hired, that they have zero marginal product. And we agree that it is likely to be an even more important hypothesis in the future; robots really might steal our jobs!

Although TacoCopters could put a few hundred thousand deliverymen out of work, think of all the new business opportunities that they will generate. As Adam says, the world is not flat. But with TacoCopters, cities, at a minimum, would become flat. New enterprises would be able to open up in low-rent districts and, at very low cost, deliver goods to the entire metropolitan area. Even if it’s not literally the unemployed deliverymen who start these businesses, they could be hired in non-delivery roles by the new entrepreneurs.

Adam worries about the cultural effects of robots stealing jobs, and this concerns me too. But TacoCopters will lead to an entrepreneurial boom! And I think we can all agree that the cultural effects of an entrepreneurial boom are good, at least on net.

4. We may be screwed on the ZMP front anyway

If I am wrong about all of the new entrepreneurial opportunities that TacoCopters will create, it’s still not clear how big a marginal contribution TacoCopters will make to our problems. If TacoCopters create a lot of ZMP workers, they will probably not be alone; other artificial intelligence technologies will create millions more. These millions of ZMP workers, and others who sympathize with them, will almost certainly vote themselves a basic income.

Now, it’s possible that ZMPers who formerly worked in the delivery sector could be the straw that breaks the camel’s back. If, collectively, they were the deciding vote on a basic income, that could be a bad outcome. But the current delivery sector is not large enough to make this likely. A more plausible outcome is that the acceleration in ZMP unemployment caused by TacoCopters enables a basic income to pass a year earlier than it otherwise would. While this is a negative outcome in Adam’s view (and mine), it is not culturally significant for the negative shock to arrive just a little bit sooner.

5. Hoverbikes face higher regulatory barriers—and consequently may never make it to market

In my last post, I expressed concern that regulation would unnecessarily delay the introduction of TacoCopters. Whatever the other merits of hoverbikes, they are likely to face even higher regulatory hurdles than TacoCopters. It’s only fair to discount the benefits of an innovation by the likelihood that they won’t materialize. And if consumer safety or other regulations are too onerous, hoverbikes might not just get delayed—they might get outlawed.

As Adam notes, I have bet him $100 that he won’t own a Star Wars-style speeder by the end of 2020. This is a bet that I think I will win, but that I hope to lose. I think in general we are approaching a really exciting time in the high-tech sector. We’ve had a lot of advances in computer engineering, both in hardware and in software, and we’re getting to the stage where those advances are yielding applications in the physical world, not just new computer applications. While some of these new possibilities have downside risks, I think it’s important that we as a society continue to experiment rapidly. Legalize innovation.