Saturday, January 30, 2016

Budget Deficits and Federal Debt from CBO

There's a considerable argument over federal borrowing. Were the deficits during the Great Recession too small to help boost the recovery, or too large? Were the deficits reduced too quickly, or not quickly enough? Are the ongoing deficits too large, or too small? That gentle "whoosh" you hear in the distance is me sidestepping those questions in this post. (For the record, I've taken the unpopular position that the path of federal borrowing was approximately correct during the Great Recession and its aftermath, thus managing to prove yet again the old adage that if you try to stand in the middle of the road, you'll get hit by traffic from both directions.)

Here, I want to offer the mild suggestion that debates about what budget deficits should be, or should have been, might be usefully informed by the path they have actually taken, and where they are headed. Here's the quick summary from the "Budget and Economic Outlook: 2016-2026," just released by the Congressional Budget Office.  Lots more detail on spending and taxes is available in the report.

Here's the path of federal budget deficits over time, and the "baseline" projections for the next 10 years, which assume that existing laws and rules relating to the budget will remain in place. The deficits are expressed as a share of GDP--that is, relative to the size of the overall economy.
And here's the path of the debt/GDP ratio over time. The deficit is the amount borrowed by the federal government in a given year. The debt is the accumulation of borrowing over time. Again, it is expressed relative to GDP.

When mulling over appropriate size of deficits in the next few years, here are a couple of points to keep in mind.

1) As noted earlier, the CBO estimates are "baseline" forecasts, which means that if Congress has passed a law saying that spending will be reduced or certain taxes will be raised three or five or ten years into the future, the CBO assumes that these changes will actually happen. However, the CBO recognizes that Congress has a propensity to play games by passing budgets that project smaller deficits in the future through tax and spending policies that don't actually arrive. Thus, CBO also offers some "alternative" forecasts if some of these changes that are part of current law don't actually happen.

For example, the CBO forecast for 2020 is for a deficit of $810 billion. But this assumes that certain automatic future spending reductions in the 2011 Budget Control Act are actually enforced. If these cuts are not enforced, the deficit would be $97 billion higher in 2020. As another example, there is a rule in the tax code allowing for "expensing" of certain business investments, which means that a firm can count the entire cost of an investment in the year the spending happens, rather than depreciating the investment over time. If that expensing provision in tax law is extended into the future at a 50% rate, the projected deficit for 2020 would be another $52 billion larger. Bottom line: the "baseline" deficit estimate may be an underestimate, because it is based on the willingness of Congress to make some tough decisions in the future.
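The baseline-versus-alternative arithmetic can be sketched in a few lines, using only the figures quoted above (a toy calculation, not a CBO model):

```python
# Baseline vs. alternative 2020 deficit, in billions of dollars,
# using the figures quoted above (illustrative arithmetic only).
baseline_2020 = 810

# Add-ons if current-law provisions are not actually carried out:
adjustments = {
    "Budget Control Act cuts not enforced": 97,
    "50% expensing provision extended": 52,
}

alternative_2020 = baseline_2020 + sum(adjustments.values())
print(f"baseline 2020 deficit:    ${baseline_2020} billion")
print(f"alternative 2020 deficit: ${alternative_2020} billion")  # $959 billion
```

Each provision Congress declines to enforce simply stacks on top of the baseline, which is why the baseline is best read as a lower bound under current law.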

2) Keeping the debt/GDP ratio roughly constant or even reducing it gradually over time doesn't require anything close to a balanced budget. For example, the debt/GDP ratio fell through the 1960s and 1970s, although the US government ran annual deficits almost every year. The fact that the debt/GDP ratio has been relatively flat during the last year or two means that annual deficits have been of a size such that total debt is growing at about the same speed as GDP. The rise in the debt/GDP ratio projected by the CBO a few years out means that, under the baseline calculation of existing law, annual deficits are projected to be large enough that debt (the numerator of the ratio) will grow at a faster rate than GDP (the denominator).
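A toy simulation, with invented numbers rather than actual CBO figures, shows why steady deficits can coexist with a flat debt/GDP ratio:

```python
# Toy numbers (not CBO figures): a government that runs a deficit every
# year can still keep its debt/GDP ratio flat, as long as debt grows no
# faster than nominal GDP.
gdp = 1000.0        # nominal GDP, in billions
debt = 750.0        # accumulated debt, 75% of GDP
gdp_growth = 0.04   # 4% annual nominal GDP growth

for year in range(1, 6):
    deficit = debt * gdp_growth   # deficit sized so debt grows at GDP's pace
    debt += deficit               # this year's borrowing adds to the debt stock
    gdp *= 1 + gdp_growth
    print(f"Year {year}: deficit {deficit:.1f}, debt/GDP {debt / gdp:.3f}")
    # the ratio stays at 0.750 every year, despite a deficit every year
```

If annual deficits instead push debt to grow faster than GDP, the ratio rises; conversely, the ratio can fall without a balanced budget, so long as debt grows more slowly than GDP.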

Friday, January 29, 2016

Financial Risks: Views from the Office of Financial Research

The Wall Street Reform and Consumer Protection Act of 2010, commonly known as the Dodd-Frank Act, set up an Office of Financial Research. Its website describes its purpose this way: "The Office of Financial Research (OFR) helps to promote financial stability by looking across the financial system to measure and analyze risks, perform essential research, and collect and standardize financial data." In December 2015, the OFR published its Financial Stability Report for 2015.

The report emphasizes three main risks facing the US economy: 1) credit risks for US nonfinancial businesses and emerging markets; 2) the behaviors encouraged by the ongoing environment of low interest rates; and 3) situations in which financial markets are not resilient, as manifested in shortages of liquidity, run and fire-sale risks, and other areas. Here are some thoughts from the report.
"First and most important, credit risks are elevated and rising for U.S. nonfinancial businesses and many emerging markets. ... In 2015, U.S. nonfinancial business debt continued to grow rapidly, fueled by highly accommodative credit and underwriting standards. The ratio of that debt to gross domestic product has moved above pre-crisis highs, and corporate leverage continues to rise. So far, distress in U.S. credit markets has been largely limited to the lowest-rated debt issuers and the energy and commodity industries. However, that distress may spread, because investors now appear to be reassessing the credit and liquidity risks in these markets. ... The combination of higher corporate leverage, slower global growth and inflation, a stronger dollar, and the plunge in commodity prices is pressuring corporate earnings and weakening the debt-service capacity of many U.S. and emerging market borrowers. A shock that significantly further impairs U.S. corporate or emerging market credit quality could potentially threaten U.S. financial stability."
Here's a figure showing growth in outstanding US nonfinancial corporate bonds, which as a share of GDP have risen back close to where they were just before the 2008 financial crisis. The report also notes: "“Covenant-lite” loans, which contain less legal protections for creditors, accounted for approximately two-thirds of institutional leveraged loan volumes this year, and the share of low-rated debt with weak covenants has continued to increase ..."

"Another key source of risk is the eventual normalization of Federal Reserve monetary policy. Past U.S. monetary tightening cycles have been associated with increased volatility in corporate bond markets and, in some cases, increased spreads ... The next cycle could be more destabilizing for corporate debt markets, given that it will unwind an extraordinary period of low interest rates and associated yield-seeking investor behavior. U.S. corporate bond markets may face an exodus of these investors as yields on safer instruments increase. ... The fact that U.S. nonfinancial business debt has expanded so rapidly since the financial crisis suggests that even a modest default rate could lead to larger absolute losses than in previous default cycles."

These issues also spill over into emerging market debt. Here's a figure showing the level of corporate debt (not including government or household debt) in some emerging markets. Speaking of this debt in emerging markets, the report notes:
"A significant deterioration in the liquidity of large nonfinancial corporates could create financial difficulties for domestic banks and governments. In some emerging market countries, more than half of banks’ outstanding loans are to corporate borrowers, leaving many banking sectors vulnerable to losses in the event of broad-based corporate distress. In addition, a large share of emerging market corporate external bonds is issued by quasi-governmental borrowers, meaning that corporate distress could also activate contingent or legal liabilities for some governments, with associated effects on government debt markets. This is a particular problem for countries where the central government has less fiscal space, such as Brazil. Increased dependence on foreign exchange-denominated debt is also a concern, given that depreciating currencies increase the difficulty of servicing the debt."

"Second, the low interest rate environment may persist for some time, with associated excesses that could pose financial stability risks. ... The persistence of low rates contributes to excesses that could pose financial stability risks, including investor reach-for-yield behavior, tight risk premiums in U.S. bond markets, and, as noted, the high level and rapid growth of U.S. nonfinancial business debt. ... The duration of investors’ U.S. bond portfolios remains at historic highs ..., increasing their exposure to interest rate risk. ..."
"Third, although the resilience of the financial system has improved significantly in the past five years, it is uneven. Since the financial crisis, regulatory reforms and changes in risk management practices have strengthened key institutions and markets critical to financial stability. Yet, existing vulnerabilities persist and some new ones have emerged. Financial activity and risks continue to migrate, challenging existing regulations and reporting requirements. Market liquidity appears to be episodically fragile in major U.S. financial markets, diminishing sharply under stress. Run and fire-sale risks persist in securities financing markets. Interconnections among financial firms are evolving in ways not fully understood, for example, in the growing use of central clearing. ... The financial system is highly complex, dynamic, and interrelated, making it exceedingly challenging to monitor developments in every corner of the system and adequately assess the probability and magnitude of all important risks. ..."
"Funding conditions remain broadly stable, though market liquidity episodically appears to be fragile — an amplifier of financial stress. This fragility was evident in the 2010 U.S. equity flash crash, the 2013 U.S. Treasury market sell-offs, the October 2014 Treasury “flash rally,” and other episodes ... Runs in short-term wholesale funding markets were a key source of systemic stress during the financial crisis. Although progress has been made to address this vulnerability, run risk persists in these markets years later. ... Banks’ reliance on repo financing has diminished and concentration risks have eased as these institutions have diversified their funding sources and reduced their client-financing operations. Broker-dealers have also reduced their reliance on repo financing, particularly in net terms, but overall these firms remain highly dependent on such financing and at risk of runs in a stress scenario ..."
Of course, saying something is a "risk" doesn't mean that disaster is imminent, but rather that attention on the topic should be elevated. I'd expect that some of these topics--not sure which ones!--will be the stuff of headlines in the next year or so.

Thursday, January 28, 2016

Against Multiple Regression--And Experiments Have Issues, Too

It's not actually true that all econometrics students are required to get a tattoo reading "Correlation is not causation" on an easily visible part of their body. But I suspect that late at night, when grading midterm exams, some professors have considered the merits of imposing such a requirement. Richard Nisbett is a prominent professor of psychology at the University of Michigan. He offers a 36-minute lecture titled "The Crusade Against Multiple Regression Analysis," which was posted on The Edge website on January 21, 2016. For those who would rather read than watch, a transcript is also posted.

Although the talk is aimed more at psychologists than economists, it's a nice accessible overview of these subjects, which apply across the social sciences--and toward the end, Nisbett has some comments that should be thought-provoking for economists about the potential strength of social influence. But at the start, Nisbett says:
"A huge range of science projects are done with multiple regression analysis. The results are often somewhere between meaningless and quite damaging. I find that my fellow social psychologists, the very smartest ones, will do these silly multiple regression studies, showing, for example, that the more basketball team members touch each other the better the record of wins.
I hope that in the future, if I’m successful in communicating with people about this, there’ll be a kind of upfront warning in New York Times articles: These data are based on multiple regression analysis. This would be a sign that you probably shouldn’t read the article because you’re quite likely to get non-information or misinformation. ...
What I most want to do is blow the whistle on this and stop scientists from doing this kind of thing. ... I want to do an article that will describe, similar to the way I have done now, what the problem is. I’m going to work with a statistician who can do all the formal stuff, and hopefully we’ll be published in some outlet that will reach scientists in all fields and also act as a kind of "buyer beware" for the general reader, so they understand when a technique is deeply flawed and can be alert to the possibility that the study they're reading has the self-selection or confounded-variable problems that are characteristic of multiple regression."
The basic issue arises when a study looks at two variables that are correlated with each other, but doesn't take into account all the relevant factors which may be causing the correlation, and thus misinterprets the correlation. Here's a quick-and-dirty example from Nisbett:
"A while back, I read a government report in The New York Times on the safety of automobiles. The measure that they used was the deaths per million drivers of each of these autos. It turns out that, for example, there are enormously more deaths per million drivers who drive Ford F150 pickups than for people who drive Volvo station wagons. Most people’s reaction, and certainly my initial reaction to it was, "Well, it sort of figures—everybody knows that Volvos are safe."
Let’s describe two people and you tell me who you think is more likely to be driving the Volvo and who is more likely to be driving the pickup: a suburban matron in the New York area and a twenty-five-year-old cowboy in Oklahoma. It’s obvious that people are not assigned their cars. We don’t say, "Billy, you’ll be driving a powder blue Volvo station wagon." Because of this self-selection problem, you simply can’t interpret data like that. You know virtually nothing about the relative safety of cars based on that study."
Again, Nisbett's point is that just because pickup trucks are correlated with more accidents or deaths does not in any way prove that pickup trucks are less safe. Maybe they are. Or maybe the reason behind the correlation is that pickup trucks are more likely to attract certain kinds of drivers, or to be driven in certain ways, and thus have higher death rates for reasons that have little to do with the vehicles themselves.

Like many quick-and-dirty examples, this one is both useful and oversimplified. Multiple regression analysis is a way of doing correlations that lets you take a lot of other factors into account and hold them constant. In this example, one could include "control variables" about the drivers of cars, including age, gender, occupational category, parental status, rural/urban residence, state of residence, and the like. If you've got the detailed data, the kinds of factors that Nisbett mentions here can be taken into account in a multiple regression analysis--and if a study didn't take at least some of those factors into account, it's clearly statistical malpractice.
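To see how a control variable changes the answer, here is a minimal simulation of the car-safety example. All numbers, and the "riskiness" variable itself, are invented for illustration:

```python
# Simulated version of the car-safety example (all numbers invented).
# A risk-taking trait drives both the choice of a pickup and the death rate;
# the vehicle itself has NO effect. A naive regression blames the truck;
# a regression that controls for the trait does not.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
risk = rng.normal(size=n)                                # driver riskiness
pickup = (risk + rng.normal(size=n) > 0).astype(float)   # risky drivers choose trucks
deaths = 2.0 * risk + rng.normal(size=n)                 # outcome depends on risk only

# Naive regression: deaths on pickup ownership alone
X1 = np.column_stack([np.ones(n), pickup])
b_naive = np.linalg.lstsq(X1, deaths, rcond=None)[0]

# Controlled regression: add the confounding variable
X2 = np.column_stack([np.ones(n), pickup, risk])
b_ctrl = np.linalg.lstsq(X2, deaths, rcond=None)[0]

print(f"naive pickup coefficient:      {b_naive[1]:.2f}")  # large and spurious
print(f"controlled pickup coefficient: {b_ctrl[1]:.2f}")   # close to zero
```

The catch, as the next paragraph notes, is that this fix only works when the confounding factor is actually in your data.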

But what's even worse is the situation where the available data doesn't describe the key difference. A standard example in economics classrooms is the study of how much getting more education influences future wages. Sure, it's easy to draw up a correlation and show that, on average, college graduates make more money. But it's plausible that college graduates may differ from non-college graduates in other ways, too. Perhaps they have more persistence. Perhaps they are more likely to be personally related to people who can help them find high-paying jobs. Perhaps they are smarter. Perhaps employers treat college completion as a signal that someone is more likely to be a good employee, and offer job possibilities accordingly.

Economists have been hyper-aware of these issues for at least a couple of decades, and arguably longer. For an overview of the approaches and methods they use to try to circumvent these problems, a useful starting point is the essay by Joshua D. Angrist and Jörn-Steffen Pischke, "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics," which appeared in the Spring 2010 issue of the Journal of Economic Perspectives, (24:2, pp. 3-30).

Nisbett also points out that a lot of psychology experiments compare people in one situation with people in a slightly different situation, but such studies can be hard to replicate because so many things differ between two situations, and people aren't very aware of what influences their decisions. Nisbett describes one of his own experiments this way:
"The single thing I’ve done that has gotten the most notice was my work with Tim Wilson showing that we have very little access to the cognitive processes that go on that produce our behavior. ... I’ll give an example of the experiment we did. We have two experiments, and in the first experiment—a learning experiment—people memorize word pairs. There might be a pair like "ocean-moon" for some subjects. Then we’re through with that experiment: "Thank you very much." The other fellow has this other experiment—on free association, where you are to give an example of something in a category we give you. So, one of the categories we give is, "Name a laundry detergent." People are much more likely to mention Tide, if they’ve seen that ocean-moon pair. You ask them, "Why did you come up with that? Why did you say Tide?" "Well, that’s what my mother uses," or "I like the Tide box." And you say, "Do you remember learning that word pair—ocean-moon?" "Oh, yeah." "Do you suppose that could’ve had an influence?" "No. I don’t think so.""
Toward the end of the talk, Nisbett offers what I interpreted as a useful challenge to economists. When economists think about how to alter some behavior, they tend to think in terms of incentives for the individuals. Nisbett offers a useful reminder that social incentives often affect our behavior quite powerfully, and we are only dimly aware that these forces are operating on us. Nisbett offers some vivid examples of social incentives in action:
I hear the word incentivize, I say, "If imagination fails, incentivize." There are so many more ways of getting people to do what’s in their own interests and society’s interests. Absolutely the most powerful way that we have is to create social influence situations where you see what it is that other people are doing and that’s what you do. I took up tennis decades ago and it turned out that most of my friends had taken up tennis. I dropped it a few years later and it turned out that the tennis courts were empty. I took up cross-country skiing, and how about that, these other people do it. Then we lost interest in it, and find out our friends don’t do that anymore.
How about minivans and fondue parties? You do things because other people do them. And one very practical important consequence of this was worked out by Debbie Prentice and her colleague at Princeton, [Dale Miller]. Princeton has a reputation of being a heavy drinking school. ... Prentice and Miller had the idea to find out how much drinking goes on. They had the strong intuition that less drinking goes on than people think goes on, because on Monday a kid comes in and says, "I was stoned all week," when in actuality he was studying all Sunday for the exam. In a setting where people are drinking a lot, you get prestige for drinking a lot. If you get good grades despite the fact that you’re drinking a lot, then that makes you look smarter. They found out how much people are actually drinking, and then they fed this information back to students and said, "This is how much drinking goes on." Drinking plummeted down to something closer to the level of what was actually going on.
Here's something that saved hundreds of millions of dollars and millions of tons of carbon dioxide being poured into the atmosphere in California by a social psychologist team led by Bob Cialdini. He hangs tags on people’s doors if they’re using more electricity than their neighbors saying, "You’re using more electricity than your neighbors," and that sends electricity usage down. However, you shouldn’t hang a tag on their door saying, "You’re using less electricity than your neighbors" because then people start using more electricity—unless you put a smiley face on the bottom. You’re using less electricity than your neighbor’s and a smiley face ... oh, that’s a good thing, I’ll keep it up.

Wednesday, January 27, 2016

Environment vs. Economy: A Shift in American Opinion

Consider how you would answer this question from the Gallup poll organization: "With which one of these statements about the environment and the economy do you most agree--protection of the environment should be given priority, even at the risk of curbing economic growth (or) economic growth should be given a priority, even if the environment suffers to some extent?" James W. Boyd and Carolyn Kousky show the results when Gallup surveyed Americans on this question in "Are We Becoming Greener? Trends in Environmental Desire," which appears in the Winter 2016 issue of Resources, published by Resources for the Future.

As the figure shows, when this question was first asked in the mid-1980s and through the 1990s, a typical answer was that two-thirds or more of Americans gave priority to the environment. But there was a substantial shift in the answers from about 2000 up through 2004. The proportions giving each priority shifted to roughly half and half; indeed, during the Great Recession and its aftermath, people put a higher priority on the economy than on the environment.

Of course, interpreting poll results is a tricky business, and Boyd and Kousky offer a useful discussion of all sorts of interpretations one might place on these results. Perhaps those taking the poll are answering not whether they value the environmental protection steps taken in the past, but whether they think additional steps should be a high priority moving forward. Perhaps the term "environmental protection" has over time become equated in some people's minds with "big government," and so the results are showing a rise in negative feelings about government. Perhaps in the aftermath of the dot-com recession of 2001, Americans started to feel more concerned about future economic prospects.

The question of what people "prefer" is a delicate one for economists, who try to make a logical and analytical separation between people's preferences and the choices they end up making. For example, imagine that people have a certain preference for coffee, which means that at a given price, they will demand a specific quantity of coffee. But now imagine that poor weather damages the coffee crop, the price of coffee rises, and people choose to purchase a lower quantity of coffee as a result. An economist would argue that even though people are buying a reduced quantity of coffee, their "preference" for coffee didn't change. Instead, the change in what was bought and sold in the market was the result of a change in growing conditions for coffee. For an economist, a change in preference would involve buying more coffee at the same market price (and the same income level, and holding other factors constant, too). In the present context, people may perceive that the cost of environmental protection has been rising over time, but given a specific environmental protection policy with a high ratio of benefits to costs, they might favor it.
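The coffee example can be made concrete with a toy linear demand curve (the numbers are invented):

```python
# A toy linear demand curve for coffee; numbers are invented.
def quantity_demanded(price):
    """The demand curve itself encodes the preferences, and it never changes."""
    return max(0.0, 100 - 10 * price)

# Bad weather shrinks the coffee harvest, pushing the price from $4 to $6.
q_before = quantity_demanded(4.0)   # 60 units bought at the old price
q_after = quantity_demanded(6.0)    # 40 units bought at the new price

# People buy less coffee, but the function (the "preference") is unchanged:
# the movement happened along the demand curve, not a shift of the curve.
print(q_before, q_after)
```

In the economist's terms, the supply shock moved the market along a fixed demand curve; only a change in the function itself would count as a change in preferences.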

I'm sure lots of other interpretations are possible, but I wanted to mention one more point. Opinion surveys sometimes seem to suggest that problems can be solved right now, but both environmental protection and economic growth have a long-run aspect. Many of the costs of environmental protection are felt in the shorter term, while the benefits (say, in human health or environment protected) are longer-term. Similarly, economic growth often is built on investments in the short run--human capital, physical capital, technology--which can seem pricey in the present, but pay off over longer periods of time. Choosing either environmental protection or economic growth, properly understood, involves accepting less consumption in the present in order to plan for the future.

Tuesday, January 26, 2016

Two Speeches on Behavioral Economics

If you want to get up to speed on what behavioral economics is all about, two prominent speeches recently given at the annual meetings of the American Economic Association, held earlier this month in San Francisco, offer a useful starting point. The AEA Presidential Address was given by Richard Thaler on the subject of "Behavioral Economics: Past, Present, and Future." The Richard T. Ely lecture was given by John Y. Campbell on the subject of "Restoring Rational Choice: The Challenge of Consumer Financial Regulation." Campbell's lecture can be thought of as an application of behavioral economics to a specific policy area. The lectures will be published in print in the next few months. However, video of the roughly hour-long lectures, accompanied by the slides, is available at the AEA website.

Thaler's lecture is, as the title suggests, a broad overview of behavioral economics, and it should be quite accessible to a broad listenership. His basic argument is to point out that simple economic models assume that economic agents (people and firms) seek to act in an optimal way. I would add, just to be clear, that this approach to economic modelling is not the same as arguing that people always have full and complete information, or that people have perfect abilities to perceive and calculate in all situations. It's not all that hard to do standard economic modelling with agents who lack full information or who can't do high-powered calculations very well, and who as a result will sometimes make mistakes. One way of restating the basic assumption of standard economics is that people will try to optimize, in the sense that at least they won't make the same mistakes over and over again.

Behavioral economics suggests that people often do make certain mistakes over and over again. People are often overconfident, and believe that they are well above average in many categories. People have self-control problems when it comes to saving, exercising, diet, and working hard. They are more averse to losses than they are enthusiastic about gains. They often stick with default choices, rather than considering alternatives. People often care about fairness, especially in social settings. They may be motivated in some settings, but not others. The idea of not making the same mistake over and over again probably works well in certain markets with frequent and low-stakes interactions: in the market for restaurant meals, for example, if you hate the meal, don't go back. But lots of life decisions get made only once or a limited number of times, like choice of college major, career, who you marry, buying a house, or how much you save for retirement. Some of these choices are easier to alter than others! No one gets to rewind life, over and over again, so that you can learn from your mistakes and avoid them the second or third or 15th time around.

Thaler gives lots of examples of how people with these very real psychological biases will act in ways that, from the viewpoint of a standard economic model, look "irrational." As a result, there will be situations where people make choices that they might later regret, ranging from small choices like paying for an extended warranty that never gets used to large choices like those leading to bubbles and crashes in housing or stock markets. One of the most prominent examples in this literature is that if an employer puts workers into a retirement saving plan by default, they tend to stay in the plan, and if an employer doesn't put workers into a retirement saving plan by default, they tend to stay out of the plan. Thus, whether a given person ends up with retirement savings is often not a matter of personal choice, but a matter of what default was chosen by their employer. These insights have a very wide array of applications in many contexts: tax evasion, health insurance, Food Stamps, and many others. Thaler's lecture is a very nice broad overview.

John Campbell offers an application of these ideas in the area of household finance. His lecture is quite accessible, if not quite as accessible as Thaler's (that is, Campbell has a very small number of equations and reports a small number of statistical results). Campbell points out that people seem to systematically make what surely look like mistakes in their household decision-making. For example, some employers offer an "employer match" in retirement accounts--that is, if you put some amount like 5% of your income in a retirement account, the employer will also put in 5%. It's free money! But lots of people don't take it. As another example, many people don't refinance their home mortgage when interest rates fall. Or many people put their retirement savings in a low-return financial instrument (like bonds) instead of a higher-return instrument (like stocks). Or many people take on lots of short-term debt at high interest rates, including through credit card debt, overdrawing their bank account and paying fees, or payday loans. People are often financially illiterate, in the sense that they don't know the difference between a nominal and a real rate of interest, or how interest compounds over time, and similar problems.
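Two of the financial-literacy points mentioned here, the nominal/real distinction and compounding, can be illustrated with made-up numbers:

```python
# Made-up rates to illustrate two financial-literacy points.

# 1) Nominal vs. real interest rates: a 5% nominal return during 3% inflation
#    is roughly a 2% real return (exactly: (1 + nominal)/(1 + inflation) - 1).
nominal, inflation = 0.05, 0.03
real = (1 + nominal) / (1 + inflation) - 1
print(f"real return: {real:.4%}")   # about 1.94%, a bit under 2%

# 2) Compound interest: an unpaid $1,000 credit-card balance at 18% per year
#    roughly quintuples over a decade.
balance = 1000 * (1 + 0.18) ** 10
print(f"balance after 10 years: ${balance:,.0f}")   # about $5,234
```

Someone who subtracts inflation from the nominal rate, or who multiplies interest by ten instead of compounding it, will badly misjudge both of these situations.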

Thus, Campbell argues that there is a case for financial regulation in certain settings that goes beyond the standard rules requiring disclosure of information, and moves toward setting rules. The tradeoff here is that setting specific rules can protect people from their own behavioral biases and ignorance. On the other side, rules may impose costs on those who are not so susceptible to such biases. As one example, consider the market for credit cards. The credit card companies offer what sounds like a very attractive deal: cash-back refunds, frequent flyer miles, and the like. Indeed, if you follow the rules and make all your payments in full and on time, and take advantage of the benefits, a credit card more than pays for itself. But if you fall behind in your payments, and start paying interest and fees, credit card borrowing can be very costly. If a new rule places sharp limits on credit card interest rates and fees, it will in some ways benefit those who could have become trapped in credit card debt (although they will also find it harder to get a credit card). But it also seems likely that if the credit card companies make less money from late fees and interest, they will also charge higher annual fees and offer less attractive options like cash-back and airline flights.
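The credit-card tradeoff just described can be illustrated with hypothetical numbers (the rates and balances below are assumptions, not figures from Campbell's lecture):

```python
# Hypothetical card terms (assumptions, not figures from the lecture).
annual_spend = 12_000            # total charged to the card over the year
cashback_rate = 0.015            # 1.5% cash back on purchases
apr = 0.20                       # 20% annual interest on revolved balances
avg_revolving_balance = 3_000    # balance carried month to month

rewards = annual_spend * cashback_rate      # about $180 in cash back
interest = avg_revolving_balance * apr      # about $600 in interest
print(f"net annual effect: {rewards - interest:+.0f} dollars")  # -420
```

For a convenience user with a zero revolving balance, the same card is pure gain, which is exactly the cross-subsidy that rules capping fees and interest would disturb.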

The case for rules that block certain fees or charges shouldn't be made casually, and the reason for the existence of economists in the universe is to warn about tradeoffs. But Campbell makes a strong case that in several contexts--getting people to invest long-term retirement accounts in stocks, limiting the fees and charges from short-term borrowing through credit cards and payday loans, and reverse mortgages for the elderly--such rules are worth serious consideration.

Monday, January 25, 2016

Unemployment is Bottoming Out, So What's Next?

The unemployment rate, now hovering around 5%, has dropped a lot more quickly during the last three years than mainstream forecasters predicted. For example, back in February 2013, the Congressional Budget Office was forecasting that the unemployment rate would be 7.1% in 2015 and 6.3% in 2016. As another example, all participants in the meetings of the Federal Reserve Open Market Committee present their forecast of economic conditions looking ahead. In the "Summary of Economic Projections" prepared for its December 2012 meeting, the predicted range for the unemployment rate in 2015 was 5.7% to 6.8%. To put it another way, the unemployment rate in the last three years has fallen by more than even the most optimistic member of the Federal Reserve Open Market Committee believed was likely.

But now, the unemployment rate is close to bottoming out. The Congressional Budget Office forecast for January 2016 is that the unemployment rate will fall to 4.5% in 2016 and 2017, but will then rise back to 5.0% in the long run. In the Federal Reserve's "Summary of Economic Projections" for its meeting of December 16, 2015,  the median prediction is that unemployment will fall to 4.7% for the next three years from 2016 to 2018 but in the long run will rise back to 4.9%.

Other measures of the labor market also suggest that it is very close to returning to pre-recession levels. For example, here's a figure from the latest Job Openings and Labor Turnover Survey that came out in mid-January, which is a survey with a rotating sample of about 16,000 firms run by the US Bureau of Labor Statistics. The figure shows that the pace of hiring has returned almost to pre-recession levels, and that job openings have surged faster. The number of workers who quit typically drops during a recession, because it's harder to find an alternative job, so the rising number of "quits" is a signal of a healthier labor market.

In short, while the unemployment rate may dip a bit lower or bounce a bit higher, the long fall in the unemployment rate from its peak of 10% in October 2009 is mostly finished. So where should one look next to see whether the labor market continues to improve? Two broad metrics come to mind: Are wages starting to rise more quickly, as one would expect in a tighter labor market? And what's the share of Americans who have jobs?

There are some early and very preliminary signs that wages might finally be starting to rise. For example, here's a figure from the "US Economy in a Snapshot" published earlier in January 2016 by the Federal Reserve Bank of New York.

Of course, what actually matters is not just wages, but whether wages are keeping ahead of inflation. Here's a figure in which the blue line shows the Employment Cost Index, a broad measure of how much costs of compensation (including benefits) are rising. The red line shows the rate of inflation based on the Personal Consumption Expenditures index. Notice that back around 2003-2004, compensation increases were running above inflation. The rate of inflation leaps up and down, in part driven by short-term shifts in prices of oil and food, but you can see that compensation wasn't running much ahead of inflation--if at all--from when the recession got underway up through about 2012. However, just in the last year or so, compensation has again been running ahead of inflation.
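The comparison in the figure boils down to simple arithmetic: compensation gains ground in real terms only when its growth rate exceeds the inflation rate. Here is a minimal sketch, using made-up rates rather than actual ECI or PCE readings:

```python
# Real compensation growth from nominal compensation growth and inflation.
# The rates below are illustrative, not actual ECI or PCE data.

def real_growth(nominal_growth, inflation):
    """Exact real growth rate via the Fisher relation."""
    return (1 + nominal_growth) / (1 + inflation) - 1

# (year, compensation growth, inflation) -- assumed values for illustration
observations = [
    ("2004", 0.038, 0.026),   # compensation running ahead of inflation
    ("2011", 0.020, 0.025),   # inflation eating up the nominal gain
    ("2015", 0.020, 0.003),   # low inflation, so real gains reappear
]

for year, comp, infl in observations:
    r = real_growth(comp, infl)
    trend = "ahead of" if r > 0 else "behind"
    print(f"{year}: real compensation growth {r:+.2%} ({trend} inflation)")
```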

Another labor market measure that deserves a close look moving ahead is the relationship between the unemployment rate and some broader measures of labor market involvement. In this figure, again taken from the "US Economy in a Snapshot," the blue line shows the unemployment rate rising during the recession and falling again. The red line shows the labor force participation rate, which I've written about on this blog a number of times. The beige line shows the employment/population ratio.

The downward trend in the labor force participation rate started well before the recession, and as I've discussed on this website before (for example, here and here), the long-term trend reflects factors like young adults being more likely to attend school, low-wage workers having fewer job opportunities, and the baby boom generation reaching retirement age. This long-term fall in the labor force participation rate accelerated during the Great Recession, but more recently the rate of decline has slowed again.

A pattern I haven't written about much on this blog is the employment-to-population ratio, which has declined over time for many of the same reasons as the labor force participation rate. The economic concept of the "labor force" includes both the employed and the unemployed--that is, everyone who has a job or is actively looking for one. However, the employment/population ratio includes only the employed, which is why it drops more sharply during recessions. It's interesting that the employment/population ratio actually stabilized back around 2010 and has even increased a little since then.
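The relationships among these three measures are purely definitional, so they can be sketched from the same raw counts (the counts below are illustrative, not actual BLS figures):

```python
# The three labor market measures in the figure, computed from the same
# raw counts. All counts are assumed, illustrative values in millions.

employed = 150.0      # people with jobs
unemployed = 8.0      # people without jobs who are actively looking
population = 250.0    # civilian noninstitutional population, age 16+

labor_force = employed + unemployed          # employed + actively looking
unemployment_rate = unemployed / labor_force
participation_rate = labor_force / population
employment_pop_ratio = employed / population

print(f"unemployment rate:         {unemployment_rate:.1%}")
print(f"labor force participation: {participation_rate:.1%}")
print(f"employment/population:     {employment_pop_ratio:.1%}")

# In a recession, the unemployed count rises and some people leave the
# labor force entirely; the employment/population ratio falls through
# both channels, which is why it drops more sharply than the
# participation rate.
```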

Maybe it goes without saying, but just because unemployment is bottoming out doesn't mean that everything is hunky-dory for everyone in the labor market. But the labor market problems of an unemployment rate at 5% are at least different from, and less terrible than, the problems of an unemployment rate at 10%.

Friday, January 22, 2016

Philanthropy, American Style

The Philanthropy Roundtable has published its most recent Almanac of American Philanthropy. The report offers lots of possibilities for browsing. For example, there's a lengthy list of "Great Philanthropy Quotations," a "Timeline of American Philanthropy: 1936-2015," a "Philanthropy Hall of Fame," and more. Alex Reid, a tax attorney and former counsel to the Joint Committee on Taxation of the U.S. Congress, offers a rock-ribbed historical and philosophical (if not especially economic) defense of why charitable contributions should be tax-deductible. But as is my wont, I'm inevitably drawn to the graphs and figures.

For example, here's a graph of giving over time, with the top line (measured on the left axis) showing total real charitable giving in the US, and the bottom line (measured on the right axis) showing per capita giving. As the report points out, another way of describing these figures is that Americans give about 2% of GDP: "But it’s interesting that even as we have become a much wealthier people in the post-WWII era, the fraction we give away hasn’t risen. There seems to be something stubborn about that 2 percent rate."

Where do the charitable donations go? As the report notes, donations to "religion" end up being spent in a variety of areas: "Much religious charity, however, ultimately goes into sub-causes like relief for the poor, medical care, education, or aid sent to low-income countries or victims of disaster."

And for all the talk of charitable giving by big foundations, the vast majority of charitable donations are by individuals. About two-thirds of Americans donate to charity, including over 90% of those with annual incomes above $125,000.

Here's a graph showing "Output of the Nonprofit Sector" as a share of GDP. A few points about this calculation are worth remembering: 1) With some industries, like steel mills or haircuts, it's relatively easy to measure output. With the nonprofit sector, what you're really measuring is money spent, rather than outcomes provided. 2) Volunteer time of over 8.1 billion hours per year isn't counted in the "output" statistics of nonprofits, because although it would have a market value well above $100 billion (depending on how you value people's time), it's not paid for. 3) It turns out that some charities registered with the IRS are counted as "businesses," rather than as nonprofits. That said, it's still interesting that, as the report notes: "For perspective, consider that annual U.S. defense spending totals 4.5 percent of GDP. The nonprofit sector surpassed the vaunted “military-industrial complex” in economic scope way back in 1993."

If one looks at private philanthropy as a share of GDP, Americans lead the high-income countries of the world, with Canada in second place.

Finally, there's a potentially interesting discussion topic for students--and for us all, really--on the subject of "Good Charity, Bad Charity?" The Almanac takes the following stance:

Some activists today are eager to define what is good or bad, acceptable or unacceptable, in other people’s giving. Princeton professor Peter Singer has lately made it almost a career to pronounce that only certain kinds of philanthropic contributions ought to be considered truly in the public interest. Only money given directly to “the poor” should be counted as charitable, he and some others argue.
Former NPR executive Ken Stern constructed a recent book on this same idea that charity must be “dedicated to serving the poor and needy.” Noting that many philanthropists go far beyond that limited population, he complains that it is “astonishingly easy to start a charity; the IRS approves over 99.5 percent of all charitable applications.” He disapprovingly lists nonprofits that have “little connection to common notions of doing good: the Sugar Bowl, the U.S. Golf Association, the Renegade Roller Derby team in Bend, Oregon, and the All Colorado Beer Festival, just to name a few.”
Is that a humane argument? Without question, the philanthropy for the downtrodden launched by people like Stephen Girard, Nicholas Longworth, Jean Louis, the Tappans, Milton Hershey, Albert Lexie, and Father Damien is deeply impressive. But the idea that only generosity aimed directly at the poor (or those who agitate in their name) should count as philanthropic is astoundingly narrow and shortsighted. Meddling premised on this view would horribly constrict the natural outpouring of human creativity.
Who is to say that Ned McIlhenny’s leaps to preserve the Negro spiritual, or rescue the snowy egret, were less worthy than income-boosting? Was the check that catalyzed Harper Lee’s classic novel bad philanthropy? Were there better uses for Alfred Loomis’s funds and volunteer management genius than beating the Nazis and Imperial Japanese military?
Even if you insist on the crude utilitarian view that only direct aid to the poor should count as charity, the reality is that many of the most important interventions that reduce poverty over time have nothing to do with alms. By building up MIT, George Eastman struck a mighty blow to increase prosperity and improve the health and safety of everyday life—benefiting individuals at all points on the economic spectrum. Givers who establish good charter schools today are doing more to break cycles of human failure than any welfare transfer has ever achieved. Donors who fund science, abstract knowledge, and new learning pour the deep concrete footings of economic success that have made us history’s most aberrant nation—where the poor improve their lot as much as other citizens, and often far more.
And what of the private donors who stoke the fires of imagination, moral understanding, personal character, and inspiration? Is artistic and religious philanthropy just the dabbling of bored and vain wealthholders? Aren’t people of all income levels lifted up when the human spirit is cultivated and celebrated in a wondrous story, or haunting piece of music, or awe-engendering cathedral?
When a donation is offered to unlock some secret of science, or feed an inspiring art, or attack some cruel disease, one can never count on any precise result. But it’s clear that any definition which denies humanitarian value to such giving, because it doesn’t go directly to income support, is crabbed and foolish. Much of the power and beauty of American philanthropy derives from its vast range, and the riot of causes we underwrite in our millions of donations.
In practice, my own answer to all those rhetorical questions is that my personal tax-deductible giving goes to a range of causes that includes the arts, education, and conservation, as well as activities supporting the poor. But one can make a case that it's too easy to qualify for tax-deductible contributions, and that the rules should be tightened.

Wednesday, January 20, 2016

Snapshots and Visualizations of the Global Economy

For me, figures based on data about the global economy are like reliable sources of indirect light in a room where I'm working. Even when they are out of view, or when I'm not thinking about them explicitly, they shed some light on whatever the topic of my work is that day. For example, I recently ran across this breakdown of world GDP by country on the cost information website "How Much" in the short article, "One Diagram That Will Change the Way You Look At the US Economy" (July 21, 2015).

The country share of GDP (using market exchange rates to convert national currency values) is shown by the dark line. The lighter subdivisions within each country show the share of the economy that is services (darker area), manufacturing (medium area), and agriculture (lighter area). The color scheme differentiates continents, as the key to the figure shows.

Here's a quick collection of some other global economy figures that have appeared on this blog during the last year or two. If you want more detailed commentary about the figures and their sources, you can check the original blog posts.

Here's a different representation of world GDP, this one from the World Bank's International Comparison Project. The horizontal axis shows population for various countries. The vertical axis shows per capita GDP. Thus, the area for each country (per capita GDP multiplied by population) gives the size of the country's economy. The original post was "GDP Snapshots from the International Comparison Project" (May 9, 2014).
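The logic of the figure can be sketched in a few lines: each country's rectangle has population on one axis and per capita GDP on the other, so its area is total GDP. The numbers below are deliberately round, illustrative values, not the International Comparison Project's own figures.

```python
# Total GDP as the "area" of each country's rectangle:
# population times per capita GDP. Values are rough, round illustrations.

countries = {
    # name: (population in millions, per capita GDP in US$)
    "United States": (320, 55_000),
    "China": (1_360, 13_000),
    "India": (1_250, 5_500),
}

for name, (pop_millions, gdp_per_capita) in countries.items():
    # millions of people * dollars per person, expressed in trillions
    total_gdp_trillions = pop_millions * 1e6 * gdp_per_capita / 1e12
    print(f"{name}: ~${total_gdp_trillions:.1f} trillion")
```

The same arithmetic explains why a wide, short rectangle (huge population, low per capita GDP) can enclose nearly as much area as a narrow, tall one.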

This is a similar figure with population and per capita GDP from a McKinsey Global Institute report at "Global Economic Growth: All Productivity, All the Time"  (January 20, 2015). However, this figure shows the distribution for 1964 and also the distribution 50 years later in 2014, thus illustrating the rise in population, standard of living, and the size of the world economy over those 50 years.

Some areas of the world obviously have more economic activity than others. What's the geographic center of the world economy? It's a little tricky to carry out this calculation on the sphere of the world, but here's the result of one such calculation in "The Shifting Geographic Center of the World Economy" (June 3, 2015). The center of the world economy was near China 2000 years ago. During the industrial revolution of the 19th century and on into the 20th century, the geographic center moved to Europe and then over toward the United States. But now the geographic center is shifting back toward China.

One way of illustrating income inequality is with a graph that has the range of incomes on the horizontal axis, and the share of people receiving any given level of income on the vertical axis. This kind of graph shows the most common level of income and the distribution of income. In this figure on "Global Income Inequality in Decline" (May 5, 2015), the global distribution of income for 2003 (brown line) is compared with the distribution in 2013 (blue line) and a projected distribution for 2035 (red line). You can see the rise in mean and median incomes over time. Moreover, economic growth around the globe, but especially in previously low-income places like China and India, means that not as much of the world population is bunched down at the bottom of the income distribution, and for that reason the world distribution of income is becoming more equal.

Over the last couple of centuries, a common pattern has been that the high-income economies were also the faster-growing and largest economies. But over the last few decades, that pattern has been changing. The consulting firm PricewaterhouseCoopers estimates that by 2050, the six largest economies in the world will be, in order, China, India, United States, Indonesia, Brazil, Mexico. Thus, we are entering a world "When High GDP No Longer Means High Per Capita GDP" (October 20, 2015). Here's a figure showing GDP in per capita terms for some large high-income countries and also a group of lower-income emerging market countries. By 2050, economic growth will make many of the countries on the right-hand side of this figure among the largest in the world. But even after several more decades of faster-than-global-average growth, their per capita GDP will not have yet caught up to the levels in the US and other high-income countries.

Finally, here's a depiction of the distribution of global wealth from Credit Suisse in "Snapshots of Global Wealth" (October 15, 2014). If you have more than $100,000 in wealth (and yes, your housing equity and your retirement account are included here), then you are sitting above the 90th percentile of the world wealth distribution. If you have more than $1,000,000 in wealth (or if you plan to end up at that level of wealth by the time you reach retirement age), you are in the 99th percentile of world wealth.

Tuesday, January 19, 2016

Digital Dividends and Development

"Digital technologies have spread rapidly in much of the world. Digital dividends—the broader development benefits from using these technologies—have lagged behind. In many instances digital technologies have boosted growth, expanded opportunities, and improved service delivery. Yet their aggregate impact has fallen short and is unevenly distributed." Those are the opening words of the 2016 World Development Report from the World Bank, which focuses on the theme of "Digital Dividends." The report does a nice job of wrapping its arms around this big unruly topic, with lots of concrete facts and examples. Here's a quick overview of some points that caught my eye.

The evidence on the spread of digital technologies around the world in the last decade or so is quite remarkable. The dark solid line rising sharply in this figure shows the spread of mobile phone technology to more than 80% of the population. Internet and mobile broadband are growing too, as the lines at the bottom of the figure show, but access to mobile phones has actually outstripped access to improved water, electricity, improved sanitation, and secondary schools.

But one can view digital access as half-full or half-empty. As the report notes: "First, nearly 60 percent of the world’s people are still offline and can’t participate in the digital economy in any meaningful way. ... The internet, in a broad sense, has grown quickly, but it is by no means universal. For every person connected to high-speed broadband, five are not. Worldwide, some 4 billion people do not have any internet access, nearly 2 billion do not use a mobile phone, and almost half a billion live outside areas with a mobile signal."

In what ways can the spread of digital technologies benefit the process of economic development, or economic growth more broadly? Digital technologies are what economists sometimes call "general purpose" technologies; they can be broadly applied in a very wide variety of contexts.
"Perhaps the greatest contribution to growth comes from the internet’s lowering of costs and thus from raising efficiency and labor productivity in practically all economic sectors. Better information helps companies make better use of existing capacity, optimizes inventory and supply chain management, cuts downtime of capital equipment, and reduces risk. In the airline industry, sophisticated reservation and pricing algorithms increased load factors by about one-third for U.S. domestic flights between 1993 and  2007. The parcel delivery company UPS famously uses intelligent routing algorithms to avoid left turns, saving time and about 4.5 million liters of petrol per year. Many retailers now integrate their suppliers in real-time supply chain management to keep inventory costs low. Vietnamese firms using e-commerce had on average 3.6 percentage point higher TFP [total factor productivity] growth than firms that did not use it. Chinese car companies that are more sophisticated users of the internet turn over their inventory stocks five times faster than their less savvy competitors. And Botswana and Uruguay maintain unique ID and trace-back systems for livestock that fulfill requirements for beef exports to the EU, while making the production process more efficient."
What about specifically helping the poor in developing countries?
"The biggest gains from digital technologies for the poor are likely to come from lower information and search costs. Technology can inform workers about prices, inputs, or new technologies more quickly and  cheaply, reducing friction and uncertainty. That can eliminate costly journeys, allowing more time for  work and reducing risks of crime or traffic accidents. Using technology for information on prices, soil  quality, weather, new technologies, and coordination with traders has been extensively documented in agriculture ... In Honduras, farmers who got market price information via short message service (SMS) reported an increase of 12.5 percent in prices received. In Pakistan, mobile phones allow farmers to shift to more perishable but higher return cash crops, reducing postharvest losses from the most perishable crops by 21–35 percent. The impacts of reduced information asymmetries tend to be larger when learning about information in distant markets or among disadvantaged farmers who face more information constraints. ..."
"In 12 countries surveyed in Africa, 65 percent of people believe that their family is better off because they have mobile  phones, whereas only 20 percent disagree (14.5 percent not sure). And 73 percent say mobile phones help save on travel time and costs, with only 10 percent saying otherwise. Two-thirds believe that having a mobile phone makes them feel more safe and secure."
To me, one intriguing application of digital technologies is to offer people a proof of identification. One of the most remarkable efforts along these lines is India’s Aadhaar system, in which about 900 million people have a 12-digit number which is linked to biometric information.

"Identity should be a public good. Its importance is now recognized in the post-2015 development agenda, specifically as a Sustainable Development Goal (SDG) target to “promote peaceful and inclusive societies for sustainable development, provide access to justice for all, and build effective, accountable, and inclusive institutions at all levels.” One of the indicators is to “provide legal identity for all, including birth registration, by 2030.” The best way to achieve this goal is through digital identity (digital ID) systems, central registries storing personal data in digital form and credentials that rely on digital, rather than physical, mechanisms to authenticate the identity of their holder. ...
"India’s Aadhaar program dispenses with the card altogether, providing remote authentication based on the holder’s fingerprints or iris scan. Online and mobile environments require enhanced authentication features—such as electronic trust services, which include e-signatures, e-seals, and time stamps—to add confidence in electronic transactions. Mobile devices offer a compelling proposition for governments seeking to provide identity credentials and widespread access to digital services. In Sub-Saharan Africa, for example, more than half of the population in some countries is without official ID, but more than two-thirds of the residents in the region have a mobile phone subscription. The developing world is home to more than 6 billion of the world’s 7 billion mobile subscriptions, making this a technology with considerable potential for registration, storage, and management of digital identity.  ...  Nigeria’s e-ID revealed 62,000 public sector “ghost workers,” saving US$1 billion annually. But the most important benefit may be in better integrating marginalized or disadvantaged groups into society. Digital technologies also enable the poor to vote by providing them with robust identification and by curtailing fraud and intimidation through better monitoring."
The report also discusses potential dangers of the spread of digital technology, including risks of greater concentration of large firms, a possible rise in economic inequality, and potential for government control of information. For example, there is some evidence of a "hollowing out" of jobs in a number of developing economies. The countries in the figure below are ranked from left to right by the annual change in the share of medium-skill jobs, shown by the medium-green bar. The darkest bars show the change in high-skilled jobs, while the lightest bars show the change in low-skilled jobs. A number of economies (although not China, shown on the far right) are seeing a drop in the share of jobs that involve medium skills.

A couple of final thoughts:

First, the upside of digital technologies is that they are general purpose and have such broad application. The corresponding downside is that such technologies need to be applied, and wisely applied, in a broad variety of contexts to have their most powerful effect. As the World Bank report notes:
Access to the internet is critical, but not sufficient. The digital economy also requires a strong analog foundation, consisting of regulations that create a vibrant business climate and let firms leverage digital technologies to compete and innovate; skills that allow workers, entrepreneurs, and public servants to seize opportunities in the digital world; and accountable institutions that use the internet to empower citizens.
I confess that part of this explanation just made me laugh. Only international bureaucrats at a place like the World Bank could un-selfconsciously write that what's first needed is "regulations," because apparently we all know that regulations are what "create a vibrant business climate." Well, at least we can agree that a favorable business climate is what's important! Along with human capital and good governance, of course.

The other point is that although the report is understandably focused on how digital technologies affect productivity and output, it also raises in a number of places the insight that many of the benefits of digital technology may not be captured very well by the economic values alone. For example, the report notes:
The digital revolution has brought immediate private benefits—easier communication and information, greater convenience, free digital products, and new forms of leisure. It has also created a profound sense of social connectedness and global community.
The connectedness and information flows of digital technology provide a very wide range of benefits. In economic terms, we measure those benefits by what users pay for the service. But like many innovations, what is provided was literally not possible to receive--or only possible at an extremely high price--before the innovation occurred. On a personal level, I receive very large benefits from access to the internet, by which I include use of computer, phone, and television. Thanks to the magic of somewhat competitive markets and their ongoing drive for innovation, what I actually pay for those services seems considerably less to me than the value of the benefits I receive.

Monday, January 18, 2016

Some Economics for Martin Luther King Jr. Day

On November 2, 1983, President Ronald Reagan signed the legislation establishing a federal holiday for the birthday of Martin Luther King Jr., to be celebrated each year on the third Monday in January. As the legislation that passed Congress said: "such holiday should serve as a time for Americans to reflect on the principles of racial equality and nonviolent social change espoused by Martin Luther King, Jr." Of course, the case for racial equality stands fundamentally upon principles of justice, not economics. But here are four economics-related thoughts for the day drawn from past posts. (This is a revised and altered version of a post that first ran on this holiday in 2015.)

1) Inequalities of race and gender impose large economic costs on society as a whole, because one consequence of discrimination is that it hinders people in developing and using their talents. In "Equal Opportunity and Economic Growth" (August 20, 2012), I wrote:
A half-century ago, white men dominated the high-skilled occupations in the U.S. economy, while women and minority groups were often barely seen. Unless one holds the antediluvian belief that, say, 95% of all the people who are well-suited to become doctors or lawyers are white men, this situation was an obvious misallocation of social talents. Thus, one might predict that as other groups had more equal opportunities to participate, it would provide a boost to economic growth. Pete Klenow reports the results of some calculations about these connections in "The Allocation of Talent and U.S. Economic Growth," a Policy Brief for the Stanford Institute for Economic Policy Research.

Here's a table that illustrates some of the movement to greater equality of opportunity in the U.S. economy. White men are no longer 85% and more of the managers, doctors, and lawyers, as they were back in 1960. High skill occupation is defined in the table as "lawyers, doctors, engineers, scientists, architects, mathematicians and executives/managers." The share of white men working in these fields is up by about one-fourth. But the share of white women working in these occupations has more than tripled; of black men, more than quadrupled; of black women, more than octupled.

Moreover, wage gaps for those working in the same occupations have diminished as well. "Over the same time frame, wage gaps within occupations narrowed. Whereas working white women earned 58% less on average than white men in the same occupations in 1960, by 2008 they earned 26% less. Black men earned 38% less than white men in the typical occupation in 1960, but had closed the gap to 15% by 2008. For black women the gap fell from 88% in 1960 to 31% in 2008."

Much can be said about the causes behind these changes, but here, I want to focus on the effect on economic growth. For the purposes of developing a back-of-the-envelope estimate, Klenow builds up a model with some of these assumptions: "Each person possesses general ability (common to all occupations) and ability specific to each occupation (and independent across occupations). All groups (men, women, blacks, whites) have the same distribution of abilities. Each young person knows how much discrimination they would face in any occupation, and the resulting wage they would get in each occupation. When young, people choose an occupation and decide how much to augment their natural ability by investing in human capital specific to their chosen occupation."

With this framework, Klenow can then estimate how much of U.S. growth over the last 50 years or so can be traced to greater equality of opportunity, which encouraged many women and members of minority groups who had the underlying ability to view it as worthwhile to make a greater investment in human capital.
"How much of overall growth in income per worker between 1960 and 2008 in the U.S. can be explained by women and African Americans investing more in human capital and working more in high-skill occupations? Our answer is 15% to 20% ... White men arguably lost around 5% of their earnings, as a result, because they moved into lower skilled occupations than they otherwise would have. But their losses were swamped by the income gains reaped by women and blacks."
At least to me, it is remarkable to consider that 1/6 or 1/5 of total U.S. growth in income per worker may be due to greater economic opportunity. In short, reducing discriminatory barriers isn't just about justice and fairness to individuals; it's also about a stronger U.S. economy that makes better use of the underlying talents of all its members.

2) Roland Fryer delivered the Henry and Bryna David Lecture at the National Academy of Sciences on the subject of "21st Century Inequality: The Declining Significance of Discrimination." I discussed this lecture in "The Journey to Becoming a School Reformer" (February 13, 2015). As Fryer tells the story, he was "asked in 2003 to explore the reasons for the social inequality in the United States." Fryer said:

"In two weeks I reported back that achievement gaps that were evident at an early age correlated with many of the social disparities that appeared later in life. I thought I was done. But the logical follow-up question was how to explain the achievement gap that was apparent in 8th grade. I’ve been working on that question for the past 10 years. I am certainly not going to tell you that discrimination has been purged from U.S. culture, but I do believe that these data suggest that differences in student achievement are a critical factor in explaining many of the black-white disparities in our society. It is no longer news that the United States is a lackluster performer on international comparisons of student achievement, ranking about 20th in the world. But the position of U.S. black students is truly alarming. If they were to be considered a country, they would rank just below Mexico in last place among all Organization of Economic Cooperation and Development countries. ... 
"When do U.S. black students start falling behind? It turns out that development psychologists can begin assessing cognitive capacity of children when they are only nine months old with the Bayley Scale of Infant Development. We examined data that had been collected on a representative sample of 11,000 children and could find no difference in performance of racial groups. But by age two, one can detect a gap opening, which becomes larger with each passing year. By age five, black children trail their white peers by 8 months in cognitive performance, and by eighth grade the gap has widened to twelve months."
Fryer goes on to describe his remarkable work, which seeks to learn from the experience of high-performing charter schools that do very well in bringing many African-American children from low-income families up to expected grade-level academic performance--and better--and then to apply those lessons in the context of actual big-city public schools. As I wrote in that blog post: 
It is remarkable to me that most of the cognitive performance gap for eighth-graders is already apparent among five-year-olds. As I've commented before in "The Parenting Gap for Pre-Preschool" (September 17, 2013), one possible reaction here is to think more seriously about home visitation programs for at-risk children in the first few years of life. 

3) For those who would like to know more about the economics of thinking about cause-and-effect in discrimination issues, a starting point might be this interview with Glenn Loury (July 2, 2014). Here's a slice of the discussion from that post:

A standard approach to studying discrimination in labor markets is to collect data on what people earn and their race/ethnicity or gender, along with a number of other variables like years of education, family structure, region where they live, occupation, years of job experience, and so on. This data lets you answer the question: can we account for differences in income across groups by looking at these kinds of observable traits other than race/ethnicity and gender? If so, a common implication is that the problem in our society may be that certain groups aren't getting enough education, or that children from single-parent families need more support--but that a pay gap which can be explained by observable factors other than race/ethnicity and gender isn't properly described as "discrimination." Loury challenges this approach, arguing that many of the observable factors are themselves the outcome of a history of discriminatory practices. He says:
"By that I mean, suppose I have a regression equation with wages on the left-hand side and a number of explanatory variables—like schooling, work experience, mental ability, family structure, region, occupation and so forth—on the right-hand side. These variables might account for variation among individuals in wages, and thus one should control for them if the earnings of different racial or ethnic groups are to be compared. One could put many different variables on the right-hand side of such a wage regression.
Well, many of those right-hand-side variables are determined within the very system of social interactions that one wants to understand if one is to effectively explain large and persistent earnings differences between groups. That is, on the average, schooling, work experience, family structure or ability (as measured by paper and pencil tests) may differ between racial groups, and those differences may help to explain a group disparity in earnings. But those differences may to some extent be a consequence of the same structure of social relations that led to employers having the discriminatory attitudes they may have in the work place toward the members of different groups.
So, the question arises: Should an analyst who is trying to measure the extent of “economic discrimination” hold the group accountable for the fact that they have bad family structure? Is a failure to complete high school, or a history of involvement in a drug-selling gang that led to a criminal record, part of what the analyst should control for when explaining the racial wage gap—so that the uncontrolled gap is no longer taken as an indication of the extent of unfair treatment of the group?
Well, one answer for this question is, “Yes, that was their decision.” They could have invested in human capital and they didn’t. Employer tastes don’t explain that individual decision. So as far as that analyst is concerned, the observed racial disparity would not be a reflection of social exclusion and mistreatment based on race. ... But another way to look at it is that the racially segregated social networks in which they were located reflected a history of deprivation of opportunity and access for people belonging to their racial group. And that history fostered a pattern of behavior, attitudes, values and practices, extending across generations, which are now being reflected in what we see on the supply side of the present day labor market, but which should still be thought of as a legacy of historical racial discrimination, if properly understood.
Or at least in terms of policy, it should be a part of what society understands to be the consequences of unfair treatment, not what society understands to be the result of the fact that these people don’t know how to get themselves ready for the labor market.
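Loury's point about endogenous right-hand-side variables can be illustrated with a toy simulation (my own construction, with made-up coefficients, not anything from the interview): when discrimination itself suppresses schooling, controlling for schooling in a wage regression absorbs most of the discrimination effect, shrinking the measured "unexplained" gap even though the entire gap traces back to discrimination.

```python
import numpy as np

# Hypothetical data-generating process: discrimination D lowers schooling S,
# and S raises wages; D also imposes a direct wage penalty. All coefficients
# are invented for illustration.
rng = np.random.default_rng(1)
n = 50_000
D = rng.integers(0, 2, n).astype(float)              # 1 = discriminated group
S = 12 - 2 * D + rng.normal(0, 1, n)                 # D suppresses schooling
wage = 10 + 1.5 * S - 1.0 * D + rng.normal(0, 1, n)  # plus a direct penalty

def ols_coef(y, cols):
    """OLS with an intercept; returns the slope coefficients on cols."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

raw_gap = ols_coef(wage, [D])[0]         # total gap: about -1.0 + 1.5*(-2) = -4
controlled_gap = ols_coef(wage, [D, S])[0]  # "unexplained" gap: about -1.0
print(f"raw gap {raw_gap:.2f}, gap controlling for schooling {controlled_gap:.2f}")
```

The regression with controls is not wrong as arithmetic; the question Loury raises is interpretive, namely whether the three-quarters of the gap "explained" by schooling should be counted as something other than discrimination when the schooling deficit is itself a product of discriminatory history.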

4) Extensions of copyright terms over time have meant that the speeches and writings of Martin Luther King Jr. and others in the U.S. civil rights movement are not easily available to, say, students in schools or the general public. This was one example I discussed in a post on "Absurdities of Copyright Protection" (May 13, 2014). The post discusses a paper by Derek Khanna called "Guarding Against Abuse: Restoring Constitutional Copyright," published as R Street Policy Study No. 20 (April 2014). Here, I'll just quote a couple of paragraphs from Khanna.

Excessively long copyright terms help explain why Martin Luther King’s “I Have a Dream” speech is rarely shown on television, and specifically why it is almost never shown in its entirety in any other form. In 1999, CBS was sued for using portions of the speech in a documentary. It lost on appeal before the 11th Circuit. If copyright terms were shorter than 50 years, then those clips would be available for anyone to show on television, in a documentary or to students. When historical clips are in the public domain, learning flourishes. Martin Luther King did not need the promise of copyright protection for “life+70” to motivate him to write the “I Have a Dream” speech. (Among other reasons, because the term length was much shorter at the time.) ...
Eyes on the Prize is one of the most important documentaries on the civil rights movement. But many potential younger viewers have never seen it, in part because license requirements for photographs and archival music make it incredibly difficult to rebroadcast. The director, Jon Else, has said that “it’s not clear that anyone could even make ‘Eyes on the Prize’ today because of rights clearances.” The problems facing Eyes on the Prize are a result of muddied and unclear case law on fair use, but also copyright terms that have been greatly expanded. If copyright terms were 14 years, or even 50 years, then the rights to short video clips for many of these historical events would be in the public domain.

Friday, January 15, 2016

Franchise the National Parks?

The idea of franchising the national parks raises images of Mickey Mouse ears on top of Half Dome at Yosemite, or the McDonald's "golden arches" as a scenic backdrop to the Old Faithful geyser in Yellowstone. But that's not what Holly Fretwell has in mind in her essay, "The NPS Franchise: A Better Way to Protect Our Heritage," which appears in the George Wright Forum (2015, vol. 32, no. 2, pp. 114-122). Instead, she is suggesting that a number of national parks might be better run as independent, nongovernment, conservation-minded operations with greater control over their own revenues and spending. In such an arrangement, the role of the National Park Service would be to evaluate the financial and environmental plans of possible franchisees, provide brand-name recognition and a degree of logistical support, and then make sure that franchisees' announced plans were actually carried out.

To understand the impetus behind Fretwell's proposal, you need first to face the hard truth that the national parks have severe financial problems, which manifest themselves both in decaying infrastructure for human visitors and in a diminished ability to protect the parks themselves (for example, sewer systems in parks affect both human visitors and environmental protection). Politicians are often happy to set aside more parkland, but spending the money to manage the land is a harder sell. If you accept as a basic constraint that federal spending on park maintenance isn't going to rise, or at least not rise sufficiently, then you are driven to consider other possibilities. Here's Fretwell on the current problems of the National Park Service (footnotes omitted):
As it enters its second century, NPS faces a host of challenges. In 2014, the budget of the National Park Service was $2.6 billion. The maintenance backlog is four times that, at $11.5 billion and growing. According to the National Parks Conservation Association (NPCA), about one-third of the shortfall is for “critical systems” that are essential for park function. Without upgrades, many park water and sewer systems are at risk. A water pipe failure in Grand Canyon National Park during the spring of 2014 cost $25,000 for a quick fix to keep water flowing, but is estimated to cost about $200 million to replace. Yellowstone also has antiquated water and wastewater facilities where past failures have caused environmental degradation. Sewer system upgrades in Yosemite and Grand Teton are necessary to prevent raw sewage from spilling into nearby rivers. Deteriorating electrical cables have caused failures in Gateway National Recreation Area and in Glacier’s historic hotels. Roads are crumbling in many parks. They are patched rather than restored for longevity. Only 10% of park roads are considered to be in better than “fair” condition. At least 28 bridges in the system are “structurally deficient,” and more than one-third of park trails are in “poor” or “seriously deficient” condition.
Cultural heritage resources that the parks are set aside to protect are also at risk. Only 40% of park historic structures are considered to be in “good” or better condition and they need continual maintenance to remain that way. Exterior walls are weakening on historic structures such as Perry’s Victory and International Peace Memorial in Ohio, the Vanderbilt Mansion in New York, and the cellhouse in Golden Gate National Recreation Area in California. Weather, unmonitored visitation, and leaky roofs are degrading cultural artifacts. Many of the artifacts and museum collections have never been catalogued. ... 
Even though the NPS maintenance backlog is four times the annual discretionary budget, rather than focus funding on maintaining what NPS already has, the system continues to grow. ... The continual expansion of park units and acreage without corresponding funding is what former NPS Director James Ridenour called “thinning the blood.” ...  The national park system has grown from 25.7 million acres and about 200 units in 1960 to 84.5 million acres and 407 units in 2015. Seven new parks were added under the 2014 National Defense Authorization Act and nine parks were expanded. The growth came with no additional funding for operations or maintenance—more “thinning the blood.”
I've had great family vacations in a number of national parks since I was a child. They were inexpensive to visit then, and they remain cheap. Indeed, there's sometimes an odd moment, when visiting a national park, when you realize that what you just spent at the gift shop, or for a family meal, considerably exceeds what you spent to enter the park. Fretwell writes:

Numerous parks have increased user and entrance fees for the 2015 summer season after seeking public input and Washington approval. Even with the higher fees, a visit to destination parks like Grand Canyon and Yellowstone costs $30 for a seven-day vehicle permit, or just over $1 per person per day for a family of four. ... The current low fees to enter units of the national park system typically make up a small portion of the total park visit expense. It has been estimated that the entry fee is less than 2% of park visit costs for visitors to Yellowstone and Yosemite. The bulk of the expenditures when visiting destination parks go to lodging, travel, and food. Higher fees have little effect on visitation to most parks. ... Even modest fees (though sometimes large fee increases) could cover the operating costs of some destination parks. About $5 per person per day could cover operations in Grand Canyon National Park, as would just over $10 in Yellowstone.
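The per-person arithmetic in the quoted passage is easy to verify:

```python
# A $30 seven-day vehicle permit, split across a family of four.
permit = 30.0
per_person_per_day = permit / 7 / 4
print(f"${per_person_per_day:.2f} per person per day")  # just over $1

# If that fee is under 2% of total visit costs, as estimated for Yellowstone
# and Yosemite, the implied total trip spending is at least:
implied_trip_cost = permit / 0.02
print(f"implied total trip cost: at least ${implied_trip_cost:,.0f}")
```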
An obvious question here is why the parks can't just raise fees on their own, but of course, that choice runs into political constraints as well. It is at least arguable that franchisees could spell out the facilities that need to be renovated or built, along with other services that could be offered, and then charge fees that would cover the costs.

Fretwell recognizes that not all national parks will have enough visitors to work well with a franchise model (for example, some of the huge national parks in Alaska), and a need for direct government spending on such parks will remain. But it's worth remembering that national park visitors tend to have above-average income levels. A franchise proposal can be understood as a way of circumventing the political constraints that first prevent national parks from raising their own revenues and then fail to allocate sufficient resources from other government funds. A group of franchise proposals would also give national parks a way to move away from "thinning the blood"--that is, struggling to persevere under tight and inflexible financial constraints--and instead offer an infusion of new ideas and ways to finance them.

Thursday, January 14, 2016

War on Cancer: Redux

In his 1971 State of the Union Address, President Richard Nixon launched what came to be known as the War on Cancer:
“I will also ask for an appropriation of an extra $100 million to launch an intensive campaign to find a cure for cancer, and I will ask later for whatever additional funds can effectively be used. The time has come in America when the same kind of concentrated effort that split the atom and took man to the moon should be turned toward conquering this dread disease. Let us make a total national commitment to achieve this goal.”
And now, 45 years later, in the 2016 State of the Union address, President Barack Obama relaunched the War on Cancer:
"Last year, Vice President Biden said that with a new moonshot, America can cure cancer. Last month, he worked with this Congress to give scientists at the National Institutes of Health the strongest resources that they’ve had in over a decade. So tonight, I’m announcing a new national effort to get it done. And because he’s gone to the mat for all of us on so many issues over the past 40 years, I’m putting Joe in charge of Mission Control. For the loved ones we’ve all lost, for the families that we can still save, let’s make America the country that cures cancer once and for all."
So how did that first War on Cancer turn out? At the tail end of 2008, just before President Obama took office, David Cutler took a stab at answering that question in "Are We Finally Winning the War on Cancer?" which appeared in the Fall 2008 issue of the Journal of Economic Perspectives (22:4, pp. 3-26). Here's a figure showing the mortality rate from cancer over time.
As Cutler reports, spending on cancer research and treatment did rise steadily after Nixon's speech, at about 4-5% per year. But as the figure shows, cancer death rates kept rising for a time as well, at about 8% per decade during the 1970s and 1980s. By 1997, the New England Journal of Medicine ran an article noting these trends called "Cancer Undefeated." Perhaps inevitably, that article was soon followed by a sharp decline in cancer mortality. Apparently, Obama is re-enlisting in a war on cancer that has been going pretty well for a couple of decades. 

But the War on Cancer has been fought with several different tools--and biomedical research on a "cure for cancer" isn't the biggest one. Cutler focuses on four main types of cancer: lung, colorectal, female breast, and prostate. After reviewing the evidence on each, he wrote:  
"[B]ehaviors, screening, and treatment advances for the four cancers I consider were each important in improved cancer survival. Together, they explain 78 percent of the reduction in cancer mortality between 1990 and 2004. Thirty-five percent of reduced cancer mortality is attributable to greater screening—partly through earlier detection of disease, and partly through removal of precancerous adenomas in the colon and rectum. Behavioral factors are next in importance, at 23 percent; the impact of smoking reductions on lung cancer is the single most important factor in this category. Finally, treatment innovation is third in importance, accounting for 20 percent of reduced mortality.
"The relative importance of these different strategies seems surprising, but it is easily understandable. Despite the vast array of medical technologies, metastatic cancer remains incurable and fatal. The armamentarium of medicine can delay death, but cannot prevent it. Thus, technologies in metastatic settings have only limited effectiveness. Far more important is making sure that people do not get cancer in the first place (prevention) and that cancer is caught early (screening), when it can be successfully treated."
From this perspective, emphatic calls for a "cure for cancer" highlight a bias in US medicine toward later-stage, often high-cost interventions, rather than toward prevention and early screening--interventions that may often happen largely outside the health care system, but that have the potential to save many more lives at much lower cost.

David H. Howard, Peter B. Bach, Ernst R. Berndt, and Rena M. Conti looked at "Pricing in the Market for Anticancer Drugs" in the Winter 2015 issue of the Journal of Economic Perspectives (29:1, pp. 139-62). As I've discussed on this blog, before a new anti-cancer drug is approved, various clinical trials and studies are done, and these studies provide an estimate of the median expected extension of life as a result of using the drugs. Then based on the market price of the drug when it is announced, it's straightforward to calculate the price of the drug per year of life gained. Their calculations show that back in 1995, new anti-cancer drugs reaching the market were costing about $54,000 to save a year of life. By 2014, the new drugs were costing about $170,000 to save a year of life. This is an increase in cost per year of life saved of roughly 10% per year.
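The cost-per-life-year calculation described above is straightforward; here is a sketch with hypothetical inputs (the $70,000 price and five-month survival gain are illustrative numbers of my own, not figures from Howard et al.):

```python
# Cost per year of life gained: treatment price divided by the median
# survival extension (converted from months to years).
def price_per_life_year(total_price, survival_gain_months):
    """Return the price per year of life gained."""
    return total_price / (survival_gain_months / 12)

# A hypothetical course of treatment costing $70,000 that extends
# median survival by five months:
cost = price_per_life_year(70_000, 5)
print(f"${cost:,.0f} per year of life gained")
```

The formula makes clear why modest survival gains drive the ratio up so quickly: a fixed price divided by a few months of benefit translates into a very large annualized figure, which is how launch prices in the range Howard et al. report can correspond to over $100,000 per life-year.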

As both Nixon and Obama can attest, calling for a "cure for cancer" with an analogy to putting a person on the moon has been political magic for 45 years now. But if the "cure for cancer" rhetoric creates the expectation of a magic pill that lets everyone go back to smoking cigarettes again, I fear that it is both missing the point and raising false hopes for cancer patients and their families. I'm pretty much always a supporter of additional research and development, and like everyone else, I hear anecdotes (which I cannot evaluate) about how some great new anti-cancer drugs are already in the research pipeline. But a focus on developing more extremely expensive anti-cancer drugs that often provide only very limited gains in average life expectancy (in a number of cases, only a few months) shouldn't be the primary approach here. At least for the near-term, and probably the medium-term too, the primary tools that can keep cancer mortality on a downward trend are more likely to be prevention and early detection, along with ongoing improvements in the health-effectiveness and cost-effectiveness of treatment, not a "moonshot" for a cure. 

Full disclosure: I've been Managing Editor of the Journal of Economic Perspectives since 1987, and so part of my paycheck came from working to publish the two articles mentioned here.