'Voting with Your Feet' and Metro-Area Livability

April 01, 1999
By Howard J. Wall

Most people are familiar with at least one of the publications that rank metro areas according to their livability. The best known of these are the Places Rated Almanac, which comes out with a new ranking every few years, and Money magazine's annual ranking of "The Best Places to Live in America." The rankings never fail to create controversy, eliciting gleeful cheers and chest-thumping from residents of high-ranked areas, and cries of bias and ignorance from residents of low-ranked ones.

The Places Rated and Money rankings are based on data for 300 or more metro areas, using variables thought to be important indicators of livability. These variables measure factors that include economic conditions, climate, school quality, health care, cost of living, public transportation, and arts and leisure facilities. The variables are then aggregated into a single number for each metro area, and these numbers are sorted to produce the livability ranking.
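
To make the procedure concrete, the sketch below (in Python) mimics the aggregate-and-sort step just described. The metro areas, factor values, and weights are invented placeholders; they are not the actual Places Rated or Money inputs or formulas.

```python
# Illustrative aggregate-and-sort livability ranking.
# The metro areas, factor values, and weights below are invented;
# they are not the actual Places Rated or Money data or formulas.

metros = {
    "Metro A": {"economy": 0.7, "climate": 0.9, "schools": 0.5, "cost_of_living": 0.4},
    "Metro B": {"economy": 0.6, "climate": 0.4, "schools": 0.8, "cost_of_living": 0.7},
    "Metro C": {"economy": 0.8, "climate": 0.6, "schools": 0.6, "cost_of_living": 0.5},
}

# A fixed set of weights -- an implicit assumption about relative importance.
weights = {"economy": 0.4, "climate": 0.2, "schools": 0.2, "cost_of_living": 0.2}

def index_score(factors):
    """Collapse one metro area's factor values into a single number."""
    return sum(weights[name] * value for name, value in factors.items())

# Sort by the aggregate score to produce the ranking (C, A, B for these numbers).
ranking = sorted(metros, key=lambda m: index_score(metros[m]), reverse=True)
for rank, metro in enumerate(ranking, start=1):
    print(rank, metro, round(index_score(metros[metro]), 3))
```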

Although the Places Rated and Money rankings consider the same general factors, their results are almost entirely uncorrelated. In the 1997 rankings, for example, there was no overlap whatsoever between Money's top 10 and Places Rated's top 10. In fact, only one of Money's top 10 made Places Rated's top 75, and only one of Places Rated's top 10 made Money's top 25. Why are the rankings so uncorrelated when they are based on the same factors?

Can Livability Be Measured?

The fact that the two rankings are so different, despite being based on the same factors, should make one wary. A look at the list of variables used to construct them should make one even more concerned. For example, does the typical American's list of desirable factors include graduate student enrollment at local universities, the number of opera houses and the number of professional sports teams, all of which are included in the Places Rated index? What about factors, such as natural beauty and the "niceness" of residents, which have been excluded from the Places Rated and Money indices because they cannot be quantified?

Even if the variables are acceptable to the typical person, the method used to aggregate them into a livability index is not likely to be useful. This is because any aggregation of the variables requires strong assumptions about their relative value in terms of livability. In other words, the constructor of the index must assume a particular utility function to transform the bundle of goods and services that a person consumes into a measurable level of satisfaction, or utility. This makes the ranking completely subjective because one person might put more weight on the availability of cultural amenities and public parks, whereas another person might care more about the quality of local schools and average commuting time.
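
Because the weights embody the assumed utility function, changing them can reorder the same underlying data. The hypothetical sketch below makes the point: two people who weight the same four factors differently end up with opposite rankings of the same two metro areas. All numbers are invented for illustration.

```python
# Two hypothetical residents weight the same four factors differently.
# All numbers are invented; they are not survey results.

metros = {
    "Metro A": {"culture": 0.9, "parks": 0.8, "schools": 0.4, "commute": 0.3},
    "Metro B": {"culture": 0.3, "parks": 0.4, "schools": 0.9, "commute": 0.8},
}

person_1 = {"culture": 0.4, "parks": 0.3, "schools": 0.2, "commute": 0.1}  # values amenities
person_2 = {"culture": 0.1, "parks": 0.1, "schools": 0.4, "commute": 0.4}  # values schools/commute

def rank(weights):
    """Rank the metros under one assumed set of weights (utility function)."""
    score = lambda m: sum(weights[k] * v for k, v in metros[m].items())
    return sorted(metros, key=score, reverse=True)

print("Person 1:", rank(person_1))  # ['Metro A', 'Metro B']
print("Person 2:", rank(person_2))  # ['Metro B', 'Metro A']
```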

The choice of a utility function is crucial, and it is where the two indices fundamentally differ. The constructors of the Places Rated ranking simply aggregate the variables without justifying the utility function they impose. In contrast, Money uses the results of a readership survey to determine the relative importance of each variable. Because of these different methods, the two indices will naturally differ. Neither, moreover, is likely to be an accurate reflection of the typical person's preferences. Instead, each will tend to reflect the preferences of its constructors, or of the particular subset of the population surveyed.1

Voting with Your Feet

Given the problems inherent in livability indices, does it make sense to rank places according to livability? There is an alternative method based on the principle of revealed preference that avoids many of these problems. This principle is based on the notion that people make rational consumption choices in order to attain the highest possible level of satisfaction or utility. Therefore, if two alternative bundles of goods are both affordable, the one chosen must provide the higher level of utility. The actual bundle chosen is revealed preferred to all other affordable bundles.

In terms of migration decisions, the principle of revealed preference says that a person's movement from one metro area to another reveals that she prefers the new metro area to her previous one. When people move from one place to another, they "vote with their feet" about the livability of the two metro areas.

Using the principle of revealed preference, the alternative livability ranking works backwards.2 It starts from the decisions that people actually make about where to live and uses the information these decisions provide to reveal people's preferences. This rational index collects the votes to provide a livability ranking based on the preferences of the population. In contrast to other livability indices, the rational index does not require the constructor to select the set of variables or to choose a utility function. Because of this, the rational index can be construed as more objective than the others.

Table 1

The 10 Most-Livable and Least-Livable Large Metro Areas: The Rational Index

Rational Rank   Metro Area               Rational Rank   Metro Area
      1         Las Vegas, Nev.                50         Chicago, Ill.
      2         Atlanta, Ga.                   51         Bergen-Passaic, N.J.
      3         Phoenix, Ariz.                 52         Hartford, Conn.
      4         Austin, Texas                  53         San Francisco, Calif.
      5         Raleigh-Durham, N.C.           54         Newark, N.J.
      6         West Palm Beach, Fla.          55         Orange County, Calif.
      7         Orlando, Fla.                  56         Miami, Fla.
      8         Fort Lauderdale, Fla.          57         San Jose, Calif.
      9         Portland, Ore.                 58         New York, N.Y.
     10         Charlotte, N.C.                59         Los Angeles, Calif.

NOTE: The rational index ranks metro areas according to the tendency of Americans to choose to migrate into them.

SOURCE: Author's calculations based on Table B-1, U.S. Census Bureau (1998).

Constructing the most comprehensive version of the rational index requires knowing the number of people who migrated from each metro area to every other metro area. This reveals which of the other areas are more livable and which are less livable. All of these pairwise comparisons could then be collected to construct a comprehensive ranking of the metro areas. Doing so, however, would require a tremendous amount of migration data, not to mention a lot of work. Fortunately, there is a shortcut that yields a roughly comparable ranking.
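
As a rough illustration of how such pairwise comparisons might be collected, the sketch below scores each metro area by the number of other areas from which it gains migrants on net, using invented flows. This is only one simple tallying rule; it is not the estimation procedure developed in Douglas and Wall (1993).

```python
# One simple, hypothetical way to tally pairwise "votes with feet":
# score each metro area by the number of other areas from which it gains
# migrants on net. This only illustrates collecting the comparisons; it is
# not the estimation procedure of Douglas and Wall (1993).

# Invented counts of movers, keyed by (origin, destination).
flows = {
    ("A", "B"): 120, ("B", "A"): 80,
    ("A", "C"): 60,  ("C", "A"): 90,
    ("B", "C"): 50,  ("C", "B"): 70,
}
metros = ["A", "B", "C"]

def pairwise_wins(metro):
    """Count the other metros that, on net, send migrants to `metro`."""
    return sum(
        1
        for other in metros
        if other != metro and flows[(other, metro)] > flows[(metro, other)]
    )

ranking = sorted(metros, key=pairwise_wins, reverse=True)
print([(m, pairwise_wins(m)) for m in ranking])  # [('B', 2), ('A', 1), ('C', 0)]
```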

If livability is determined by the "vote" of the typical person, the ranking can be constructed from metro areas' rates of net migration. For example, consider two metro areas, A and B. Assume that the net rate of in-migration from the rest of the United States into Metro Area A is higher than that for Metro Area B. This means that, on average, people in the United States prefer Metro Area A to Metro Area B. Under this shortcut rational ranking, metro areas are ranked according to their net domestic in-migration rates.

Real World Results

To illustrate the rational livability ranking, consider the 59 U.S. metro areas whose populations exceeded 1 million in 1997.3 First, take the total population change over the period 1990-97 and account for the parts of the change that are not due to migration from elsewhere in the country. To do this, simply subtract from the total population change the natural population change (births minus deaths) and the net number of international migrants. The net number of domestic migrants is what remains. Divide this number by the initial population to obtain the rate of domestic in-migration.
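
The calculation just described amounts to a few lines of arithmetic. The sketch below shows it with made-up figures for a hypothetical metro area; the actual inputs come from Table B-1 of U.S. Census Bureau (1998).

```python
# The shortcut calculation with made-up figures for a hypothetical metro area.
# Actual inputs would come from Table B-1 of U.S. Census Bureau (1998).

def domestic_migration_rate(pop_1990, pop_1997, births, deaths, net_intl):
    """Net domestic in-migration over 1990-97 as a share of the 1990 population."""
    total_change = pop_1997 - pop_1990
    natural_change = births - deaths          # births minus deaths
    net_domestic = total_change - natural_change - net_intl
    return net_domestic / pop_1990

# Hypothetical metro: population grows by 300,000, of which 100,000 is natural
# increase and 50,000 is net international migration.
rate = domestic_migration_rate(
    pop_1990=1_000_000, pop_1997=1_300_000,
    births=180_000, deaths=80_000, net_intl=50_000,
)
print(f"Net domestic in-migration rate: {rate:.1%}")  # 15.0%
```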

According to these calculations, the most-livable metro area in the United States for the period 1990-97 was Las Vegas, followed by Atlanta and Phoenix. The net domestic migration rate for Las Vegas was an astounding 38 percent, which is more than double the rates for Atlanta and Phoenix. Except for Portland, all of the 10 most-livable cities are in the Sun Belt. At the other extreme, Los Angeles and New York are ranked as the two least-livable metro areas, as revealed by the large number of people choosing to move elsewhere. Four of the bottom 10 are in California, and three are in the New York/Northern New Jersey combined metro area.4

The 1997 Places Rated and Money indices can be used to rank these 59 large metro areas. Only two of Places Rated's top 10 are also in the rational top 10, although four of Money's top 10 are. Interestingly, Las Vegas was ranked dead last by Places Rated. Also, Places Rated's top metro area, Orange County, was fifth from the bottom in the rational ranking.

The rational ranking had slightly more in common with Money's ranking than with Places Rated's. The rational top 10 had an average rank of 31 in Places Rated and 19 in Money. The rational's bottom 10 had an average rank of 33 in Places Rated and 27 in Money.

This illustration shows that even the simplest economic principle can have powerful applications. Here, the principle of revealed preference provided all of the information needed to show which places people think are the most livable. Once the ranking of metro areas is known, the next step is to find out which factors prevail in the high-ranked areas. Hopefully, knowledge of these factors can then guide governments to make informed policy decisions.

Eran Segev provided research assistance.

Endnotes

  1. Starting with its 1998 ranking, Money began using a representative sample of the population. Also, its web site (www.pathfinder.com/money/bestplaces/) allows visitors to produce personalized rankings. Unfortunately, though, beginning in 1998 the magazine ranks metro areas within regions only, and no longer provides a single national ranking. [back to text]
  2. The rational index is outlined in detail in Douglas and Wall (1993) and Douglas (1997). [back to text]
  3. The data used to rank the livability of these metro areas from 1990-97 are in Table B-1 of U.S. Census Bureau (1998). [back to text]
  4. The ranking of all 59 metro areas is available via The Regional Economist's web site (www.stls.frb.org/publication/re/1999/b/). [back to text]

References

Douglas, Stratford, and Howard J. Wall. " 'Voting with your Feet' and the Quality of Life Index," Economics Letters (42: 1993), pp. 229-36.

Douglas, Stratford. "Estimating Relative Standard of Living in the United States Using Cross-Migration Data," Journal of Regional Science (August 1997), pp. 411-36.

Fried, Carla, Jeanhee Kim, and Amanda Walmac. "Our 11th Annual Survey of the Best Places to Live in America," Money (July 1997).

Savageau, David, and Geoffrey Loftus. Places Rated Almanac, Fifth Edition, Macmillan (1997).

U.S. Bureau of the Census. State and Metropolitan Area Data Book, 1997-98, Fifth Edition (1998).

Views expressed in Regional Economist are not necessarily those of the St. Louis Fed or Federal Reserve System.

