
Something to consider before restructuring the FCC . . .

Wednesday, January 18th, 2017

The Chief Economist of the Federal Communications Commission is a temporary position—in recent years, a term of a year or so—typically bestowed on economists with impressive credentials and experience related to media or telecommunications. Having worked at the FCC long enough to overlap with several chief economists, I noticed an interesting pattern. Many join the FCC full of hope—capable as they are—that they will reform the agency to better integrate “economic thinking” into regular policy decisions, but, to quote a former colleague, “leave the agency with their sense of humor intact.”

I have heard many a former FCC economist rail against the lack of economic thinking at the FCC, with some former chief economists going very much on the record to do so (for instance, see here and here). Others (not necessarily affiliated with the FCC) have gone as far as to point out that much of what the FCC does or attempts to do is duplicative of the competition policies of the Department of Justice and Federal Trade Commission. These latter points are not a secret. The FCC publicly says so in every major transaction that it approves.

For example, in a transaction that I have had the pleasure to separately write about with one of the FCC’s former chief economists and a number of other colleagues, AT&T’s acquisition of former competitor Leap Wireless (see here and here), the FCC wrote (see ¶ 15):

Our competitive analysis, which forms an important part of the public interest evaluation, is informed by, but not limited to, traditional antitrust principles. The Commission and the Department of Justice (“DOJ”) each have independent authority to examine the competitive impacts of proposed communications mergers and transactions involving transfers of Commission licenses.

This standard language can be found in the “Standard of Review” section in any major FCC transaction order. The difference is that whereas the DOJ reviews telecom mergers pursuant to Section 7 of the Clayton Act, the FCC’s evaluation encompasses the “broad aims of the Communications Act.” From a competition analysis standpoint, a major difference is that if the DOJ wishes to stop a merger, “it must demonstrate to a court that the merger may substantially lessen competition or tend to create a monopoly.” In contrast, parties subject to FCC review have the burden of showing that the transaction, among other things, will enhance existing competition.

Such duplication and the alleged lack of economics at the FCC have led a number of individuals to suggest that the FCC should be restructured and some of its powers curtailed, particularly with respect to matters that are separately within the purview of the antitrust agencies. Most recently, a number of individuals on Donald Trump’s FCC transition team have written (read here) that Congress “should consider merging the FCC’s competition and consumer protection functions with those of the Federal Trade Commission, thus combining the FCC’s industry expertise and capabilities with the generic statutory authority of the FTC.”

I do not completely disagree—I would be remiss if I did not admit that the transition team makes a number of highly valid points in its comments on “Modernizing the Communications Act.” However, as Harold Feld, senior VP of Public Knowledge, recently pointed out, efforts to restructure the FCC present a relatively “radical” undertaking. My main motivation in writing this post is to underscore Feld’s point by reminding readers of a recent court ruling.

In 2007—well before its acquisition of DIRECTV and its offer of unlimited data to customers who bundle its AT&T and DIRECTV services—AT&T offered mobile wireless customers unlimited data plans. AT&T later phased out these plans except for customers who were “grandfathered”—those who signed up for an unlimited plan while it was available and never switched to an alternative option. In October 2011, perhaps worried about the implications of unlimited data in a data-hungry world, AT&T reduced speeds for grandfathered customers on legacy plans whose monthly data usage surpassed a certain threshold—a practice that the FTC refers to as data throttling.

The FTC filed a complaint against AT&T under Section 5 of the FTC Act, alleging that customers who had been throttled by AT&T experienced drastically reduced service, but were not adequately informed of AT&T’s throttling program. As part of its complaint, the FTC claimed that AT&T’s actions violated the FTC Act and sought a permanent injunction on throttling and other equitable relief as deemed necessary by the Court.

Now here is where things get interesting: AT&T moved to dismiss on the basis that it is exempt as a “common carrier.” That is, AT&T claimed that the appropriate act that sets out jurisdiction over its actions is the Communications Act, not the FTC Act. Moreover, AT&T’s position was that an entity with common carrier status cannot be regulated under the section under which the FTC brought this case (§ 45(a)), even when it is providing services other than common carriage services. This led one of my former colleagues to joke that, by this logic, if AT&T were to buy General Motors, it could use false advertising to sell cars and be exempt from FTC scrutiny.

The District Court for the Northern District of California happened to consider this matter after the FCC reclassified mobile data from a non-common carriage service to a common carriage service (in its Open Internet Order), but before the reclassification had gone into effect. The Court concluded that contrary to AT&T’s arguments, “the common carrier exception applies only where the entity has the status of common carrier and is actually engaging in common carrier activity.” Moreover, it denied AT&T’s motion because AT&T’s mobile data service was not regulated as common carrier activity by the FCC when the FTC suit was filed. However, in August 2016, this decision was reversed on appeal by the U.S. Court of Appeals for the Ninth Circuit (see here), which ruled that the common carrier exemption was “status based,” not “activity based,” as the lower court had determined.

Unfortunately, this decision leaves quite a regulatory void. To my knowledge, the FCC does not have a division of Common Carrier Consumer Protection (CCCP), and I doubt that any reasonable individual familiar with FCC practice would interpret the Open Internet Order as an FCC power grab intended to duplicate or supplant FTC consumer protection authority. Indeed, the FCC articulated quite the reverse position by recently filing an Amicus Curiae Brief in support of the FTC’s October 2016 Petition to the Ninth Circuit to have the case reheard by the full court.

So what’s my point? Well, first, the agencies are not intentionally attempting to step on each other’s toes. By and large, the FCC understands the role of the FTC and the DOJ and vice versa. Were AT&T to acquire General Motors, it is highly probable that, given the state of regulation as it stands, employees at the FCC would find it preferable if the FTC continued to oversee General Motors’ advertising practices. A related caveat applies to the FCC’s competition analysis. Whereas the analysis may be similar to that of the antitrust agencies, it is motivated at least in part by the FCC’s unique mission to establish or maintain universal service, which can lead to different decisions in the same case (for instance, whereas the DOJ did not challenge AT&T’s acquisition of Leap Wireless, the FCC imposed a number of conditions to safeguard against loss of service).

Of course, one could argue that confusion stemming from the above case might have been avoided had the FCC never had authority over common carriage in the first place. But if making that argument, one must be cognizant of the fact that although the FTC Act predates the Communications Act of 1934, prior to 1934, it was the Interstate Commerce Act, not the FTC Act, that laid out regulations for common carriers.  In other words, legislative attempts to rewrite the Communications Act will necessitate changes in various other pieces of legislation in order to ensure that there are no voids in crucial protections for competition and consumers.  Thus, to bolster Harold Feld’s points: those wishing to restructure the FCC need to do so with full awareness of what the FCC actually does and does not do; they must take heed of the subtleties underlying the legislation that lays the groundwork for the various agencies; and they should be mindful of the potential for interpretation and reinterpretation under the common law aspects of our legal system.



Undesirable Incentives in the Incentive Auction (w. Emily Schaal)

Saturday, December 10th, 2016

Following the 2016 U.S. Presidential election, in a letter to FCC Chairman Wheeler, Republicans urged the FCC to avoid “controversial items” during the presidential transition.  Shortly thereafter, the Commission largely scrubbed its Nov. 17 agenda, resulting in perhaps the shortest Open Commission Meeting in recent history.  Start at 9:30 here for some stern words from Chairman Wheeler in response.  Viewers are urged to pay particular attention to an important history and civics lesson from the Chairman in response to a question at 17:20 (though this should not be taken to indicate our agreement with everything that the Chairman says).

So what is the Commission to do prior to the transition?  According to the Senate Committee on Commerce, Science, and Transportation, the FCC can “focus its energies” on “many consensus and administrative matters.”  Presumably, this includes the FCC’s ongoing incentive auction, now set for its fourth round of bidding, and subject to its own controversies, with dissenting votes on major items released in 2014 (auction rules and policies regarding mobile spectrum) by Republican Commissioners concerned about FCC bidding restrictions and “market manipulation,” along with a statement by a Democratic Commissioner saying that FCC bidding restrictions did not go far enough.

The Incentive Auction

Initially described in the 2010 National Broadband Plan, the Incentive Auction is one of the ways in which the FCC is attempting to meet modern day demands for video and broadband services.  The FCC describes the auction for a broad audience in some detail here and here.  In short, the auction was intended to repurpose up to 126 megahertz of TV band spectrum, primarily in the 600 MHz band, for “flexible use” such as that relied on by mobile wireless providers to offer wireless broadband.  The auction consists of two separate but interdependent auctions—a reverse auction used to determine the price at which broadcasters will voluntarily relinquish their spectrum usage rights and a forward auction used to determine the price companies are willing to pay for the flexible use wireless licenses.

Repackaging

What makes this auction particularly complicated is a “repackaging” process that connects the reverse and forward auctions.  The current licenses held by broadcast television stations are not necessarily suitable for the type of contiguous blocks of spectrum that are necessary to set up and expand regional or nationwide mobile wireless networks.  As such, repackaging involves reorganizing and assigning channels to the broadcast television stations that remain operational post-auction in order to clear contiguous spectrum for flexible use.

The economics and technical complexities underlying this auction are well described in a recent working paper entitled “Ownership Concentration and Strategic Supply Reduction,” by Ulrich Doraszelski, Katja Seim, Michael Sinkinson, and Peichun Wang (henceforth Doraszelski et al. 2016) now making its way through major economic conferences (Searle, AEA).  As the authors point out with regard to the repackaging process (p. 6):

[It] is visually similar to defragmenting a hard drive on a personal computer.  However, it is far more complex because many pairs of TV stations cannot be located on adjacent channels, even across markets, without causing unacceptable levels of interference.  As a result, the repackaging process is global in nature in that it ties together all local media markets.

With regard to the reverse auction, Doraszelski et al. (2016) note that (p. 7):

[T]he auction uses a descending clock to determine the cost of acquiring a set of licenses that would allow the repacking process to meet the clearing target.  There are many different feasible sets of licenses that could be surrendered to meet a particular clearing target given the complex interference patterns between stations; the reverse auction is intended to identify the low-cost set . . . if any remaining license can no longer be repacked, the price it sees is “frozen” and it is provisionally winning, in that the FCC will accept its bid to surrender its license.

The idea is that the FCC should minimize the total cost of licenses sold on the reverse auction while making sure that its nationwide clearing target is satisfied.  As Doraszelski et al. (2016) note, the incentive auction has various desirable properties.  Of particular note is strategy proofness (see Milgrom and Segal 2015), whereby it is (weakly) optimal for broadcast license owners to truthfully reveal each station’s value as a going concern in the event that TV licenses are separately owned.

Strategic Supply Reduction

However, the authors’ main concern in their working paper is that in spite of strategy proofness, the auction rules do not prevent firms that own multiple broadcast TV licenses from engaging in strategic supply reduction.  As Doraszelski et al. (2016) show, this can lead to some fairly controversial consequences in the reverse auction that might compound any issues that could arise (e.g., decreased revenue) due to bidding restrictions in the forward auction.  Specifically, the authors find that multi-license holders are able to earn large rents from a supply reduction strategy whereby they withhold some of their licenses from the auction to drive up the closing price for the remaining licenses they own.

The incentive auction aside, strategic supply reduction is a fairly common phenomenon in standard economic models of competition.  Consider for instance a typical model of differentiated product competition (or the Cournot model of homogeneous product competition).  In each of these frameworks, firms’ best response strategies lead them to set prices or quantities such that the quantity sold is below the “perfectly competitive” level and prices are above marginal cost—thus, firms individually find it optimal to keep quantity low to make themselves (and consequently, their competitors) better off than under perfect competition.
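
For readers who prefer numbers to prose, the following minimal sketch—my own illustration, not anything from Doraszelski et al. (2016)—compares symmetric Cournot output with the perfectly competitive benchmark under linear demand, making the “quantity below the competitive level, price above marginal cost” point concrete.  The demand and cost parameters are arbitrary assumptions chosen for illustration.

```python
# A stylized illustration (not from the paper): symmetric Cournot competition
# with linear inverse demand P = a - b*Q and constant marginal cost c.
# Each firm's best response gives q_i = (a - c) / (b * (n + 1)).

def cournot_vs_competitive(a=100.0, b=1.0, c=20.0, n=2):
    q_firm = (a - c) / (b * (n + 1))      # equilibrium output per firm
    q_total = n * q_firm                  # total Cournot output
    p_cournot = a - b * q_total           # resulting market price
    q_competitive = (a - c) / b           # perfect competition: price = c
    return q_total, p_cournot, q_competitive

q_total, price, q_comp = cournot_vs_competitive()
print(f"Cournot output {q_total:.1f} < competitive output {q_comp:.1f}")
print(f"Cournot price {price:.1f} > marginal cost 20.0")
# Output: Cournot output 53.3 < competitive output 80.0
#         Cournot price 46.7 > marginal cost 20.0
```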

In the incentive auction, a multi-license holder that withdraws a license from the auction could similarly increase the price for the remaining broadcast TV licenses that it owns (as well as the price of other broadcast TV license owners).  However, in contrast to the aforementioned economic models, in which firms effectively reduce supply by underproducing, a firm engaging in strategic supply reduction is left with a TV station that it might have otherwise sold in the auction.  The firm is OK with this if the gain from raising the closing price for other stations exceeds the loss from continuing to own a TV station instead of selling it into the auction.


Example 1

Consider the following highly stylized example of strategic supply reduction: There are two broadcasters, B1 and B2, in a market where the FCC needs to clear three stations (the reverse auction clearing target) and there are three different license “qualities,” A, B, and C, for which broadcasters have different reservation prices and holdings as follows:

Quality   B1 Quantity   B2 Quantity   Reservation Price
A         1             2             10
B         1             0             6
C         2             1             2

Suppose that the auctioneer does not distinguish between differences in licenses (this is a tremendous simplification relative to the real world).  Consider a reverse descending clock auction in which the auctioneer lowers its price in decrements of $2 starting at $10 (so $10 at time 1, $8 at time 2, and so on until the auction ends), and ceases to lower its price as soon as it realizes that any additional license withdrawals would not permit it to clear its desired number of stations (as would, for instance, happen once the quality A and B licenses have dropped out).  Suppose also that a broadcaster playing “truthfully” that is indifferent between selling a license at the current price and dropping out remains in the auction (so that, for instance, A quality licenses are not withdrawn until the price falls from $10 to $8).

In a reverse descending clock auction in which broadcasters play “naïve” strategies, each broadcaster would offer all of its licenses and drop them from consideration as the price decreases over time. However, there is another, “strategic” option, in which B1 withholds a quality C license from the auction (B1 can do so either by overstating its reservation price for this license—say, claiming that it is $10—or by not including it in the auction to begin with):

                      Naive                       Strategic
                      B1 Offered    B2 Offered    B1 Offered    B1 Withheld    B2 Offered
A                     1             2             1             0              2
B                     1             0             1             0              0
C                     2             1             1             1              1
Licenses Auctioned    7                           6

The results of the naïve bidding versus the strategic bidding auction are quite different.  In the naïve bidding auction, the auctioneer can continue to lower its price down to $4, at which point B1 pulls out its B quality license and the auction is frozen (further drop-outs would not permit the desired number of licenses to be cleared).  Each broadcaster earns $4 for each quality C license sold, with B1 earning a profit of 2×($4-$2)=$4.

Suppose instead that broadcaster B1 withheld one quality C license.  Then the auction would stop at $8 (because only three licenses are left as soon as the A quality licenses are withdrawn).  Each broadcaster now earns $8 per license sold, with B1 earning a profit of ($8-$6)+($8-$2)=$8.  Moreover, B2 benefits from B1’s withholding, earning a profit of $6 instead of the $2 it earns in the naïve bidding case.  The astute reader will notice that B1 could have done even better by withholding its B quality license instead!  This is a result of our assumption that the auctioneer treats all cleared licenses equally, which is not true in the actual incentive auction.  Finally, notice that even though B2 owns three licenses in this example, strategic withholding could not have helped it more than B1’s strategic withholding did unless it colluded with B1 (this would entail B2 withholding its quality A licenses and B1 withholding both quality C licenses).
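
The walkthrough above can also be reproduced mechanically.  The sketch below is a deliberately simplified simulation of this stylized example only—the clock rule, the freezing rule, and the profit accounting are assumptions drawn from the example, not the FCC’s actual auction software.

```python
# Simplified simulation of Example 1 (not the FCC's actual reverse auction).
# Each license is (owner, reservation_price); the clearing target is 3 stations.

TARGET = 3
START_PRICE, STEP = 10, 2

def run_reverse_clock(offered):
    """Descending clock: a truthful license stays in while price >= reservation;
    the clock freezes once further drop-outs would leave too few stations."""
    price = START_PRICE
    active = list(offered)
    while True:
        active = [(owner, r) for owner, r in active if r <= price]
        if len(active) <= TARGET:   # frozen: remaining licenses sell at `price`
            return price, active
        price -= STEP

# Naive bidding: both broadcasters offer every license they own.
naive = [("B1", 10), ("B1", 6), ("B1", 2), ("B1", 2),
         ("B2", 10), ("B2", 10), ("B2", 2)]
# Strategic bidding: B1 withholds one quality C license (reservation price $2).
strategic = list(naive)
strategic.remove(("B1", 2))

for label, offers in [("naive", naive), ("strategic", strategic)]:
    price, sold = run_reverse_clock(offers)
    profits = {}
    for owner, reserve in sold:
        profits[owner] = profits.get(owner, 0) + (price - reserve)
    print(label, "| clearing price:", price, "| profits:", profits)

# Output: naive | clearing price: 4 | profits: {'B1': 4, 'B2': 2}
#         strategic | clearing price: 8 | profits: {'B1': 8, 'B2': 6}
```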


Evidence of Strategic Supply Reduction

Doraszelski et al. (2016) explain that certain types of geographic markets and broadcast licenses are more suitable for strategic supply reduction.  They write:

First, ideal markets from a supply reduction perspective are [those] in which the FCC intends to acquire a positive number of broadcast licenses and that have relatively steep supply curves around the expected demand level.  This maximizes the impact of withholding a license from the auction on the closing price . . .  Second, suitable groups of licenses consist of sets of relatively low value licenses, some with higher broadcast volume to sell into the auction and some with lower broadcast volume to withhold.

What is perhaps disconcerting is that Doraszelski et al. (2016) found evidence indicating that certain private equity firms spent millions acquiring TV licenses, primarily from failing or insolvent stations, often covering the same market and in most instances on the peripheries of major markets along the U.S. coasts.  Consistent with their model, the authors found that many of the stations acquired had high broadcast volume and low valuations.

Upon performing a more in-depth analysis that attempts to simulate the reverse auction using ownership data on the universe of broadcast TV stations together with FCC data files related to repacking—the rather interesting details of which we would encourage our audience to read—Doraszelski et al. (2016) conclude that strategic supply reduction is highly profitable.  In particular, using fairly conservative tractability assumptions, the authors found that simulated total payouts increased from $17 billion under naïve bidding to $20.7 billion with strategic supply reduction, with much of that gain occurring in markets in which private equity firms were active.


Example 2

Suppose that in our example above the quality C stations held by broadcaster B1 were initially under the control of two separate entities, call them B3 and B4.  Then, if B1, B2, B3, and B4 were to participate in the auction, strategic withholding on the part of B1 would no longer benefit it.  However, B1 could make itself better off by purchasing one, or potentially both, of the individual C quality licenses held by B3 and B4.  Consider the scenario where B1 offers to buy B3’s license.  B3 is willing to sell at $4 or more—the amount it would earn under naïve bidding in the auction—and Bertrand-style competition between B3 and B4 will keep B1 from offering more than that.  With a single C quality license, B1 can proceed to withhold either its B or C quality license, raise the price to $8, and benefit both itself and the other broadcasters who make a sale in the auction.


This result, whether realized by the FCC ex ante or not, is problematic for several reasons.  First, it raises the prospect that revenues raised in the forward auction will not be sufficient to meet payout requirements in the reverse auction.  As it stands, this has already occurred three times, with the FCC having lowered its clearance target from the initial 126 megahertz to 84 megahertz; we caution, though, that the FCC is currently not permitted to release data regarding the prices at which different broadcasters drop out of the auction, so we cannot verify whether final prices in earlier stages of the reverse auction were affected by strategic supply reduction.  Second, as is the case with standard oligopoly models, strategic supply reduction is beneficial for sellers, but not so for buyers or consumers.

Third, strategic supply reduction by private equity firms raises questions about the proper role and regulation of such firms.  The existence of such firms is generally justified by their role in providing liquidity to asset markets.  However, strategic supply reduction seems to contradict this role, particularly if withheld stations are not put to good use—something Doraszelski et al. (2016) do not deliberate on.  Moreover, strategic supply reduction relies on what antitrust agencies often term unilateral effects—that is, supply reduction is individually optimal and does not rely on explicit or tacit collusion.  Yet whereas the antitrust laws are intended to deal with cases of monopolization and collusion, it does not seem to us that they can easily mitigate strategic supply reduction.

Doraszelski et al. (2016) propose a partial remedy that does not rely on the antitrust laws: require multi-license owners to withdraw licenses in order of broadcast volume, from highest to lowest.  Their simulations show that this leads to a substantial reduction in payouts from strategic bidding (and a glance at Example 1 suggests that it would be effective in preventing strategic supply reduction there as well).  Although this suggestion has unfortunately come too late for the FCC’s Incentive Auction, we hope (as surely do the authors) that it will inform future auctions abroad hoping to learn from the U.S. experience.

This post was written in collaboration with Emily Schaal, a student at The College of William and Mary who is pursuing work in mathematics and economics.  Emily and I previously worked together at the Federal Communications Commission, where she provided invaluable assistance to a team of wireless economists.  



Trends in ISP Internet and Video Subscribership

Wednesday, August 24th, 2016

A number of colleagues and I recently completed work on a large grant proposal, and as is typical with grant proposals and research more broadly, a lot of worthwhile research that went in did not survive the final cut.  In this case, one of the core sources of data that motivated the identification strategy used in our proposal stemmed from Internet Service Provider (ISP) data on Internet and video subscribers.  Tables 1 and 2 below, which we did not ultimately submit, display these data for residential and non-enterprise business customers of major publicly traded local exchange carriers (LECs) and cable companies for, respectively, Internet and video subscriptions.


Table 1: Internet Subscribers for Major Public ISPs

ISP                2010         2011         2012         2013     2014     2015
Cable ISPs
Cable One          Unavailable  Unavailable  Unavailable  473      489      501
Cablevision        2,653        2,701        2,763        2,780    2,760    2,809
Charter            3,385        3,655        3,978        4,640    5,075    5,572
Comcast            16,985       18,144       19,367       20,685   21,962   23,329
Mediacom           379          383          410          431      449      480
TWC                9,803        10,344       11,395       11,606   12,253   13,313
Local Exchange Carrier ISPs
AT&T               16,309       16,427       16,390       16,425   16,028   15,778
CenturyLink        2,349        5,655        5,851        5,991    6,082    6,048
Cincinnati Bell    256          257          259          268      270      287
EarthLink          2,029        1,636        1,350        1,139    976      821
Frontier           1,719        1,764        1,754        1,867    2,360    2,462
Verizon            8,392        8,670        8,795        9,015    9,205    9,228
Windstream         1,567        1,676        1,645        1,469    1,399    1,333

Notes: All subscriber numbers in thousands.  Data obtained from 2010-2015 SEC Annual Reports (10-K) for each firm.


Table 2: Video Subscribers for Major Public ISPs

ISP                2010         2011         2012         2013     2014     2015
Cable ISPs
Cable One          Unavailable  Unavailable  Unavailable  539      451      364
Cablevision        3,008        2,947        2,893        2,813    2,681    2,594
Charter            4,520        4,314        4,158        4,342    4,419    4,430
Comcast            22,790       22,331       22,844       22,577   22,383   22,347
Mediacom           530          473          442          417      390      375
TWC                12,422       12,061       12,218       11,393   10,992   11,035
Local Exchange Carrier ISPs
AT&T               2,987        3,791        4,536        5,460    5,943    5,614
CenturyLink        Unavailable  65           106          175      242      285
Cincinnati Bell    28           40           55           74       91       114
EarthLink          0            0            0            0        0        0
Frontier           310          225          347          385      582      554
Verizon            3,472        4,173        4,726        5,262    5,649    5,827
Windstream         427          441          426          402      385      359

Notes: All subscriber numbers in thousands.  For LEC ISPs, subscriber numbers generally do not include affiliated video subscription to satellite video programming.  Data obtained from 2010-2015 SEC Annual Reports (10-K) for each firm.

Casual observation of Table 1 shows that the number of Internet subscribers has continued to grow between 2010 and 2015 for most ISPs, whether cable or LEC.  In contrast, casual observation of Table 2 shows that the number of video subscribers has declined for most cable companies, but grown for most LECs over this time-frame, though LEC video subscribership remained substantially below that of the cable companies.

The general trend in Table 1 will not be surprising to Internet researchers or people who have not been living under a rock.  The Internet has been kind of a big deal the last few years.  For example, it has fostered business innovation (Brynjolfsson and Saunders 2010; Cusumano and Goeldi 2013; Evans and Schmalensee 2016; Parker, Van Alstyne, and Choudary 2016), economic growth (Czernich et al. 2011; Greenstein and McDevitt 2009; Kolko 2012), my ability to blog, and your ability to consume the items in the hyperlinks above.

The trends in Table 2 are less well known outside the world of Internet research and business practice and are at least in part attributable to historical developments involving the Internet. As described by Greg Rosston (2009), LECs initially got into the business of high-speed broadband to improve upon their previously offered dial-up Internet services—they were not initially in the multichannel video programming distribution (MVPD) market.  In contrast, the cable companies became ISPs after it became apparent that coaxial cables used to transmit cable television signals could also be used for high-speed broadband.

Thus, whereas cable companies could use their networks to offer subscribers video and Internet bundles, many LECs have had to partner with satellite video programming distributors or resell competitors’ services to be able to advertise a bundled service.  Eventually, some LECs acquired their own video customers, either through purchases of smaller cable competitors in certain areas or by relying on Internet Protocol television (IPTV)—either through construction of fiber networks that deliver service to the home, as was the case with Verizon, or by doing whatever it is that AT&T does.  This explains the growth of LEC video customers, whereas competition from LECs, video on demand, and mobile wireless service providers should at least partly explain the decline in cable video subscribership.

To put these trends into perspective, I have included one additional table (Table 3), which displays the ratios of video to Internet subscribers for the ISPs above.  As the table makes evident, the ratios declined for most cable companies and increased for most LECs between 2010 and 2015.  If I had to make an educated guess, I would say the cable company trend will continue in the coming years, but I am less certain that the trend on the LEC side is sustainable as video on demand and mobile wireless continue to eat into the traditional video market.


Table 3: Ratio of Video to Internet Subscribers for Major Public ISPs

ISP                2010         2011         2012         2013     2014     2015
Cable ISPs
Cable One          Unavailable  Unavailable  Unavailable  1.14     0.92     0.73
Cablevision        1.13         1.09         1.05         1.01     0.97     0.92
Charter            1.34         1.18         1.05         0.94     0.87     0.80
Comcast            1.34         1.23         1.18         1.09     1.02     0.96
Mediacom           1.40         1.23         1.08         0.97     0.87     0.78
TWC                1.27         1.17         1.07         0.98     0.90     0.83
Local Exchange Carrier ISPs
AT&T               0.18         0.23         0.28         0.33     0.37     0.36
CenturyLink        Unavailable  0.01         0.02         0.03     0.04     0.05
Cincinnati Bell    0.11         0.15         0.21         0.28     0.34     0.40
EarthLink          0.00         0.00         0.00         0.00     0.00     0.00
Frontier           0.18         0.13         0.20         0.21     0.25     0.23
Verizon            0.41         0.48         0.54         0.58     0.61     0.63
Windstream         0.27         0.26         0.26         0.27     0.28     0.27

Notes: Ratios represent the fraction of residential and non-enterprise business customers who subscribe to a video service relative to those who subscribe to high-speed Internet.  For LEC ISPs, ratios generally do not include affiliated video subscription to satellite video programming.
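
For anyone who wants to reproduce or extend Table 3, the ratios are simply the Table 2 entries divided by the corresponding Table 1 entries.  Below is a minimal sketch, with just two illustrative rows hard-coded rather than the full dataset.

```python
# A minimal sketch of how Table 3 is derived: divide each ISP's video
# subscribers (Table 2) by its Internet subscribers (Table 1), year by year.
# Only two illustrative rows are hard-coded here; the full tables are above.

internet = {  # thousands of subscribers, 2010-2015 (from Table 1)
    "Comcast": [16985, 18144, 19367, 20685, 21962, 23329],
    "Verizon": [8392, 8670, 8795, 9015, 9205, 9228],
}
video = {     # thousands of subscribers, 2010-2015 (from Table 2)
    "Comcast": [22790, 22331, 22844, 22577, 22383, 22347],
    "Verizon": [3472, 4173, 4726, 5262, 5649, 5827],
}

for isp in internet:
    ratios = [round(v / i, 2) for v, i in zip(video[isp], internet[isp])]
    print(isp, ratios)

# Comcast [1.34, 1.23, 1.18, 1.09, 1.02, 0.96]  -- declining (cable pattern)
# Verizon [0.41, 0.48, 0.54, 0.58, 0.61, 0.63]  -- rising (LEC pattern)
```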

If you want to reuse or make fancy graphs out of the data located in this post, please attribute the data to Aleksandr Yankelevich, Quello Center, Michigan State University.



An Abridged History of Open Internet Regulation and Its Policy Implications (w. Kendall Koning)

Friday, June 17th, 2016

On June 14, 2016, the United States Court of Appeals for the District of Columbia Circuit (D.C. Circuit) upheld the FCC’s 2015 network neutrality regulations, soundly denying myriad legal challenges brought by the telecommunications industry (U.S. Telecomm. Ass’n v. FCC 2016).  Thus, unless the Supreme Court says otherwise, Congress rewrites the rules, or INSERT TRENDING CELEBRITY NAME truly breaks the Internet, we can expect to receive our lawful content without concerns that it will be throttled or that the content provider paid a termination fee.  How did we get here?  As my colleague Kendall Koning, a telecommunications attorney and Ph.D. candidate in the Department of Media and Information at Michigan State, and I lay out in this blog post outlining the history of net neutrality regulation, it has been a long road.

A short Quello Center Working Paper covering substantially the contents of this blog post is available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2797366

The most recent D.C. Circuit case represented the third time that FCC network neutrality rules had been before that court, the first two having been struck down on largely procedural grounds.  The FCC’s 2015 Open Internet Order remedied these flaws by formally grounding the rules in Title II of the Communications Act (47 U.S.C. § 201 et seq. 2016) while simultaneously exercising a separate forbearance authority to exempt ISPs from some of the more restrictive rules left over from the PSTN era.

The U.S. Telecommunications Association (USTelecom), a trade group representing the nation’s broadband service providers, along with various other petitioners, had challenged the FCC’s Order on a number of grounds.  USTelecom’s central challenge echoed earlier arguments that ISPs do not really offer telecommunications, i.e., the ability to communicate with third parties without ISPs altering form and content, but rather an integrated information service, in which ISP servers exercise control over the form and content of information transmitted over the network.  As explained below, this perspective was a historical artifact from the era of America Online and dial-up ISPs, but it had been used successfully at the start of the broadband era.  In a stinging rejection of ISP arguments, the D.C. Circuit not only found that the FCC’s reclassification of Internet access as telecommunications was reasonable and within the bounds of the FCC’s discretionary authority but also offered a strong endorsement of this perspective (U.S. Telecomm. Ass’n v. FCC supra at 25-26):

That consumers focus on transmission to the exclusion of add-on applications is hardly controversial. Even the most limited examination of contemporary broadband usage reveals that consumers rely on the service primarily to access third-party content . . . Indeed, given the tremendous impact third-party internet content has had on our society, it would be hard to deny its dominance in the broadband experience. Over the past two decades, this content has transformed nearly every aspect of our lives, from profound actions like choosing a leader, building a career, and falling in love to more quotidian ones like hailing a cab and watching a movie. The same assuredly cannot be said for broadband providers’ own add-on applications.

The Rules, What are They Good For?

At present, the FCC states that its current Open Internet rules “protect and maintain open, uninhibited access to legal online content without broadband Internet access providers being allowed to block, impair, or establish fast/slow lanes to lawful content.”  In particular, the present rules make clear the following three conditions, each of which is subject to a reasonable network management stipulation (FCC 2015 ¶¶ 15-18):

  1. No Blocking: A person engaged in the provision of broadband Internet access service . . . shall not block lawful content, applications, services, or non-harmful devices . . . .
  2. No Throttling: A person engaged in the provision of broadband Internet access service . . . shall not impair or degrade lawful Internet traffic on the basis of Internet content . . . .
  3. No Paid Prioritization: A person engaged in the provision of broadband Internet access service . . . shall not engage in paid prioritization . . . [—the] management of a broadband provider’s network to directly or indirectly favor some traffic over other traffic . . . either (a) in exchange for consideration (monetary or otherwise) from a third party, or (b) to benefit an affiliated entity.

These rules are, to a degree, a modern version of common carrier non-discrimination rules adapted for the Internet.  47 U.S.C. §201(b) requires that “all charges, practices, classifications, and regulations for . . . communication service shall be just and reasonable.”  Whereas in the United States these statutes date back to the Communications Act of 1934, common carrier rules more generally have quite a long history, with precursors going as far back as the Roman Empire (Noam 1994).  One of the purposes of these rules is to protect consumers from what is frequently deemed unreasonable price discrimination: if a product or service is critically important, only available from a very small number of firms, and not subject to arbitrage, suppliers may be able to charge each consumer a price closer to that consumer’s willingness to pay, rather than a single market price.

Consumers of Internet services are not only individuals but also content providers, like ESPN, Facebook, Google, Netflix, and others, who rely on the Internet to reach their customers.  As a general-purpose network platform, the Internet connects consumers and content providers via myriad competing broadband provider networks, none of which can reach every single consumer (FCC 2010 ¶ 24).  The D.C. Circuit succinctly laid it out, writing (U.S. Telecomm. Ass’n v. FCC, supra at 9):

When an end user wishes to check last night’s baseball scores on ESPN.com, his computer sends a signal to his broadband provider, which in turn transmits it across the backbone to ESPN’s broadband provider, which transmits the signal to ESPN’s computer.  Having received the signal, ESPN’s computer breaks the scores into packets of information which travel back across ESPN’s broadband provider network to the backbone and then across the end user’s broadband provider network to the end user, who will then know that the Nats won 5 to 3.

Thus, when individuals or entities at the “edge” of the Internet wish to connect to others outside their host ISP network, that ISP facilitates the connection by using its own peering and transit arrangements with other ISPs to move the content (data) from the point of origination to the point of termination.

One of the key issues in the network neutrality debate was whether ISPs where traffic terminates should be allowed to offer these companies, for a fee, a way to prioritize their Internet traffic over the traffic of others when network capacity is insufficient to satisfy current demand.  Many worried that structuring Internet pricing in this way would enable price discrimination among content providers (Choi, Jeon, and Kim 2015) and might have several undesirable side effects.

First, welfare might be diminished if prioritization results in a diminished diversity of content (Economides and Hermalin 2012).  Second, because prioritization is only valuable when network demand is greater than its capacity, selling prioritization might create a perverse incentive to keep network capacity scarce (Choi and Kim 2010; Cheng, Bandyopadhyay, Guo 2011).  Third, ISPs who offer cable services or are otherwise vertically integrated into content might use both of these features to disadvantage their competitors in the content markets.  In light of the risk that ISPs pursue price discrimination to defend their vertically integrated content interests, network neutrality can be seen as an application of the essential facilities doctrine from antitrust law (Pitofsky, Patterson, and Hooks 2002) to the modern telecommunications industry.

In response, broadband ISPs have claimed that discriminatory treatment of certain traffic is necessary to mitigate congestion (FTC 2007; Lee and Wu 2009 broadly articulate this argument).[1]  ISPs also claim that regulation prohibiting discriminatory treatment of traffic would dissuade them from continued investment in reliable Internet service provision (e.g., FCC 2010 ¶ 40 and n. 128; FCC 2015 at ¶ 411 and n. 1198), and even the FCC noted that its 2015 net neutrality rules could reduce investment incentives (FCC 2015 at ¶ 410).  Nevertheless, the FCC partially justified the implementation of net neutrality by noting that it believed any potential investment-chilling effect of its regulation was likely to be short term and would dissipate over time as the marketplace internalized its decision.  Moreover, the FCC claimed that prior periods of robust ISP regulation coincided with upswings in broadband network investment (FCC 2015 at ¶ 414).

How the Rules Came About

The Commission’s Open Internet rules are far from the first time that the telecommunications industry has faced similar issues.  Half a century ago, AT&T refused to allow the use of cordless phones manufactured by third parties until it was forced to do so by a federal court (Carter v. AT&T, 250 F.Supp 188, N.D. Tex. 1966). The federal courts also needed to intervene before MCI was allowed to purchase local telephone service from AT&T to complete the last leg of long-distance telephone calls (MCI v. AT&T, 496 F.2d 214, 3rd Cir. 1974).  AT&T’s refusal to provide local telephone service to its long-distance competitor was deemed an abuse of its monopoly in local telephone service to protect its monopoly in long-distance telephone service, and featured prominently in the breakup of AT&T in 1984 (U.S. v. AT&T, 522 F.Supp. 131, D.D.C. 1982).  Subsequent vigorous competition in the long-distance market helped drive down prices significantly.

The rules developed for computer networks throughout the FCC’s decades-long Computer Inquiries were also designed to ensure that third-party companies had non-discriminatory access to necessary network facilities, and to facilitate competition in the emerging online services market (Cannon 2003).  For example, basic telecommunications services, like dedicated long-distance facilities, were required to be offered separately without being bundled with equipment or computer processing services.  These services were the building blocks upon which the commercial Internet was built.

The rules that came out of the Computer Inquiries were codified by Congress in the Telecommunications Act of 1996, by classifying the Computer Inquiry’s basic services as telecommunications services under the 1996 Act, the Computer Inquiry’s enhanced services as information services under the 1996 Act, and subjecting only the former to the non-discrimination requirements of Title II (FCC 2015 at ¶¶ 63, 311-313; Cannon 2003; Koning 2015).[2]  In particular, 47 U.S.C. Title II stipulates that it is unlawful for telecommunications carriers “to make or give any undue or unreasonable preference or advantage to any particular person, class of persons, or locality, or to subject any particular person, class of persons, or locality to any undue or unreasonable prejudice or disadvantage (47 U.S.C. § 202(a) 2016).”

Internet access specifically was first considered in terms of this classification in 1998.  Alaska Sen. Ted Stevens and others wanted dial-up ISPs to pay fees into the Universal Service Fund, which subsidized services for poor and rural areas.  The FCC ruled that ISPs were information services because they “alter the format of information through computer processing applications such as protocol conversion” (FCC 1998 ¶ 33).  However, to understand this classification, it is important to keep in mind that ISP services at this time were provided using dial-up modems over the PSTN.  In other words, in 1998 the Internet was an “overlay” network—one that uses a different network for the underlying connections between network points (see, e.g., Clark et al. 2006).  If consumers’ connections to their ISPs were made using dial-up telephone connections, then USF fees for the underlying telecommunications network were already being paid through consumers’ telephone bills.

In this context, applying USF fees to both ISPs and the underlying network would have effectively been double taxation.  Additionally, the service dial-up ISPs provided could reasonably be described as converting an analog telecommunications signal (from a modem) on one network (the PSTN) to a digital packet switched one (the Internet), which is precisely the sort of protocol conversion that had been treated as an enhanced service under the Computer Inquiry rules.  The same reasoning does not apply to broadband Internet access service, because it provides access to a digital packet switched network directly rather than through a separate underlying network service (Koning 2015).  However, the FCC continued to apply this classification to broadband ISPs, effectively removing broadband services from regulation under Title II.

Modern policy concerns over these issues reappeared in the early 2000s when the competitive dial-up ISP market was being replaced with the broadband duopoly of Cable and DSL providers.[3]  The concern was that if ISPs had market power, they might deviate from the end-to-end openness and design principles that characterized the early Internet (Lemley and Lessig 2001).  Early efforts focused on preserving competition in the ISP market by fighting to keep last-mile infrastructure available to third-party ISPs as had been the case in the dial-up era.  However, difficult experiences with implementing the unbundling regime of the 1996 Act, differing regulatory regimes for DSL and Cable (local loops for DSL had been subjected to the unbundling provisions of the 1996 Act, but Cable networks were not; an analysis of the consequences of doing this can be found in Hazlett and Caliskan 2008), and the existence of at least duopoly competition between these two incumbents discouraged the FCC from taking that path (FCC 2002, 2005b).  Third-party ISPs tried to argue that Cable modem connections were themselves a telecommunications service and therefore should be subject to the common-carrier provisions of Title II.  The FCC disagreed, pointing to its classification of Internet access as an information service under the 1996 Act.  This classification was ultimately upheld by the Supreme Court in NCTA v. Brand X (545 U.S. 967, 2005).

Unable to rely on the structural protection of a robustly competitive ISP market, the FCC shifted its focus towards the possibility of enforcing an Internet non-discrimination regime through regulation.  During this time period, the meaning and ramifications of “net neutrality,” a term coined in 2003 (Wu 2003), became the subject of vigorous academic debate.  Under the Computer Inquiries, non-discrimination rules had applied to the underlying network infrastructure, but it was also possible for non-discrimination rules to apply to Internet service itself, just as they had been applied to other packet-switched networks (X.25 and Frame Relay) in the past (Koning 2015).  However, there was extensive debate over the specific formulation and likely effects of any such rules, particularly among legal scholars (e.g., Cherry 2006, Sidak 2006, Sandvig 2007, Zittrain 2008, Lee and Wu 2009).  Although to that point there had been no rulemaking proceeding specifically addressing non-discrimination on the Internet, a number of major ISPs had agreed to forego such discrimination in exchange for FCC merger approval (FCC 2015 ¶ 65), and there was still a general expectation that ISPs would not engage in egregious blocking behavior.  In one early case, the Commission fined an ISP for blocking a competitor’s VoIP telephone service (FCC 2005a).  In 2008, the FCC also ruled against Comcast’s blocking of peer-to-peer applications (FCC 2008).  However, the Comcast order was later reversed by the D.C. Circuit (Comcast v. FCC, 600 F.3d 642, D.C. Cir. 2010).

In response to this legal challenge, the FCC initiated formal rulemaking proceedings to codify its network neutrality rules.  In 2010, the FCC released its initial Open Internet Order, which applied the FCC’s Section 706 authority under the Communications Act to address net neutrality directly (FCC 2010 ¶¶ 117-123).  Among other things, the 2010 Open Internet Order adopted the following rule (FCC 2010 ¶ 68):

A person engaged in the provision of fixed broadband Internet service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service.  Reasonable network management shall not constitute unreasonable discrimination.

However, these rules were struck down by the D.C. Circuit in January 2014 (Verizon v. FCC, 740 F.3d 623, D.C. Cir. 2014). The root of the problem was that the Commission had continued to classify broadband Internet access as an “information service” under the 1996 Act, where its authority was severely limited.  As the court wrote: “[w]e think it obvious that the Commission would violate the Communications Act were it to regulate broadband providers as common carriers. Given the Commission’s still-binding decision to classify broadband providers not as providers of ‘telecommunications services’ but instead as providers of ‘information services,’ [] such treatment would run afoul of section [47 U.S.C §]153(51): ‘A telecommunications carrier shall be treated as a common carrier under this [Act] only to the extent that it is engaged in providing telecommunications services (Verizon v. FCC, supra at 650).’”

The FCC went back to the drawing board and issued its most recent Open Internet Order in 2015.  This time, the Commission grounded its rules in a reclassification of Internet access service as a Title II telecommunications service.  Moreover, unlike in the 2010 Order, which only subjected mobile broadband providers to a transparency and no blocking requirement (FCC 2010 ¶¶ 97-103), the Commission applied the same rules to providers of fixed and mobile broadband in the 2015 Order (FCC 2015 ¶ 14).

In contrast to information services, telecommunications services are subject to the Title II common carrier non-discrimination provisions of the Act (FCC 2005b at ¶ 108 and n. 336).  As discussed above, these statutes expressly address the non-discrimination issues central to the network neutrality debate.  The reclassification permitted the Commission to exercise its Section 706 authority to implement the non-discrimination rules codified in Title II (FCC 2015 ¶¶ 306-309, 363, 365, 434).  On June 14, 2016, the D.C. Circuit upheld the FCC’s Open Internet rules as based on this and other statutes from Title II, 47 U.S.C. § 201 et seq.

The Future of Net Neutrality

Although the Commission’s long-evolving Open Internet rules appear to have found a solid legal grounding, it is important to understand that they are not without limits.  Crucially, for instance, the rules stipulate what ISPs can and cannot do at termination, whereas they do not restrict the terms of interconnection and peering agreements with ISP networks (FCC 2015, ¶ 30).  In contrast to what HBO’s John Oliver might conclude from the FCC’s recent court victory, the Order does not prevent ISPs such as Comcast from requiring payment for interconnection to their networks; it merely subjects interconnection to the general rule under Title II that the prices charged must be reasonable and non-discriminatory.  Rather than making any prospective regulations on interconnection itself, the FCC’s 2015 Order leaves those issues open for future consideration on a case-by-case basis (FCC 2015, ¶ 203).

Additionally, academics are far from a consensus regarding the welfare implications of net neutrality.  When handing down its judgment, the D.C. Circuit was careful to point out that its ruling was limited to a determination of whether the FCC had acted “within the limits of Congress’s delegation” (U.S. Telecomm. Ass’n v. FCC, supra note 1 at 23) of authority, and not on the economic merits or lack thereof of the FCC’s Internet regulations.[4]  In contrast to some of the aforementioned theoretical economics articles, a number of theoretical studies find that the type of quality-of-service tiering ruled out by the 2015 Order is likely to result in higher broadband investment and increased diversity of content (Krämer and Wiewiorra 2012; Bourreau, Kourandi, Valletti 2015), or, for that matter, that under certain circumstances it may not matter at all (Gans 2015; Gans and Katz 2016; Greenstein, Peitz, and Valletti 2016).  The empirical economic literature on net neutrality is at a very early stage and has thus far mostly focused on the consequences of other regulatory policies that might be likened to net neutrality regulation (Chang, Koski, and Majumdar 2003; Crandall, Ingraham, and Sidak 2004; Hausman and Sidak 2005; Hazlett and Caliskan 2008; Grajek and Röller 2012). To the extent that economists and other academicians reach some consensus on certain aspects of broadband regulation in the future, the FCC may be persuaded to update its rules.

Finally, the scope of the existing Open Internet rules remains under debate.  For instance, the public interest group Public Knowledge recently rekindled the debate over whether zero rating policies (alternatively referred to as sponsored data plans)—which exempt certain content from broadband caps imposed by certain providers—constitute a violation of Open Internet principles (see Public Knowledge 2016; Comcast 2016).  Although the Commission has not ruled such policies out, in the 2015 Order it left the door open to reassess them (FCC 2015, ¶¶ 151-153).

Signaling its concern about such policies, the FCC conditioned its recent approval of the merger between Charter Communications and Time Warner Cable on the parties’ consent not to impose data caps or usage-based pricing (FCC 2016 ¶ 457).  Academic research on this topic remains scarce.  Economides and Hermalin (2015) have suggested that in the presence of a sufficient number of content providers, ISPs able to set a binding cap will install more bandwidth than ones barred from doing so; to our knowledge, economists have not rigorously assessed zero rating, and the FCC continues its inquiry into these policies.


[1] It should be noted that notwithstanding these claims, congestion control is already built into the TCP/IP protocol.  Further, more advanced forms of congestion management have been developed for specific applications, such as buffering and adaptive quality for streaming video, that allow these applications to adapt to network congestion.  Whereas real-time network QoS guarantees could be useful for certain applications (e.g., live teleconferencing), these applications represent a small share of overall Internet traffic.

[2] The categorizations embodied by the Computer Inquiries decisions initially stemmed from an attempt to create a legal and regulatory distinction between “pure communications” and “pure data processing,” the former of which was initially provisioned by an incumbent regulated monopoly (primarily AT&T), and the latter of which was viewed as largely competitive and needing little regulation.  The culmination of these inquiries implicitly led to a layered model of regulation, dividing communication policy into (i) a physical network layer (to which common carrier regulation might apply), (ii) a logical network layer (to which open access issues might apply), (iii) an applications and services layer, and (iv) a content layer (Cannon 2003 pp. 194-5, Koning 2015 pp. 286-7).

[3] One 1999 study found a total of 6,006 ISPs in the U.S.  See, e.g., Greenstein and Downes (1999) at 195-212.

[4] In particular, the Court wrote, “Nor do we inquire whether ‘some or many economists would disapprove of the [agency’s] approach’ because ‘we do not sit as a panel of referees on a professional economics journal, but as a panel of generalist judges obliged to defer to a reasonable judgement by an agency acting pursuant to congressionally delegated authority.’”



Annie Waldherr Seminar on Food Safety in Online Issue Networks

Thursday, May 5th, 2016

Annie Waldherr presented a joint Media & Information and Quello Center seminar entitled “Discussing food safety in online issue networks: Empirical results and methodological prospects.”  Her talk highlighted that civil society actors concerned about food safety issues—GMOs, pesticide residues, and antibiotic-resistant superbugs—build coalitions that can eventually result in movement networks.  These connections can be empirically observed in online issue networks—sets of interlinked websites treating a common issue.

To assess the mobilization potential of actor coalitions, Annie and her colleagues study the extent to which actors link to each other and the extent to which actors talk about the same topics.  They combine hyperlink network analysis with probabilistic topic modeling to gain empirical insights into both the structural and the content dimensions of the issue networks.  Preliminary results for the US indicate a densely connected issue network spanning from central challenger actors to the websites of mass media and political actors. A number of issues, such as contaminated food and regulation, genetically modified food, and organic farming and sustainable agriculture, spread through major parts of the network. A smaller number of issues, such as the use of antibiotics or the pollution of drinking water, remain restricted to specific parts of the network.
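
For readers curious about what such a pipeline looks like in practice, here is a minimal, hypothetical sketch of the general approach—a hyperlink network plus a probabilistic topic model.  The sites, links, and documents below are placeholders of my own; this illustrates the category of method described above, not the authors’ actual code or data.

```python
# Hypothetical sketch: hyperlink network analysis + probabilistic topic modeling.
# Sites, links, and documents below are placeholders, not the study's data.
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hyperlink network: a directed edge means "site A links to site B".
links = [("ngo-a.example", "news-site.example"),
         ("ngo-b.example", "news-site.example"),
         ("ngo-a.example", "ngo-b.example")]
G = nx.DiGraph(links)
print("In-degree centrality:", nx.in_degree_centrality(G))

# Topic model: which sites discuss the same food-safety issues?
docs = ["gmo labeling and genetically modified food regulation",
        "antibiotic resistant superbugs in livestock farming",
        "pesticide residues and organic farming standards"]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print("Per-document topic shares:\n", lda.transform(X))
```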

Dr. Annie Waldherr is a Researcher in the Division of Communication Theory and Media Effects, Institute for Media and Communication Studies at Freie Universität Berlin.  Annie has used agent-based modeling (ABM) and network analysis to study computer-mediated communications processes. Her recent work using ABM was published in the Journal of Communication.

 

 



The Un#ballogetic World of Wireless Ads

Friday, April 1st, 2016

I belong to that rare breed of human that enjoys commercials.  As a social scientist with an interest in the impact of advertising on consumer behavior, I often find myself, possibly to the chagrin of my wife (though she has not complained), assessing commercials out loud.  Are they informative?  Are they persuasive, or do they simply try to draw attention to the advertised good?  Might they unintentionally lead to brand confusion?  Most importantly, are they funny?

Thus, having also spent some time among wireless regulators, I cannot help but comment on the recent spate of wireless attack ads perpetrated by three of the U.S. nationwide mobile wireless providers.  The initial culprit this time around was Verizon Wireless, which determined that balls were a good way to represent relative mobile wireless performance among the nationwide competitors.  Shortly thereafter, Sprint aired a commercial using bigger balls, while T-Mobile brought in Steve Harvey to demand that Verizon #Ballagize.

There are myriad takeaways from these commercials.  First, at least on the face of it, the nationwide mobile wireless providers appear to be fiercely competitive with one another.  It would be interesting to compare advertising-to-sales ratios for this industry with those of other U.S. industries, though at the time of writing this blog I did not have access to such data (Ad Age appears to be a convenient source).  Moreover, the content of the commercials suggests that although price continues to be an important factor (Sprint did not veer away from its “half-off” theme in its ball commercial), quality competition that allows competitors to differentiate their product (and, in doing so, justify higher prices) remains paramount.

Unfortunately, as a consumer, it is difficult for me to properly assess what these commercials say about wireless quality.  There are a number of points at play here.

  1. The relative comparisons are vague: When Sprint says that it delivers faster download speeds than the other nationwide providers, what does that mean?  When I zoom into the aforementioned Sprint commercial at the 10 second mark, the bottom of the screen shows, “Claim based on Sprint’s analysis of average LTE download speeds using Nielsen NMP data (Oct. thru Dec. 2015).  NMP data captures real consumer usage and performance for downloads of all file sizes greater than 150kb.  Actual speeds may vary by location and device capability.”  As a consumer who spends most of his time in East Lansing, MI, I am not particularly well informed by a nationwide average.  Nor do I know anything about the statistical validity of the data (though here I am willing to give Nielsen the benefit of the doubt).  Finally, when Sprint states that it delivers faster download speeds, I would like to know how much faster they are, in absolute terms, than the next fastest competitor.
  2. The small print is too small: Verizon took flak from its competitors for using outdated data in its commercial.  This is a valid claim.  Verizon’s small print (13 second mark in its commercial) states that RootMetrics data is based on the 1st half of 2015.  But unless I am actually analyzing these commercials as I am here, and viewing them side by side, it is difficult for me to make the comparison.
  3. The mobile wireless providers constantly question one another’s credibility, and this is likely to make me less willing to believe that they are indeed credible. Ricky Gervais explains this much better than I do: Ricky Gervais on speed, coverage, and network comparisons.

Alas, how is a consumer supposed to assess wireless providers?  An obvious source is Consumer Reports, but my sense, without paying for a subscription, is that its ratings rely largely on expert reviews rather than data analysis (someone correct me if I am wrong).  Another source, for those not in the habit of paying for information about rival firms, is the FCC.  The FCC’s Wireless Telecommunications Bureau publishes an “Annual Report and Analysis of Competitive Market Conditions with Respect to Mobile Wireless.”  The most recent, the Eighteenth Report, contains a lengthy section on industry metrics with a focus on coverage (see Section III) as well as a section on service quality (see Section VI.C).  The latter section focuses on nationwide average speed according to FCC Speed Test data as well as on data from the private sources Ookla, RootMetrics (yes, the one mentioned in those commercials), and CalSPEED (for California only).  If you are interested, be sure to check out the Appendix, which has a wealth of additional data.  For those who don’t want to read through a massive pdf file, there is also a set of Quick Facts containing some of the aforementioned data.

However, what I think is lacking is speed data at a granular level.  When analyzing transactions or assessing competition, the FCC does so at a level far more granular than the state, and rightly so: consumers do not generally make purchasing decisions across an entire state, let alone the nation as a whole.  Service quality where consumers spend the majority of their time is a major concern when choosing a wireless provider.  In a previous blog post I mentioned that the FCC releases granular fixed broadband data, but unfortunately, as far as I am aware, this is still not the case for wireless, particularly with regard to individual carrier speed data.

The FCC Speed Test App provides the FCC with such data.  The Android version, which I have on my phone, provides nifty statistics about download and upload speed as well as latency and packet loss, with the option to parse the data by mobile or WiFi.  My mobile-only data for the past month showed a download speed above 30 Mbps.  Go Verizon!  My WiFi average was more than double that.  Go SpartenNet!  Yet my observations do not allow me to compare data across providers in East Lansing, and my current contract happens to expire in a couple of weeks.  The problem is that in a place like East Lansing, and particularly in more rural areas of the United States, not enough people have downloaded the FCC Speed Test App, and I doubt that the FCC would be willing to report firm-level data at a level deemed not to have statistical validity.

For all I know, the entire East Lansing sample consists of my two or so daily automatic tests, which, aggregated over a quarter of a year, amount to fewer than 200 observations for Verizon Wireless.  Whether this is a sufficient sample depends on the dispersion in speed observations for a non-parametric measure such as the median speed, and on the assumed distribution for mean speeds.  I encourage people to try this app out.  The more people who download it, the more likely the FCC will have enough data to be comfortable reporting it at a level that makes it reliable as a decision-making tool.  Perhaps then the FCC will also redesign the app to report competitor speeds for the relevant geographic area.
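For readers who want a feel for the sample-size question, here is a rough sketch of how one might bootstrap a confidence interval for the median download speed from roughly 180 observations.  The speed data below are simulated and purely hypothetical, not my actual FCC Speed Test results.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical quarter of roughly twice-daily tests, in Mbps, skewed around ~30.
speeds = rng.lognormal(mean=np.log(30), sigma=0.4, size=180)

# Resample the data with replacement to gauge uncertainty in the median.
boot_medians = np.array([
    np.median(rng.choice(speeds, size=speeds.size, replace=True))
    for _ in range(5000)
])
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"Sample median: {np.median(speeds):.1f} Mbps")
print(f"95% bootstrap CI for the median: [{lo:.1f}, {hi:.1f}] Mbps")
# The wider this interval, the less comfortable one should be reporting
# carrier-level medians for a market as small as East Lansing.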



Understanding the Economics of Net Neutrality

Saturday, February 6th, 2016

Whether you are new to net neutrality and want to better understand the concept or a seasoned researcher who wants an update on open questions, I encourage you to read a recent working paper entitled “Net Neutrality: A Fast Lane to Understanding the Trade-Offs,” by Shane Greenstein, Martin Peitz, and Tommaso Valletti, a group of economists with a track record of researching and writing about Internet economics. Although the article is quite recent, I believe it presents a very good starting point for those interested in taking a deeper dive into both specific theoretical and general empirical issues surrounding net neutrality.

In this blog post, I attempt to outline the article for prospective readers and provide a few potentially useful links. Although I abstract completely from the math and the intuition behind the results, the article itself presents both in an extremely straightforward manner.

A good starting point for a discussion of net neutrality begins with an understanding of the uses of the Internet. As the authors see it, there are four relevant categories of use for the Internet:

  1. Static web browsing and e-mail (low bandwidth; can tolerate delay). Data flows are largely symmetric across users.
  2. Video downloading (high bandwidth; can tolerate delay).
  3. Voice-over IP, video-talk, video streaming and multi-player gaming (high bandwidth; quality declines with delay). Data flows are mostly unidirectional from content providers to users.
  4. Peer-to-peer applications (high bandwidth; can tolerate delay; can impose delay on others).

Although much economic research tends to abstract from the technical issues surrounding use of the Internet, many studies of net neutrality implicitly model the third category above, and the authors follow suit. This category makes up the bulk of modern Internet traffic: together, Netflix, YouTube, and Amazon Prime have of late consistently accounted for approximately 50 percent of all North American Internet traffic.

There are three common arrangements for moving data from content providers to users:

  1. Move data over “backbone lines” (e.g., Level3) and then to local broadband data carriers (e.g., ISPs) where the user is located. This may entail relying on an ISP to get to the backbone line.
  2. Move traffic to servers located geographically close to users: CDNs (e.g., Akamai).
  3. “Collocate” servers inside the network of an ISP. Payment for collocation was at the heart of negotiations between Netflix and Comcast that put net neutrality in the limelight (see also, John Oliver’s response to Tom Wheeler and my tangential reference inspired by Oliver and T-Mobile CEO John Legere).

The authors focus on two definitions of net neutrality: (1) a prohibition on payments from content providers to Internet service providers (referred to as one-sided pricing, whereby ISPs can only charge consumers) and (2) a prohibition on the prioritization of traffic, with or without compensation.  As Johannes Bauer and Jonathan Obar point out, these are not the only alternatives for governing the Internet (see Bauer and Obar 2014).  In a simple world with no competition and homogeneous users, the authors suggest that net neutrality affects neither profits nor consumer surplus. They then take a number of real-world considerations into account and suggest the potential ramifications of imposing net neutrality as follows.

  1. Users and content providers are heterogeneous. In this case, pressure on one side of the market (between ISPs and content providers) can lead to a corresponding change in prices on the other side of the market (between ISPs and users).
    • For instance, when content providers are identical but consumers are heterogeneous, allowing ISPs to charge termination fees to content providers can induce them to lower prices to consumers (a stylized numerical sketch of this pass-through appears after this list).
    • On the other hand, when content providers are heterogeneous but consumers are identical, allowing ISPs to charge termination fees can induce inefficient content provider exit.
  2. Some content providers get money from advertising (e.g., Facebook and Google), others charge users directly (e.g., Netflix).
    • The latter situation can complicate the analysis because ISP termination fees may directly impact downstream content prices.
    • The situation is further complicated if content providers can endogenize their mix of advertising and direct revenue (e.g., Pandora).
  3. Competition differs across markets, with multiple ISPs in some markets, and this is relevant for studying net neutrality (see Bourreau et al. 2015). I discuss data that could be used to gauge competition in broadband provision at the end of a prior blog post.
  4. Congestion, quality of service, and network and content investment can be impacted by regulation.
    • Long term trade-offs depend on the competitive setting (e.g., horizontal competition, vertical integration).
    • Peak (termination) pricing that might be forbidden under certain forms of net neutrality could lead to welfare-enhancing congestion reducing investment.
    • Prioritization can lead to both desirable and undesirable outcomes, depending on both ISP and content provider investment in congestion reduction (for instance, see Choi et al. 2014).
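Because the pass-through logic in the first bullet is easy to lose in prose, here is the stylized numerical sketch promised above.  It is my own textbook-style illustration under assumed parameters, not a model taken from the paper: a monopoly ISP faces linear consumer demand, pays a per-subscriber cost, and collects a hypothetical per-subscriber termination fee from content providers; its profit-maximizing consumer price falls as the fee rises.

# Linear demand Q = 1 - p; per-subscriber cost c (assumed 0.2); hypothetical
# per-subscriber termination fee t collected from content providers.
# Profit = (p + t - c) * (1 - p), so the profit-maximizing consumer price is
# p* = (1 + c - t) / 2, which declines as the termination fee rises.
def optimal_consumer_price(t: float, c: float = 0.2) -> float:
    return (1 + c - t) / 2

for t in (0.0, 0.1, 0.2, 0.3):
    print(f"termination fee t = {t:.1f} -> consumer price p* = {optimal_consumer_price(t):.2f}")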

The authors caution against broad policy prescriptions, and rightly so, given the present ambiguity surrounding the impacts of net neutrality.  Along the way, the authors inspire a number of open empirical questions that might help policy makers.

  1. How much would allowing or eliminating termination fees affect the price charged to subscribers?
  2. Which net neutrality regulations (when in place) have been binding in practice?
  3. How do net neutrality regulations impact investment in congestion reduction?
  4. Does competition alter the need for net neutrality regulation?

I suspect that the first two questions are fairly difficult to answer from an economics perspective because in large part they depend on significant insider knowledge about contracting among market participants. The Quello staff and I are presently contemplating how to rigorously answer questions (3) and (4). We are very interested in your feedback.



Price-Matching Advertisements and Consumer Search

Tuesday, January 12th, 2016

The only programs I watch on TV with any regularity are cooking competitions like Chopped and American Football, so I find it somewhat odd that I have seen the same Toys “R” Us commercial advertising the toy retailer’s price-matching guarantee as many times as I have.  Perhaps I have been watching too much Chopped Junior.  In the commercial, a creepy children’s toy informs Optimus Prime that if potential consumers find him for a lower price at a competing retailer, Toys “R” Us will match the lower listed price (see it here).  Optimus Prime is impressed not by the seemingly great deal, but by the existential realization that as a toy, he is not unique in our world.

When I first saw that commercial, my first thought was, “I have a publication coming out about the practice of price-matching guarantees, AWWWESOME!!!” (this sentence is funnier after watching the commercial).  My next thought was, “it would benefit consumers more if retailers competed by lowering prices than if they pretended to compete by using price-matching guarantees.”  I explain below.

As my co-author, Brady Vaughan, and I show in our paper, “Price-Match Announcements in a Consumer Search Duopoly,” forthcoming in the Southern Economic Journal, although advertisements emphasizing retailers’ price-matching guarantees appear to be pro-competitive, price-matching guarantees actually tend to lessen firms’ incentives to lower price.  The intuition behind our main result is as follows.  Suppose that consumers vary in their propensity to shop around for price.  We might, for instance, think of individuals as valuing their scarce time differently, leading some to feel that their incremental cost of uncovering an additional sample price (i.e., by visiting an additional store) is higher than that of others.  In such a setup, firms face two competing forces when setting their prices: they are inclined to lower prices to attract consumers who tend to shop around for the lowest price, and to raise them in an attempt to take advantage of consumers who find the activity of price comparison too time consuming to bother with (see Varian 1980, Stahl 1989 for the mathematical details behind this outcome).  Price dispersion ensues: firms run sales of different magnitudes in an effort to maximize profits.

Now consider what happens when firms offer price-matching guarantees.  Suppose that those consumers who find it worthwhile to shop around literally go store to store in search of price (this is not the only way to explain our results, but we find it to be one that aids intuition).  Some of these consumers will end their search at a store that does not list the lowest price.  Without a price-matching guarantee, if these consumers wish to procure the good at the lowest price observed, they will have to go back to the store with the best offer.  But if the last store they visit offers a price-matching guarantee, they can get the lowest price there instead.  Knowing this, firms realize that they won’t be able to win over as many price conscious consumers with deep discounts, so their incentive to run sales diminishes, leading to higher average prices.
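The demand-side piece of that intuition can be illustrated with a short Monte Carlo sketch.  This is an illustration only, not the model in our paper: shoppers visit both stores in random order, and store 0 is the one that happens to post the lower price.

import random

random.seed(0)
TRIALS = 100_000

def shopper_buys_from_discounter(price_matching: bool) -> bool:
    # The shopper visits both stores in random order; store 0 is the discounter.
    last_store = random.choice([0, 1])
    if price_matching:
        # With a guarantee, the shopper gets the low price wherever the search ends.
        return last_store == 0
    # Without a guarantee, the shopper returns to the low-price store.
    return True

for pm in (False, True):
    share = sum(shopper_buys_from_discounter(pm) for _ in range(TRIALS)) / TRIALS
    print(f"price matching = {pm}: discounter's share of shoppers = {share:.2f}")

With price matching, the discounter wins only about half of the shoppers instead of all of them, so the payoff to undercutting, and hence the incentive to run deep sales, shrinks.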

Sound like a roundabout explanation?  Brady and I are far from the first to suggest that price-matching guarantees can diminish competition, and some of the earliest explanations seem downright obvious: price-matching guarantees keep firms from lowering prices because rival firms immediately match price-cuts (see Hay 1982, Salop 1986, Doyle 1988).  However, I think that Brady and I have made a fairly cogent argument that takes into consideration how consumers behave and also accounts for the myriad advertisements firms undertake to inform consumers about their policies (here is one from Walmart, another from Toys “R” Us, and one from Staples).

This result raises the questions of (i) whether price-matching guarantees have always been found to be anti-competitive and (ii) if so, whether the anti-trust authorities can reasonably do anything to prevent them.  The answer to question (i) is no.  Although the bulk of the literature lends support to our findings, there are some notable explanations suggesting the contrary.  One that I find somewhat convincing when firms are differentiated is that a price-matching guarantee can signal that a firm generally has lower underlying costs (perhaps it has negotiated better deals with merchants or doesn’t spend as much on its service quality) and consequently sets lower prices (see Moorthy and Winter 2006, Moorthy and Zhang 2006 for the details).  That is a theoretical argument.  The empirical literature is somewhat mixed, but it typically addresses the question, “do firms that price-match have higher prices than those that do not,” instead of “are prices in general lower or higher when price-matching guarantees are used by some firms in the market?”  More empirical research is needed to settle the issue.

As for question (ii), the answer may be no as well.  The anti-trust laws, stemming from the Sherman and Clayton Acts, are generally focused on restraints of competition between firms, but price-matching guarantees are effectively standing offers by firms to contract with a consumer by referencing another firm’s price (see Edlin 1997).  Contracts that reference rivals are assuredly of concern to anti-trust practitioners (Scott Morton 2013), but when the contract imposes no restriction on any party other than a commitment by the offering firm to lower its price in response to publicly available information, it would seem (without undertaking a very rigorous empirical examination of the case at hand) rather difficult to make a case that competition is being restrained.

All of this comes with a major caveat.  Although I believe that price-matching guarantees have the potential to lead to higher prices in the market as a whole, if, as a consumer, you find yourself in a situation where you can use a price-matching guarantee to save money, by all means do!  Unless all consumers can coordinate with one another to bring about a better outcome for themselves, each should do what is in his or her individual best interest.  I recently visited my family for the holidays and we decided to buy a board game to pass the time.  My brother reminded me to put my research to work.  I saved 20 bucks!



Aleks Yankelevich’s First Blog Post (Chipotle, Market Definition, and Digital Inequality)

Wednesday, December 2nd, 2015

Growing up, my parents, brother, and I usually avoided restaurants. For my parents, this was initially out of necessity; as Soviet refugees, they did not have the financial means to eat out. However, even having achieved a modicum of success, my parents are not generally in the habit of frequenting restaurants, having, perhaps out of a lifetime of habit, developed a taste for home cooking. Restaurants are exclusively for special occasions.

Thus, having never eaten at a Chipotle Mexican Grill, they were sufficiently impressed by the restaurant’s façade to wish to eat there, but only when the grand occasion merits such an extravagant excursion. Their two sons were informed as such. Naturally, my brother and I (perhaps spoiled as we are) jumped at the chance to poke fun at our parents for placing Chipotle on a pedestal. This is, after all, a restaurant chain that is victim to some serious defecation humor, not Eleven Madison Park.

For a number of months, my parents were subjected to text messages and Facebook or Instagram posts with visuals of me or my brother outside various Chipotle restaurants, posing next to Chipotle ads, and in one instance, wearing a Chipotle t-shirt (I have no idea how that shirt found its way into my wardrobe). My parents responded, saying things like (and I could not make this up), “I wish someone would take us to that dream place.”

However, recently, my mother sent a group text directing the family to a news report about dozens of confirmed E.Coli cases related to Chipotle (even the FDA got involved) and asking for alternative dining suggestions. The text responses, in order, were as follows:

Me: California Tortilla
My Wife: Taco Bell
My Brother: Sushi
My Mother: Eating In (with picture of latest home cooked meal)
My Brother’s Girlfriend: Bacon

How does a reasonable individual interpret this chain of responses? As an economist with some regulatory and antitrust experience, I found the answer obvious. I sent the following group text (modified for concision): “Has anyone noticed that this text conversation has turned into the classic antitrust debate about appropriate market definition, with each subsequent family member suggesting a broader market?”

Surprisingly, no one else had noticed, but I was asked to unpack my statement a little bit (my mom sent a text that read: “English please.”).

The U.S. Department of Justice and the Federal Trade Commission’s Horizontal Merger Guidelines stipulate that market definition serves two roles in identifying potential competitive concerns. First, market definition helps specify the line of commerce (product) and section of the country (geography) in which a competitive concern arises. Second, market definition allows the Agencies to identify market participants and measure market shares and concentration.

As the Agencies point out, market definition focuses solely on demand substitution factors, i.e., on customers’ ability and willingness to substitute away from one product to another in response to a price increase or a corresponding non-price change (in the case of Chipotle, an E.Coli outbreak might qualify as a reduction in quality). Customers generally face a range of potential substitutes, some closer than others. Defining a market broadly to include relatively distant substitutes can lead to misleading market shares. As such, the Agencies may seek to define markets to be sufficiently narrow as to capture the relative competitive significance of substitute products. For some precision in this regard, I refer the reader to Section 4.1.1 of the Guidelines.
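To see how the breadth of the market definition feeds into measured concentration, here is a toy calculation of the Herfindahl-Hirschman Index (the sum of squared percentage market shares that the Agencies use as a screen). The shares below are entirely hypothetical and are only meant to echo the family text chain, not to describe any actual market.

def hhi(shares_pct):
    # HHI = sum of squared percentage shares; 10,000 means pure monopoly.
    return sum(s ** 2 for s in shares_pct)

# Narrow market: fast-casual Mexican dining in a hypothetical town.
narrow = {"Chipotle": 60, "California Tortilla": 40}
# Broad market: all dining out in the same hypothetical town.
broad = {"Chipotle": 15, "California Tortilla": 10, "Taco Bell": 15,
         "Sushi place": 20, "Everyone else": 40}

print("Narrow-market HHI:", hhi(narrow.values()))  # 5200: highly concentrated
print("Broad-market HHI:", hhi(broad.values()))    # 2550: looks much less concentrated

The same two firms look dominant or insignificant depending on which substitutes are ruled in, which is precisely why the Agencies worry about defining markets too broadly.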

As for the group texts above, the reader can now infer how market definition was broadened by each subsequent family member. To reiterate:

Me: California Tortilla (Mexican food in a similar quality dining establishment to Chipotle.)
My Wife: Taco Bell (Mexican . . . inspired . . . dining out, generally.)
My Brother: Sushi (Dining out, generally.)
My Mother: Eating In (Dining, generally.)
My Brother’s Girlfriend: Bacon (Eating.)

Why is market definition relevant to the Quello Center at Michigan State University? As the Center’s website suggests, the Center seeks to stimulate and inform debate on media, communication and information policy for our digital age. One area where market definition plays a role in this regard is the Quello Center’s broad interest in research about digital inequality.

Digital inequality represents a social inequality with regard to access to or use of the Internet, or more broadly, information and communication technologies (ICTs). Digital inequalities can arise as a result of individualistic factors (income, age and other demographics) or contextual ones (competition where a particular consumer is most likely to rely on ICTs). Market definition is most readily observed in the latter.

For instance, consider the market for fixed broadband Internet. An immediate question that arises is the appropriate geographic market definition. If we rule out individuals’ ability to procure fixed broadband Internet at local hotspots (e.g., libraries, coffee shops) from the relevant market definition, then the relevant geographic market appears to be the home. This is unfortunately a major burden for researchers attempting to assess the state of fixed broadband competition and its potential impact on digital inequality because most market-level data in use is at a much more aggregated level than the home. The problem is that when an aggregated market, say a zip code, contains multiple competitors, it is unclear how many of these competitors actually compete for the same home.

Thus far, most studies of fixed broadband competition have been hampered by the issue of geographic market definition. For instance, Xiao and Orazem (2011) extend Bresnahan and Reiss’s (1991, 1994) classic studies of entry and competition in the market for fixed broadband, albeit at the zip code level. Wallsten and Mallahan (2010) use tract level FCC Form 477 data to test the effects of competition on speeds, penetration, and prices. However, whereas there are approximately 42,000 zip codes and 73,000 census tracts in the United States, there are approximately 124 million households, which implies a fairly large amount of aggregation that can lead researchers to conclude that competition is stronger than it actually is.
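To make the aggregation concern concrete, consider a toy example in which a census tract appears to host three competing ISPs even though no household within it can choose among more than two. The coverage data below are hypothetical, not drawn from Form 477 or SBI filings.

# Hypothetical census blocks within one tract and the ISPs serving each block.
blocks = {
    "block_1": {"ISP_A"},
    "block_2": {"ISP_A", "ISP_B"},
    "block_3": {"ISP_C"},
    "block_4": {"ISP_B"},
}

tract_providers = set().union(*blocks.values())
print("Providers counted at the tract level:", len(tract_providers))      # 3
avg_per_block = sum(len(p) for p in blocks.values()) / len(blocks)
print("Average providers available per block:", avg_per_block)            # 1.25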

Another question that arises is whether fixed broadband is too narrow a product market and whether the appropriate market definition is simply broadband, which would include fixed as well as mobile broadband. Thus far, because of data limitations, most studies of wireline-wireless substitution have focused mainly on voice rather than on Internet use (e.g., Macher, Mayo, Ukhaneva, and Woroch, 2015; Thacker and Wilson, 2015) and so do not assess whether mobile has become a medium that can mitigate digital inequality. Prieger (2013) has made some headway into this issue by showing evidence that as late as 2010, mobile and fixed broadband were generally not complementary, and that mobile-only broadband subscription was slightly more prevalent in rural areas. However, because of data limitations, Prieger does not estimate a demand system to determine whether fixed and mobile broadband are substitutes or complements, as the voice substitution papers above do.

Luckily, NTIA’s State Broadband Initiative (SBI) and, more recently, the FCC have enhanced researchers’ ability to assess competition at a fairly granular level by providing fixed broadband coverage and speed data at the level of the census block. Similarly, new data on Internet usage from the U.S. Census should allow researchers to better tackle the wireline-wireless substitution issue as well. The FCC has also hopped on the speed test bandwagon by collaborating with SamKnows to measure both fixed and mobile broadband quality. In the former case, the FCC periodically releases the raw data, and I am optimistic that at some point mobile broadband quality data will be released as well (readers, please correct me if I am glossing over some already publicly available granular data on mobile broadband speed and other characteristics).

The Quello Center staff seeks to combine such data, along with other sources, to study broadband competition and its impact on digital inequality. We welcome your feedback and are presently on the lookout for potential collaborators interested in these issues.

 
