Something to consider before restructuring the FCC . . .

The Chief Economist of the Federal Communications Commission is a temporary position—whose term has lately run roughly a year—typically bestowed on economists with impressive credentials and experience related to media or telecommunications. Having worked at the FCC long enough to overlap with several chief economists, I noticed an interesting pattern. Many join the FCC full of hope—capable as they are—that they will reform the agency to better integrate “economic thinking” into regular policy decisions, but, to quote a former colleague, “leave the agency with their sense of humor intact.”

I have heard many a former FCC economist rail against the lack of economic thinking at the FCC, with some former chief economists going very much on the record to do so (for instance, see here and here). Others (not necessarily affiliated with the FCC) have gone as far as to point out that much of what the FCC does or attempts to do is duplicative of the competition policies of the Department of Justice and Federal Trade Commission. These latter points are not a secret. The FCC publicly says so in every major transaction that it approves.

For example, in a transaction that I have had the pleasure of writing about separately with one of the FCC’s former chief economists and a number of other colleagues, AT&T’s acquisition of former competitor Leap Wireless (see here and here), the FCC wrote (see ¶ 15):

Our competitive analysis, which forms an important part of the public interest evaluation, is informed by, but not limited to, traditional antitrust principles. The Commission and the Department of Justice (“DOJ”) each have independent authority to examine the competitive impacts of proposed communications mergers and transactions involving transfers of Commission licenses.

This standard language can be found in the “Standard of Review” section in any major FCC transaction order. The difference is that whereas the DOJ reviews telecom mergers pursuant to Section 7 of the Clayton Act, the FCC’s evaluation encompasses the “broad aims of the Communications Act.” From a competition analysis standpoint, a major difference is that if the DOJ wishes to stop a merger, “it must demonstrate to a court that the merger may substantially lessen competition or tend to create a monopoly.” In contrast, parties subject to FCC review have the burden of showing that the transaction, among other things, will enhance existing competition.

Such duplication and the alleged lack of economics at the FCC have led a number of individuals to suggest that the FCC should be restructured and some of its powers curtailed, particularly with respect to matters that are separately within the purview of the antitrust agencies. Most recently, several members of Donald Trump’s FCC transition team have written (read here) that Congress “should consider merging the FCC’s competition and consumer protection functions with those of the Federal Trade Commission, thus combining the FCC’s industry expertise and capabilities with the generic statutory authority of the FTC.”

I do not completely disagree—I would be remiss if I did not admit that the transition team makes a number of highly valid points in its comments on “Modernizing the Communications Act.” However, as Harold Feld, senior VP of Public Knowledge, recently pointed out, restructuring the FCC would be a relatively “radical” undertaking, and my main motivation in writing this post is to underscore Feld’s point by reminding readers of a recent court ruling.

In 2007—well before its acquisition of DIRECTV and its offer of unlimited data to customers who bundle its AT&T and DIRECTV services—AT&T offered mobile wireless customers unlimited data plans. AT&T later phased out these plans except for “grandfathered” customers—those who signed up for an unlimited plan while it was available and never switched to an alternative option. In October 2011, perhaps worried about the implications of unlimited data in a data-hungry world, AT&T began reducing speeds for grandfathered customers on legacy plans whose monthly data usage surpassed a certain threshold—a practice that the FTC refers to as data throttling.

The FTC filed a complaint against AT&T under Section 5 of the FTC Act, alleging that customers who had been throttled by AT&T experienced drastically reduced service, but were not adequately informed of AT&T’s throttling program. As part of its complaint, the FTC claimed that AT&T’s actions violated the FTC Act and sought a permanent injunction on throttling and other equitable relief as deemed necessary by the Court.

Now here is where things get interesting: AT&T moved to dismiss on the basis that it is exempt as a “common carrier.” That is, AT&T claimed that the Communications Act, not the FTC Act, governs its conduct. Moreover, AT&T’s position was that an entity with common carrier status cannot be regulated under the provision the FTC invoked in this case (§ 45(a)), even when it is providing services other than common carriage. This led one of my former colleagues to joke that if AT&T were to buy General Motors, it could use false advertising to sell cars and be exempt from FTC scrutiny.

The District Court for the Northern District of California happened to consider this matter after the FCC reclassified mobile data from a non-common carriage service to a common carriage service (in its Open Internet Order), but before the reclassification had gone into effect. The Court concluded that contrary to AT&T’s arguments, “the common carrier exception applies only where the entity has the status of common carrier and is actually engaging in common carrier activity.” Moreover, it denied AT&T’s motion because AT&T’s mobile data service was not regulated as common carrier activity by the FCC when the FTC suit was filed. However, in August 2016, this decision was reversed on appeal by the U.S. Court of Appeals for the Ninth Circuit (see here), which ruled that the common carrier exemption was “status based,” not “activity based,” as the lower court had determined.

Unfortunately, this decision leaves quite a regulatory void. To my knowledge, the FCC does not have a division of Common Carrier Consumer Protection (CCCP), and I doubt that any reasonable individual familiar with FCC practice would interpret the Open Internet Order as an FCC power grab meant to duplicate or supplant FTC consumer protection authority. Indeed, the FCC articulated quite the reverse position by recently filing an Amicus Curiae Brief in support of the FTC’s October 2016 Petition to the Ninth Circuit to have the case reheard by the full court.

So what’s my point? First, the agencies are not intentionally stepping on each other’s toes. By and large, the FCC understands the roles of the FTC and the DOJ, and vice versa. Were AT&T to acquire General Motors, it is highly probable that, given the current state of regulation, FCC employees would prefer that the FTC continue to oversee General Motors’ advertising practices. A related caveat applies to the FCC’s competition analysis. Although that analysis may be similar to the antitrust agencies’, it is motivated at least in part by the FCC’s unique mission to establish or maintain universal service, which can lead to different decisions in the same case (for instance, whereas the DOJ did not challenge AT&T’s acquisition of Leap Wireless, the FCC imposed a number of conditions to safeguard against loss of service).

Of course, one could argue that the confusion stemming from the above case might have been avoided had the FCC never had authority over common carriage in the first place. But anyone making that argument must be cognizant of the fact that although the FTC Act predates the Communications Act of 1934, prior to 1934 it was the Interstate Commerce Act, not the FTC Act, that laid out regulations for common carriers. In other words, legislative attempts to rewrite the Communications Act will necessitate changes to various other pieces of legislation to ensure that there are no voids in crucial protections for competition and consumers. Thus, to bolster Harold Feld’s points: those wishing to restructure the FCC need to be fully aware of what the FCC actually does and does not do, must take heed of the subtleties underlying the legislation that lays the groundwork for the various agencies, and should be mindful of the potential for interpretation and reinterpretation under the common law aspects of our legal system.



Undesirable Incentives in the Incentive Auction (w. Emily Schaal)

Following the 2016 U.S. Presidential election, in a letter to FCC Chairman Wheeler, Republicans urged the FCC to avoid “controversial items” during the presidential transition.  Shortly thereafter, the Commission largely scrubbed its Nov. 17 agenda, resulting in perhaps the shortest Open Commission Meeting in recent history.  Start at 9:30 here for some stern words from Chairman Wheeler in response.  Viewers are urged to pay particular attention to an important history and civics lesson from the Chairman in response to a question at 17:20 (though this should not be taken to indicate our agreement with everything that the Chairman says).

So what is the Commission to do prior to the transition?  According to the Senate Committee on Commerce, Science, and Transportation, the FCC can “focus its energies” on “many consensus and administrative matters.”  Presumably, this includes the FCC’s ongoing incentive auction, now set for its fourth round of bidding and subject to its own controversies: major items released in 2014 (auction rules and policies regarding mobile spectrum) drew dissenting votes from Republican Commissioners concerned about FCC bidding restrictions and “market manipulation,” along with a statement from a Democratic Commissioner arguing that the bidding restrictions did not go far enough.

The Incentive Auction

Initially described in the 2010 National Broadband Plan, the Incentive Auction is one of the ways in which the FCC is attempting to meet modern-day demands for video and broadband services.  The FCC describes the auction for a broad audience in some detail here and here.  In short, the auction was intended to repurpose up to 126 megahertz of TV band spectrum, primarily in the 600 MHz band, for “flexible use,” such as that relied on by mobile wireless providers to offer wireless broadband.  The auction consists of two separate but interdependent auctions—a reverse auction used to determine the price at which broadcasters will voluntarily relinquish their spectrum usage rights and a forward auction used to determine the price companies are willing to pay for the flexible use wireless licenses.

Repackaging

What makes this auction particularly complicated is a “repackaging” process that connects the reverse and forward auctions.  The current licenses held by broadcast television stations are not necessarily arranged into the contiguous blocks of spectrum needed to set up and expand regional or nationwide mobile wireless networks.  As such, repackaging involves reorganizing and reassigning channels to the broadcast television stations that remain operational post-auction in order to clear spectrum for flexible use.

The economics and technical complexities underlying this auction are well described in a recent working paper entitled “Ownership Concentration and Strategic Supply Reduction,” by Ulrich Doraszelski, Katja Seim, Michael Sinkinson, and Peichun Wang (henceforth Doraszelski et al. 2016) now making its way through major economic conferences (Searle, AEA).  As the authors point out with regard to the repackaging process (p. 6):

[It] is visually similar to defragmenting a hard drive on a personal computer.  However, it is far more complex because many pairs of TV stations cannot be located on adjacent channels, even across markets, without causing unacceptable levels of interference.  As a result, the repackaging process is global in nature in that it ties together all local media markets.

With regard to the reverse auction, Doraszelski et al. (2016) note that (p. 7):

[T]he auction uses a descending clock to determine the cost of acquiring a set of licenses that would allow the repacking process to meet the clearing target.  There are many different feasible sets of licenses that could be surrendered to meet a particular clearing target given the complex interference patterns between stations; the reverse auction is intended to identify the low-cost set . . . if any remaining license can no longer be repacked, the price it sees is “frozen” and it is provisionally winning, in that the FCC will accept its bid to surrender its license.

The idea is that the FCC should minimize the total cost of the licenses acquired in the reverse auction while ensuring that its nationwide clearing target is satisfied.  As Doraszelski et al. (2016) note, the incentive auction has various desirable properties.  Of particular note is strategy-proofness (see Milgrom and Segal 2015): when TV licenses are separately owned, it is (weakly) optimal for each broadcast license owner to truthfully reveal each station’s value as a going concern.

Strategic Supply Reduction

However, the authors’ main concern in their working paper is that, in spite of strategy-proofness, the auction rules do not prevent firms that own multiple broadcast TV licenses from engaging in strategic supply reduction.  As Doraszelski et al. (2016) show, this can lead to some fairly controversial consequences in the reverse auction that might compound any issues that could arise (e.g., decreased revenue) due to bidding restrictions in the forward auction.  Specifically, the authors find that multi-license holders can earn large rents from a supply reduction strategy in which they strategically withhold some of their licenses from the auction to drive up the closing price for the remaining licenses they own.

The incentive auction aside, strategic supply reduction is a fairly common phenomenon in standard economic models of competition.  Consider, for instance, a typical model of differentiated product competition (or the Cournot model of homogeneous product competition).  In each of these frameworks, firms’ best-response strategies lead them to set prices or quantities such that the quantity sold is below the “perfectly competitive” level and prices are above marginal cost—each firm individually finds it optimal to keep quantity low, making itself (and consequently its competitors) better off than under perfect competition.
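To see the logic in its simplest form, consider the textbook linear Cournot case (the notation here is ours, included purely for illustration).  With inverse demand $P(Q) = a - bQ$, constant marginal cost $c < a$, and $n$ symmetric firms, firm $i$ solves

$$\max_{q_i}\ \bigl(a - b(q_i + Q_{-i}) - c\bigr)\,q_i \quad\Longrightarrow\quad q_i^* = \frac{a-c}{(n+1)\,b},$$

so that in equilibrium

$$Q^* = \frac{n}{n+1}\cdot\frac{a-c}{b} \;<\; Q^{pc} = \frac{a-c}{b} \qquad\text{and}\qquad P^* = \frac{a+nc}{n+1} \;>\; c.$$

Each firm restricts output below the competitive level, and every seller in the market benefits from the resulting higher price.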

In the incentive auction, a multi-license holder that withdraws a license from the auction can similarly increase the price for the remaining broadcast TV licenses that it owns (as well as the prices received by other broadcast TV license owners).  However, in contrast to the aforementioned economic models, in which firms effectively reduce supply by underproducing, a firm engaging in strategic supply reduction is left with a TV station that it might otherwise have sold in the auction.  The firm comes out ahead if the gain from raising the closing price for its other stations exceeds the loss from continuing to own a TV station instead of selling it into the auction.


Example 1

Consider the following highly stylized example of strategic supply reduction.  There are two broadcasters, B1 and B2, in a market where the FCC needs to clear three stations (the reverse auction clearing target), and there are three license “qualities,” A, B, and C, for which the broadcasters have holdings and reservation prices as follows:

Quality    B1 Quantity    B2 Quantity    Reservation Price
A               1              2               $10
B               1              0                $6
C               2              1                $2

Suppose that the auctioneer does not distinguish between differences in licenses (a tremendous simplification relative to the real world).  Consider a reverse descending clock auction in which the auctioneer lowers its price in decrements of $2 starting at $10 (so $10 at time 1, $8 at time 2, and so on until the auction ends), and ceases to lower its price as soon as it realizes that any additional licensee dropouts would leave it unable to clear its desired number of stations (as would happen, for instance, once the quality A and B licenses have dropped out).  Suppose further that a broadcaster playing “truthfully” remains in the auction when indifferent between selling a license and dropping out (so that, for instance, quality A licenses are not withdrawn until the price falls from $10 to $8).

In a reverse descending clock auction in which broadcasters play “naïve” strategies, each broadcaster offers all of its licenses and drops some from consideration as the price decreases over time.  There is, however, a “strategic” alternative, in which B1 withholds a quality C license from the auction (B1 can do so either by overstating its reservation price for this license—say, claiming that it is $10—or by not entering it in the auction to begin with).  The two scenarios are summarized below:

                       Naive                          Strategic
Quality    B1 Offered    B2 Offered    B1 Offered    B1 Withheld    B2 Offered
A               1             2             1              —              2
B               1             —             1              —              —
C               2             1             1              1              1

Licenses auctioned: 7 under naïve bidding; 6 under strategic bidding.

The outcomes under naïve and strategic bidding are quite different.  Under naïve bidding, the auctioneer can continue to lower its price down to $4, at which point B1’s quality B license drops out and the auction is frozen (further dropouts would not permit the desired number of licenses to be cleared).  Each broadcaster earns $4 for each quality C license sold, with B1 earning a profit of 2×($4-$2)=$4.

Suppose instead that broadcaster B1 withholds one quality C license.  Then the auction stops at $8 (because only three licenses remain as soon as the quality A licenses are withdrawn).  Each broadcaster now earns $8 per license sold, with B1 earning a profit of ($8-$6)+($8-$2)=$8.  Moreover, B2 benefits from B1’s withholding, earning a profit of $6 instead of $2, as in the naïve bidding case.  The astute reader will notice that B1 could have done even better by withholding its quality B license instead!  This is an artifact of our assumption that the auctioneer treats all cleared licenses equally, which is not true in the actual incentive auction.  Finally, notice that even though B2 owns three licenses in this example, strategic withholding could not have helped it more than B1’s strategic withholding did unless it colluded with B1 (collusion here would entail B2 withholding its quality A licenses and B1 withholding both quality C licenses).  The simulation sketched below reproduces these numbers.
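For readers who prefer code, here is a minimal sketch of the stylized descending clock auction above.  The data structures, function names, and price schedule are our own illustrative assumptions: this is a toy model of Example 1, not the FCC’s actual auction software.

```python
from collections import defaultdict

def lic(owner, quality, reservation):
    # A license is just its owner, a quality label, and a reservation price.
    return {"owner": owner, "quality": quality, "reservation": reservation}

def run_reverse_auction(offered, target, start=10, step=2):
    """Descending clock with truthful exits: a license stays in at price p
    iff its reservation price is <= p (an indifferent license stays in).
    The clock freezes once lowering it further would leave fewer licenses
    than the clearing target; whatever remains wins at the frozen price."""
    price, active = start, list(offered)
    while len(active) > target:
        staying = [l for l in active if l["reservation"] <= price - step]
        if len(staying) < target:
            break  # further dropouts would make the target infeasible
        price, active = price - step, staying
    return price, active

def profits(price, winners):
    # Profit per owner: closing price minus reservation price per license sold.
    out = defaultdict(int)
    for l in winners:
        out[l["owner"]] += price - l["reservation"]
    return dict(out)

naive = [lic("B1", "A", 10), lic("B1", "B", 6),
         lic("B1", "C", 2), lic("B1", "C", 2),
         lic("B2", "A", 10), lic("B2", "A", 10), lic("B2", "C", 2)]
strategic = naive[:3] + naive[4:]  # B1 withholds one quality C license

for label, offers in [("naive", naive), ("strategic", strategic)]:
    p, winners = run_reverse_auction(offers, target=3)
    print(f"{label:9s} closing price: ${p}, profits: {profits(p, winners)}")
# naive     closing price: $4, profits: {'B1': 4, 'B2': 2}
# strategic closing price: $8, profits: {'B1': 8, 'B2': 6}
```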


Evidence of Strategic Supply Reduction

Doraszelski et al. (2016) explain that certain types of geographic markets and broadcast licenses are more suitable for strategic supply reduction.  They write:

First, ideal markets from a supply reduction perspective are [those] in which the FCC intends to acquire a positive number of broadcast licenses and that have relatively steep supply curves around the expected demand level.  This maximizes the impact of withholding a license from the auction on the closing price . . .  Second, suitable groups of licenses consist of sets of relatively low value licenses, some with higher broadcast volume to sell into the auction and some with lower broadcast volume to withhold.

What is perhaps disconcerting is that Doraszelski et al. (2016) have found evidence indicating that certain private equity firms spent millions acquiring TV licenses, primarily from failing or insolvent stations in distress, often covering the same market and in most instances on the peripheries of major markets along the U.S. coasts.  Consistent with their model, the authors found that many of the stations acquired had high broadcast volume and low valuations.

Upon performing a more in-depth analysis that simulates the reverse auction using ownership data on the universe of broadcast TV stations together with FCC data files related to repacking—the rather interesting details of which we encourage our audience to read—Doraszelski et al. (2016) conclude that strategic supply reduction is highly profitable.  In particular, under fairly conservative tractability assumptions, the authors found that simulated total payouts increased from $17 billion under naïve bidding to $20.7 billion with strategic supply reduction, with much of that gain occurring in markets in which private equity firms were active.


Example 2

Suppose that in our example above, the quality C stations held by broadcaster B1 were initially under the control of two separate entities, call these B3 and B4.  Then, if B1, B2, B3, and B4 all participated in the auction, strategic withholding on the part of B1 would no longer benefit it.  However, B1 could make itself better off by purchasing one, or potentially both, of the individual quality C licenses held by B3 and B4.  Consider the scenario in which B1 offers to buy B3’s license.  B3 is willing to sell at $4 or more, the amount it would earn under naïve bidding in the auction, and Bertrand-style competition between B3 and B4 keeps B1 from offering more than that.  With a single quality C license in hand, B1 can proceed to withhold either its B or its C quality license, raise the closing price to $8, and benefit both itself and the other broadcasters who make a sale in the auction.  The code below illustrates the point.
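Reusing the toy simulator from Example 1 (again, an illustrative sketch under our own simplifying assumptions), the consolidation story plays out as follows:

```python
# The two quality C licenses start under separate owners B3 and B4.
separate = [lic("B1", "A", 10), lic("B1", "B", 6),
            lic("B2", "A", 10), lic("B2", "A", 10), lic("B2", "C", 2),
            lic("B3", "C", 2), lic("B4", "C", 2)]
print(run_reverse_auction(separate, target=3)[0])      # closing price: 4

# After buying B3's license for $4, B1 withholds its own quality B license,
# leaving six licenses in the auction; the clock now freezes at $8.
consolidated = [lic("B1", "A", 10), lic("B1", "C", 2),
                lic("B2", "A", 10), lic("B2", "A", 10), lic("B2", "C", 2),
                lic("B4", "C", 2)]
print(run_reverse_auction(consolidated, target=3)[0])  # closing price: 8
# B1 nets $8 - $4 = $4 on the purchased license, and B2 and B4 gain as well.
```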


This result, whether or not the FCC realized it ex ante, is problematic for several reasons.  First, it raises the prospect that revenues raised in the forward auction will not be sufficient to meet payout requirements in the reverse auction.  As is, this has already occurred three times, with the FCC having lowered its clearing target from the initial 126 megahertz to 84 megahertz; though we caution that the FCC is currently not permitted to release data regarding the prices at which different broadcasters drop out of the auction, so we cannot verify whether final prices in earlier stages of the reverse auction were affected by strategic supply reduction.  Second, as in standard oligopoly models, strategic supply reduction is beneficial for sellers, but not for buyers or consumers.

Third, strategic supply reduction by private equity firms raises questions about the proper role and regulation of such firms.  The existence of such firms is generally justified by their role in providing liquidity to asset markets.  Strategic supply reduction seems to contradict this role, particularly if withheld stations are not put to good use—something Doraszelski et al. (2016) do not deliberate on.  Moreover, strategic supply reduction relies on what antitrust agencies often term unilateral effects—that is, supply reduction is individually optimal and does not rely on explicit or tacit collusion.  However, whereas the antitrust laws are intended to deal with cases of monopolization and collusion, it does not seem to us that they can easily mitigate strategic supply reduction.

Doraszelski et al. (2016) propose a partial remedy that does not rely on the antitrust laws: require multi-license owners to withdraw licenses in order of broadcast volume, from highest to lowest.  Their simulations show that this leads to a substantial reduction in payouts from strategic bidding (and a glance at Example 1 suggests that it would be effective in preventing strategic supply reduction there as well).  Although this suggestion has unfortunately come too late for the FCC’s Incentive Auction, we hope (as surely do the authors) that it will inform future auctions abroad that hope to learn from the U.S. experience.

This post was written in collaboration with Emily Schaal, a student at The College of William and Mary who is pursuing work in mathematics and economics.  Emily and I previously worked together at the Federal Communications Commission, where she provided invaluable assistance to a team of wireless economists.  



Digital Archive of James Quello’s Papers

The Quello Center is off and running in creating a digital archive of James H. Quello’s papers. Our archive team includes myself (having never created such an archive), Anne Marie Salter at the Center, Valeta Winsloff from Media and Information, who supports our design work and blogging, Scout Calvert with the MSU Library, who is orchestrating this project, and Lauren E. Lincoln-Chavez, who has hands-on experience in developing archives and special collections and is based in Detroit.

The collection contains over 1,000 papers, including speeches, statements, letters, and remarks by James Quello during his long tenure as an FCC Commissioner. To this we will be adding our collection of photographs and videos, as well as photos of his many awards and honors. This promises to be another of the many fun and rewarding projects of the Center.

The archive will be part of our WordPress blog and publicly accessible to anyone who might want a view of over two decades at the FCC through the words of one of its longest-serving and most colorful commissioners. I read one of his papers from 1974 in which he said he was willing to forgive journalists for getting things wrong at times (well before there was a term ‘fake news’) in order to protect freedom of the press, and I imagine he would say the same thing about the users of social media today.

Sifting through this collection is addictive as you follow the history of such issues as the fairness doctrine, cross-ownership rules, and more. I’ll keep you posted on our progress.

Left to right: Aleks, Bill, Valeta, Anne Marie, Lauren, Scout



An Abridged History of Open Internet Regulation and Its Policy Implications (w. Kendall Koning)

On June 14, 2016, the United States Court of Appeals for the District of Columbia Circuit (D.C. Circuit) upheld the FCC’s 2015 network neutrality regulations, soundly denying the myriad legal challenges brought by the telecommunications industry (U.S. Telecomm. Ass’n v. FCC 2016).  Thus, unless the Supreme Court says otherwise, Congress rewrites the rules, or INSERT TRENDING CELEBRITY NAME truly breaks the Internet, we can expect to receive our lawful content without concerns that it will be throttled or that the content provider paid a termination fee.  How did we get here?  As my colleague Kendall Koning, a telecommunications attorney and Ph.D. candidate in the Department of Media and Information at Michigan State, and I lay out in this blog post outlining the history of net neutrality regulation, it has been a long road.

A short Quello Center Working Paper covering substantially the contents of this blog post is available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2797366

The most recent D.C. Circuit case represented the third time that FCC network neutrality rules had been before that court, the first two having been struck down on largely procedural grounds.  The FCC’s 2015 Open Internet Order remedied these flaws by formally grounding the rules in Title II of the Communications Act (47 U.S.C. § 201 et seq. 2016) while simultaneously exercising a separate forbearance authority to exempt ISPs from some of the more restrictive rules left over from the PSTN era.

The U.S. Telecommunications Association (USTelecom), a trade group representing the nation’s broadband service providers, along with various other petitioners, had challenged the FCC’s Order on a number of grounds.  USTelecom’s central challenge echoed earlier arguments that ISPs don’t really offer telecommunications, i.e., the ability to communicate with third parties without ISPs altering form and content, but rather an integrated information service, in which ISP servers exercise control over the form and content of information transmitted over the network.  As explained below, this perspective was a historical artifact from the era of America Online and dial-up ISPs, but it had been used successfully at the start of the broadband era.  In a stinging rejection of ISP arguments, the D.C. Circuit not only found that the FCC’s reclassification of Internet access as telecommunications was reasonable and within the bounds of the FCC’s discretionary authority, but also offered a strong endorsement of this perspective (U.S. Telecomm. Ass’n v. FCC supra at 25-26):

That consumers focus on transmission to the exclusion of add-on applications is hardly controversial. Even the most limited examination of contemporary broadband usage reveals that consumers rely on the service primarily to access third-party content . . . Indeed, given the tremendous impact third-party internet content has had on our society, it would be hard to deny its dominance in the broadband experience. Over the past two decades, this content has transformed nearly every aspect of our lives, from profound actions like choosing a leader, building a career, and falling in love to more quotidian ones like hailing a cab and watching a movie. The same assuredly cannot be said for broadband providers’ own add-on applications.

The Rules, What are They Good For?

At present, the FCC states that its current Open Internet rules “protect and maintain open, uninhibited access to legal online content without broadband Internet access providers being allowed to block, impair, or establish fast/slow lanes to lawful content.”  In particular, the present rules impose the following three conditions, each of which is subject to a reasonable network management stipulation (FCC 2015 ¶¶ 15-18):

  1. No Blocking: A person engaged in the provision of broadband Internet access service . . . shall not block lawful content, applications, services, or non-harmful devices . . . .
  2. No Throttling: A person engaged in the provision of broadband Internet access service . . . shall not impair or degrade lawful Internet traffic on the basis of Internet content . . . .
  3. No Paid Prioritization: A person engaged in the provision of broadband Internet access service . . . shall not engage in paid prioritization . . . [—the] management of a broadband provider’s network to directly or indirectly favor some traffic over other traffic . . . either (a) in exchange for consideration (monetary or otherwise) from a third party, or (b) to benefit an affiliated entity.

These rules are, to a degree, a modern version of common carrier non-discrimination rules adapted for the Internet.  47 U.S.C. § 201(b) requires that “all charges, practices, classifications, and regulations for . . . communication service shall be just and reasonable.”  Whereas in the United States these statutes date back to the Communications Act of 1934, common carrier rules more generally have quite a long history, with precursors going as far back as the Roman Empire (Noam 1994).  One of the purposes of these rules is to protect consumers from what is frequently deemed unreasonable price discrimination: if a product or service is critically important, only available from a very small number of firms, and not subject to arbitrage, suppliers may be able to charge each consumer a price closer to that consumer’s willingness to pay, rather than a single market price.

Consumers of Internet services are not only individuals but also content providers, like ESPN, Facebook, Google, Netflix, and others, who rely on the Internet to reach their customers.  As a general-purpose network platform, the Internet connects consumers and content providers via myriad competing broadband provider networks, none of which can reach every single consumer (FCC 2010 ¶ 24).  The D.C. Circuit succinctly laid it out, writing (U.S. Telecomm. Ass’n v. FCC, supra at 9):

When an end user wishes to check last night’s baseball scores on ESPN.com, his computer sends a signal to his broadband provider, which in turn transmits it across the backbone to ESPN’s broadband provider, which transmits the signal to ESPN’s computer.  Having received the signal, ESPN’s computer breaks the scores into packets of information which travel back across ESPN’s broadband provider network to the backbone and then across the end user’s broadband provider network to the end user, who will then know that the Nats won 5 to 3.

Thus, when individuals or entities at the “edge” of the Internet wish to connect to others outside their host ISP network, that ISP facilitates the connection by using its own peering and transit arrangements with other ISPs to move the content (data) from the point of origination to the point of termination.

One of the key issues in the network neutrality debate was whether or not ISPs where traffic terminates should be allowed to offer these companies, for a fee, a way to prioritize their Internet traffic over the traffic of others when network capacity was insufficient to satisfy current demand.  Many worried that structuring Internet pricing in this way would enable price discrimination among content providers (Choi, Jeon, and Kim 2015) and might have several undesirable side effects.

First, welfare might be diminished if prioritization results in a diminished diversity of content (Economides and Hermalin 2012).  Second, because prioritization is only valuable when network demand is greater than its capacity, selling prioritization might create a perverse incentive to keep network capacity scarce (Choi and Kim 2010; Cheng, Bandyopadhyay, Guo 2011).  Third, ISPs who offer cable services or are otherwise vertically integrated into content might use both of these features to disadvantage their competitors in the content markets.  In light of the risk that ISPs pursue price discrimination to defend their vertically integrated content interests, network neutrality can be seen as an application of the essential facilities doctrine from antitrust law (Pitofsky, Patterson, and Hooks 2002) to the modern telecommunications industry.

In response, broadband ISPs have claimed that discriminatory treatment of certain traffic is necessary to mitigate congestion (FTC 2007; Lee and Wu 2009 broadly articulate this argument).[1]  ISPs also claim that regulation prohibiting discriminatory treatment of traffic would dissuade them from continued investment in reliable Internet service provision (e.g., FCC 2010 ¶ 40 and n. 128; FCC 2015 ¶ 411 and n. 1198), and even the FCC noted that its 2015 net neutrality rules could reduce investment incentives (FCC 2015 ¶ 410).  Nevertheless, the FCC partially justified the implementation of net neutrality by noting its belief that any potential investment-chilling effect of its regulation was likely to be short term and would dissipate over time as the marketplace internalized its decision.  Moreover, the FCC claimed that prior periods of robust ISP regulation coincided with upswings in broadband network investment (FCC 2015 ¶ 414).

How the Rules Came About

The Commission’s Open Internet rules are far from the first time that the telecommunications industry has faced similar issues.  Half a century ago, AT&T refused to allow the use of telephone equipment manufactured by third parties until it was forced to do so by a federal court (Carter v. AT&T, 250 F.Supp 188, N.D. Tex. 1966). The federal courts also needed to intervene before MCI was allowed to purchase local telephone service from AT&T to complete the last leg of long-distance telephone calls (MCI v. AT&T, 496 F.2d 214, 3rd Cir. 1974).  AT&T’s refusal to provide local telephone service to its long-distance competitor was deemed an abuse of its monopoly in local telephone service to protect its monopoly in long-distance telephone service, and featured prominently in the breakup of AT&T in 1984 (U.S. v. AT&T, 522 F.Supp. 131, D.D.C. 1982).  Subsequent vigorous competition in the long-distance market helped drive down prices significantly.

The rules developed for computer networks throughout the FCC’s decades-long Computer Inquiries were also designed to ensure that third-party companies had non-discriminatory access to necessary network facilities and to facilitate competition in the emerging online services market (Cannon 2003).  For example, basic telecommunications services, like dedicated long-distance facilities, were required to be offered separately, without being bundled with equipment or computer processing services.  These services were the building blocks upon which the commercial Internet was built.

The rules that came out of the Computer Inquiries were codified by Congress in the Telecommunications Act of 1996, by classifying the Computer Inquiry’s basic services as telecommunications services under the 1996 Act, the Computer Inquiry’s enhanced services as information services under the 1996 Act, and subjecting only the former to the non-discrimination requirements of Title II (FCC 2015 at ¶¶ 63, 311-313; Cannon 2003; Koning 2015).[2]  In particular, 47 U.S.C. Title II stipulates that it is unlawful for telecommunications carriers “to make or give any undue or unreasonable preference or advantage to any particular person, class of persons, or locality, or to subject any particular person, class of persons, or locality to any undue or unreasonable prejudice or disadvantage (47 U.S.C. § 202(a) 2016).”

Internet access specifically was first considered in terms of this classification in 1998.  Alaska Sen. Ted Stevens and others wanted dial-up ISPs to pay fees into the Universal Service Fund, which subsidized services for poor and rural areas.  The FCC ruled that ISPs were information services because they “alter the format of information through computer processing applications such as protocol conversion” (FCC 1998 ¶ 33).  However, to understand this classification, it is important to keep in mind that ISP services at the time were provided using dial-up modems over the PSTN.  In other words, in 1998 the Internet was an “overlay” network—one that uses a different network as the underlying connection between network points (see, e.g., Clark et al. 2006).  If consumers’ connections to their ISPs were made using dial-up telephone connections, then USF fees for the underlying telecommunications network were already being paid through consumers’ telephone bills.

In this context, applying USF fees to both ISPs and the underlying network would have effectively been double taxation.  Additionally, the service dial-up ISPs provided could reasonably be described as converting an analog telecommunications signal (from a modem) on one network (the PSTN) to a digital packet switched one (the Internet), which is precisely the sort of protocol conversion that had been treated as an enhanced service under the Computer Inquiry rules.  The same reasoning does not apply to broadband Internet access service, because it provides access to a digital packet switched network directly rather than through a separate underlying network service (Koning 2015).  However, the FCC continued to apply this classification to broadband ISPs, effectively removing broadband services from regulation under Title II.

Modern policy concerns over these issues reappeared in the early 2000s when the competitive dial-up ISP market was being replaced with the broadband duopoly of Cable and DSL providers.[3]  The concern was that if ISPs had market power, they might deviate from the end-to-end openness and design principles that characterized the early Internet (Lemley and Lessig 2001).  Early efforts focused on preserving competition in the ISP market by fighting to keep last-mile infrastructure available to third-party ISPs as had been the case in the dial-up era.  However, difficult experiences with implementing the unbundling regime of the 1996 Act, differing regulatory regimes for DSL and Cable (local loops for DSL had been subjected to the unbundling provisions of the 1996 Act, but Cable networks were not; an analysis of the consequences of doing this can be found in Hazlett and Caliskan 2008), and the existence of at least duopoly competition between these two incumbents discouraged the FCC from taking that path (FCC 2002, 2005b).  Third-party ISPs tried to argue that Cable modem connections were themselves a telecommunications service and therefore should be subject to the common-carrier provisions of Title II.  The FCC disagreed, pointing to its classification of Internet access as an information service under the 1996 Act.  This classification was ultimately upheld by the Supreme Court in NCTA v. Brand X (545 U.S. 967, 2005).

Unable to rely on the structural protection of a robustly competitive ISP market, the FCC shifted its focus towards the possibility of enforcing an Internet non-discrimination regime through regulation.  During this time period, the meaning and ramifications of “net neutrality,” a term coined in 2003 (Wu 2003), became the subject of vigorous academic debate.  Under the computer inquiries, non-discrimination rules had applied to the underlying network infrastructure, but it was also possible for non-discrimination rules to apply to Internet service itself, just as they had been to other packet-switched networks (X.25 and Frame Relay) in the past (Koning 2015).  However, there was extensive debate over the specific formulation and likely effects of any such rules, particularly among legal scholars (e.g., Cherry 2006, Sidak 2006, Sandvig 2007, Zittrain 2008, Lee and Wu 2009).  Although to that point, there had been no rulemaking proceeding specifically addressing non-discrimination on the Internet, a number of major ISPs had agreed to forego such discrimination in exchange for FCC merger approval (FCC 2015 ¶ 65) and there was still a general expectation that ISPs would not engage in egregious blocking behavior.  In one early case, the Commission fined an ISP for blocking a competitor’s VoIP telephone service (FCC 2005a).  In 2008, the FCC also ruled against Comcast’s blocking of peer-to-peer applications (FCC 2008).  However, the Comcast order was later reversed by the D.C. Circuit (Comcast v. FCC, 600 F.3d 642, D.C. Cir. 2010).

In response to this legal challenge, the FCC initiated formal rulemaking proceedings to codify its network neutrality rules.  In 2010, the FCC released its initial Open Internet Order, which invoked the FCC’s Section 706 authority under the Telecommunications Act of 1996 to address net neutrality directly (FCC 2010 ¶¶ 117-123).  Among other things, the 2010 Open Internet Order adopted the following rule (FCC 2010 ¶ 68):

A person engaged in the provision of fixed broadband Internet service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service.  Reasonable network management shall not constitute unreasonable discrimination.

However, these rules were struck down by the D.C. Circuit in January 2014 (Verizon v. FCC, 740 F.3d 623, D.C. Cir. 2014). The root of the problem was that the Commission had continued to classify broadband Internet access as an “information service” under the 1996 Act, where its authority was severely limited.  As the court wrote: “[w]e think it obvious that the Commission would violate the Communications Act were it to regulate broadband providers as common carriers. Given the Commission’s still-binding decision to classify broadband providers not as providers of ‘telecommunications services’ but instead as providers of ‘information services,’ [] such treatment would run afoul of section [47 U.S.C §]153(51): ‘A telecommunications carrier shall be treated as a common carrier under this [Act] only to the extent that it is engaged in providing telecommunications services (Verizon v. FCC, supra at 650).’”

The FCC went back to the drawing board and issued its most recent Open Internet Order in 2015.  This time, the Commission grounded its rules in a reclassification of Internet access service as a Title II telecommunications service.  Moreover, unlike the 2010 Order, which subjected mobile broadband providers only to transparency and no-blocking requirements (FCC 2010 ¶¶ 97-103), the 2015 Order applied the same rules to providers of fixed and mobile broadband (FCC 2015 ¶ 14).

In contrast to information services, telecommunications services are subject to the Title II common carrier non-discrimination provisions of the Act (FCC 2005b ¶ 108 and n. 336).  As discussed above, these statutes expressly address the non-discrimination issues central to the network neutrality debate.  The reclassification permitted the Commission to exercise its Section 706 authority to implement the non-discrimination rules codified in Title II (FCC 2015 ¶¶ 306-309, 363, 365, 434).  On June 14, 2016, the D.C. Circuit upheld the FCC’s Open Internet rules as based on this and other statutes from Title II, 47 U.S.C. § 201 et seq.

The Future of Net Neutrality

Although the Commission’s long-evolving Open Internet rules appear to have found a solid legal grounding, it is important to understand that they are not without limits.  Crucially, the rules stipulate what ISPs can and cannot do at termination; they do not restrict the terms of interconnection and peering agreements between ISP networks (FCC 2015, ¶ 30).  In contrast to what HBO’s John Oliver might conclude from the FCC’s recent court victory, the Order does not prevent ISPs such as Comcast from requiring payment for interconnection to their networks; it merely subjects interconnection to the general rule under Title II that the prices charged must be reasonable and non-discriminatory.  Rather than making any prospective regulations on interconnection itself, the FCC’s 2015 Order leaves those issues open for future consideration on a case-by-case basis (FCC 2015, ¶ 203).

Additionally, academics are far from a consensus regarding the welfare implications of net neutrality.  In handing down its judgment, the D.C. Circuit was careful to point out that its ruling turned on whether the FCC had acted “within the limits of Congress’s delegation” of authority (U.S. Telecomm. Ass’n v. FCC, supra note 1 at 23), and not on the economic merits, or lack thereof, of the FCC’s Internet regulations.[4]  In contrast to some of the aforementioned theoretical economics articles, a number of theoretical studies find that the type of quality-of-service tiering ruled out by the 2015 Order is likely to result in higher broadband investment and increased diversity of content (Krämer and Wiewiorra 2012; Bourreau, Kourandi, Valletti 2015), or, for that matter, that under certain circumstances it may not matter at all (Gans 2015; Gans and Katz 2016; Greenstein, Peitz, and Valletti 2016).  The empirical economic literature on net neutrality is at a very early stage and has thus far mostly focused on the consequences of other regulatory policies that might be likened to net neutrality regulation (Chang, Koski, and Majumdar 2003; Crandall, Ingraham, and Sidak 2004; Hausman and Sidak 2005; Hazlett and Caliskan 2008; Grajec and Röller 2012). To the extent that economists and other academicians reach some consensus on certain aspects of broadband regulation in the future, the FCC may be persuaded to update its rules.

Finally, the scope of the existing Open Internet rules remains under debate.  For instance, the public interest group Public Knowledge recently rekindled the debate over whether zero rating (alternatively referred to as sponsored data) policies, which exempt certain content from the data caps imposed by certain providers, constitute a violation of Open Internet principles (see Public Knowledge 2016; Comcast 2016).  Although the Commission has not ruled such policies out, in the 2015 Order it left the door open to reassess them (FCC 2015, ¶¶ 151-153).

Signaling its concern about such policies, the FCC conditioned its recent approval of the merger between Charter Communications and Time Warner Cable on the parties’ consent not to impose data caps or usage-based pricing (FCC 2016 ¶ 457).  Academic research on this topic remains scarce.  Economides and Hermalin (2015) have suggested that, in the presence of a sufficient number of content providers, ISPs able to set a binding cap will install more bandwidth than ones barred from doing so; to our knowledge, economists have not rigorously assessed zero rating, and the FCC continues its inquiry into these policies.


[1] It should be noted that notwithstanding these claims, congestion control is already built into the TCP/IP protocol.  Further, more advanced forms of congestion management have been developed for specific applications, such as buffering and adaptive quality for streaming video, that allow these applications to adapt to network congestion.  Whereas real-time network QoS guarantees could be useful for certain applications (e.g., live teleconferencing), these applications represent a small share of overall Internet traffic.

[2] The categorizations embodied by the Computer Inquiries decisions initially stemmed from an attempt to create a legal and regulatory distinction between “pure communications” and “pure data processing,” the former of which was initially provisioned by an incumbent regulated monopoly (primarily AT&T), and the latter of which was viewed as largely competitive and needing little regulation.  The culmination of these inquiries implicitly led to a layered model of regulation, dividing communication policy into (i) a physical network layer (to which common carrier regulation might apply), (ii) a logical network layer (to which open access issues might apply), (iii) an applications and services layer, and (iv) a content layer (Cannon 2003 pp. 194-5, Koning 2015 pp. 286-7).

[3] One 1999 study found a total of 6,006 ISPs in the U.S.  See, e.g., Greenstein and Downes (1999) at 195-212.

[4] In particular, the Court wrote, “Nor do we inquire whether ‘some or many economists would disapprove of the [agency’s] approach’ because ‘we do not sit as a panel of referees on a professional economics journal, but as a panel of generalist judges obliged to defer to a reasonable judgement by an agency acting pursuant to congressionally delegated authority.’”



Work Begun on James H. Quello Archive

We have just begun work on a digital archive of James H. Quello’s speeches, articles, and statements, dating from his Senate Confirmation Hearing on 21 January 1974. My thanks to the MSU Library for helping the Quello Center with this project; starting today, we will be searching for funding to support this archiving effort.

James H. Quello

The core material will be Commissioner Quello’s written speeches, articles, and statements, but we will be adding biographical materials, photos, and video material. This should be a valuable source for anyone seriously interested in the history of regulation and policy in the communication sector in the USA.

Our thanks to the MSU Library and to Sarah Roberts with the MSU Archives & Historical Collections.



The Un#ballogetic World of Wireless Ads

I belong to that rare breed of human that enjoys commercials.  As a social scientist with an interest in the impact of advertising on consumer behavior, I often find myself, possibly to the chagrin of my wife (though she has not complained), assessing commercials out loud.  Are they informative?  Are they persuasive, or do they simply attempt to draw attention to the good in the ad?  Might they unintentionally lead to brand confusion?  Most importantly, are they funny?

Thus, having also spent some time among wireless regulators, I cannot help but comment on the recent spate of wireless attack ads perpetuated by three of the U.S. nationwide mobile wireless providers.  The initial culprit this time around was Verizon Wireless, which determined that balls were a good way to represent relative mobile wireless performance among the nationwide competitors.  Shortly thereafter, Sprint aired a commercial using bigger balls, while T-Mobile brought in Steve Harvey to demand that Verizon #Ballagize.

There are myriad takeaways from these commercials.  First, at least on the face of it, the nationwide mobile wireless providers appear to be fiercely competitive with one another.  It would be interesting to compare advertising-to-sales ratios for this industry with those of other U.S. industries, though at the time of writing I did not have access to such data (Ad Age appears to be a convenient source).  Moreover, the content of the commercials suggests that although price continues to be an important factor (Sprint did not veer away from its “half-off” theme in its ball commercial), quality competition that allows competitors to differentiate their product (and in doing so, justify higher prices) remains paramount.

Unfortunately, as a consumer, it is difficult for me to properly assess what these commercials say about wireless quality.  There are a number of points at play here.

  1. The relative comparisons are vague: When Sprint says that it delivers faster download speeds than the other nationwide providers, what does that mean?  When I zoom into the aforementioned Sprint commercial at the 10-second mark, the bottom of the screen shows, “Claim based on Sprint’s analysis of average LTE download speeds using Nielsen NMP data (Oct. thru Dec. 2015).  NMP data captures real consumer usage and performance for downloads of all file sizes greater than 150kb.  Actual speeds may vary by location and device capability.”  As a consumer who spends most of his time in East Lansing, MI, I am not particularly well informed by a nationwide average.  Moreover, I know nothing about the statistical validity of the data (though here I am willing to give Nielsen the benefit of the doubt).  Finally, when Sprint states that it delivers faster download speeds, I would be interested to know how much faster they are (in absolute terms) relative to the next fastest competitor.
  2. The small print is too small: Verizon took flak from its competitors for using outdated data in its commercial.  This is a valid claim.  Verizon’s small print (at the 13-second mark of its commercial) states that the RootMetrics data are based on the first half of 2015.  But unless I am actually analyzing these commercials, as I am here, and viewing them side by side, it is difficult for me to make the comparison.
  3. The mobile wireless providers constantly question one another’s credibility, and this is likely to make me less willing to believe that they are indeed credible. Ricky Gervais explains this much better than I do: Ricky Gervais on speed, coverage, and network comparisons.

Alas, how is a consumer supposed to assess wireless providers?  An obvious source is Consumer Reports, but my sense, without paying for a subscription, is that its ratings largely depend on expert reviews and not necessarily on data analysis (someone correct me if I am wrong).  Another source, if one is not in the habit of paying for information about rival firms, is the FCC.  The FCC’s Wireless Telecommunications Bureau publishes an “Annual Report and Analysis of Competitive Market Conditions with Respect to Mobile Wireless.”  The most recent, Eighteenth Report, contains a lengthy section on industry metrics with a focus on coverage (see Section III) as well as a section on service quality (see Section VI.C).  The latter section focuses on nationwide average speed according to FCC Speed Test data as well as on data from private sources Ookla, RootMetrics (yes, the one mentioned in those commercials), and CalSPEED (for California only).  If you are interested, be sure to check out the Appendix, which has a wealth of additional data.  For those who don’t want to read through a massive pdf file, there is also a set of Quick Facts containing some of the aforementioned data.

However, what I think is lacking is speed data at a granular level.  When analyzing transactions or assessing competition, the FCC does so at a level that is far more granular than the state, and rightly so: consumers do not generally make purchasing decisions across an entire state, let alone the nation as a whole.  Service in the places where consumers spend the majority of their time is a major concern when deciding on wireless quality.  In a previous blog post I mentioned that the FCC releases granular fixed broadband data, but unfortunately, as far as I am aware, this is still not the case for wireless, particularly with regard to individual carrier speed data.

The FCC Speed Test App provides the FCC with such data.  The Android version, which I have on my phone, provides nifty statistics about download and upload speeds as well as latency and packet loss, with the option to parse the data by mobile or WiFi.  My mobile-only data for the past month showed a download speed above 30 Mbps.  Go Verizon!  My WiFi average was more than double that.  Go SpartanNet!  Yet my observations do not allow me to compare data across providers in East Lansing, and my current contract happens to expire in a couple of weeks.  The problem is that in a place like East Lansing, and particularly in more rural areas of the United States, not enough people have downloaded the FCC Speed Test App, and I doubt that the FCC would be willing to report firm-level data at a level deemed not to have statistical validity.

For all I know, the entire East Lansing sample consists of my two or so automatic tests per day, which, aggregated over a quarter, amount to fewer than 200 observations for Verizon Wireless. Whether that is a statistically meaningful sample depends on the dispersion of the speed observations (for a non-parametric measure such as the median) and on the assumed underlying distribution (for the mean). I encourage people to try the app out. The more people who download it, the more likely the FCC is to have enough data to be comfortable reporting it at a level that makes it reliable as a decision-making tool. Perhaps then the FCC will also redesign the app to report competitor speeds for the relevant geographic area.
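To make the sample-size concern concrete, here is a minimal sketch in Python of the kind of check I have in mind. Everything in it is hypothetical: the lognormal draws are a right-skewed stand-in for roughly 200 speed observations, not actual FCC Speed Test data. Bootstrapping a confidence interval around the median shows how much uncertainty a sample of this size leaves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: ~200 download-speed observations (Mbps) for one
# carrier in one city. A right-skewed lognormal is a stand-in for real
# FCC Speed Test data, which I do not have at this granularity.
speeds = rng.lognormal(mean=np.log(30), sigma=0.5, size=200)

# Bootstrap a 95% confidence interval for the median download speed.
n_boot = 10_000
medians = np.array([
    np.median(rng.choice(speeds, size=speeds.size, replace=True))
    for _ in range(n_boot)
])
lo, hi = np.percentile(medians, [2.5, 97.5])

print(f"sample median: {np.median(speeds):.1f} Mbps")
print(f"95% bootstrap CI for the median: [{lo:.1f}, {hi:.1f}] Mbps")
```

If the resulting interval is wide, reporting a single carrier-level median for a market like East Lansing would be hard to defend, which is presumably why the FCC hesitates.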



What is ‘Special Access’ and Why is It So Important?

by Aleks Yankelevich

Dr. Aleks Yankelevich gave a one-hour Quello Center brown-bag presentation entitled “Regulating the Intranet: What is Special Access and Why is it Important?” (yes, Intranet, not Internet) on January 26th, 2016. His talk clarified the concept of special access, explained how it is regulated by the Federal Communications Commission, and ended with some ideas on research that might focus on this relatively under-researched area.

Aleksandr Yankelevich – Regulating the Internet – What is Special Access And Why Is It Important from Quello Center on Vimeo.

Special access lines are dedicated high-capacity connections used by businesses and institutions to transmit their voice and data traffic. These connections are used by businesses to facilitate intranet communication, by wireless providers to funnel cell phone traffic between towers, and by banks to connect to their ATMs. When the costs of special access services increase, these costs are passed on by businesses to consumers. Because many parts of the United States face limited competition in the provision of special access, these services are highly regulated. In the seminar, Aleks discussed the significance of the special access market, explained why regulation of the intranet is relatively under-studied, and briefly reviewed a number of FCC proceedings related to special access, as well as his ongoing and potential research on the topic.



Delivering Pizza Without Offering Pizza Delivery

by

Having appreciated my colleague Aleks Yankelevich’s creative use of a “food” metaphor to explain an important aspect of economic analysis, I thought it fitting, on the day of oral arguments in the legal challenge to the FCC’s Open Internet Order, to consider another effective use of such a metaphor: Supreme Court Justice Antonin Scalia’s dissent in the Brand X case. Whereas the majority opinion in that case deferred to an earlier FCC ruling that Internet access was an “information” rather than a “telecommunication” service, Scalia–joined by two liberal justices, Ruth Bader Ginsburg and David Souter–argued that the majority’s view was akin to accepting a claim by the owner of a pizzeria that it delivered pizza, but didn’t “offer pizza delivery service.”

Below are some excerpts from Scalia’s dissent that I find most significant in terms of how the DC Circuit (and perhaps later, the Supreme Court) should and will rule in the latest challenge to the FCC’s Open Internet Order, which is the first in which the Commission has treated Internet access as a Title II “telecommunication” service rather than an “information” service.

The first sentence of the FCC ruling under review reads as follows: “Cable modem service provides high-speed access to the Internet, as well as many applications or functions that can be used with that access, over cable system facilities”…Does this mean that cable companies “offer” high-speed access to the Internet?  Surprisingly not, if the Commission and the Court are to be believed.

It happens that cable-modem service is popular precisely because of the high-speed access it provides, and that, once connected with the Internet, cable-modem subscribers often use Internet applications and functions from providers other than the cable company. Nevertheless, for purposes of classifying what the cable company does, the Commission (with the Court’s approval) puts all the emphasis on the rest of the package (the additional “applications or functions”). It does so by claiming that the cable company does not “offe[r]” its customers high-speed Internet access because it offers that access only in conjunction with particular applications and functions, rather than “separate[ly],” as a “stand-alone offering…”

There are instances in which it is ridiculous to deny that one part of a joint offering is being offered merely because it is not offered on a “stand-alone” basis…If, for example, I call up a pizzeria and ask whether they offer delivery, both common sense and common “usage”…would prevent them from answering: “No, we do not offer delivery–but if you order a pizza from us, we’ll bake it for you and then bring it to your house.” The logical response to this would be something on the order of, “so, you do offer delivery.” But our pizza-man may continue to deny the obvious and explain, paraphrasing the FCC and the Court: “No, even though we bring the pizza to your house, we are not actually “offering” you delivery, because the delivery that we provide to our end users is ‘part and parcel’ of our pizzeria-pizza-at-home service and is ‘integral to its other capabilities.’”… Any reasonable customer would conclude at that point that his interlocutor was either crazy or following some too-clever-by-half legal advice.

In short, for the inputs of a finished service to qualify as the objects of an “offer” (as that term is reasonably understood), it is perhaps a sufficient, but surely not a necessary, condition that the seller offer separately “each discrete input that is necessary to providing . . . a finished service…”

Shifting his analogy from pizza to puppies, Justice Scalia adds:

The pet store may have a policy of selling puppies only with leashes, but any customer will say that it does offer puppies because a leashed puppy is still a puppy, even though it is not offered on a “stand-alone” basis.

Despite the Court’s mighty labors to prove otherwise, …the telecommunications component of cable-modem service retains such ample independent identity that it must be regarded as being on offer–especially when seen from the perspective of the consumer or the end user, which the Court purports to find determinative.

Since the majority opinion in Brand X was based primarily on the doctrine of “administrative deference” derived from the 1984 Supreme Court case Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., one would hope and expect that the DC Circuit Court judges hearing today’s oral arguments would remember what Justice Thomas wrote in that majority opinion: “If a statute is ambiguous, and if the implementing agency’s construction is reasonable, Chevron requires a federal court to accept the agency’s construction of the statute, even if the agency’s reading differs from what the court believes is the best statutory interpretation.”

When the majority’s Chevron-based deference is coupled with Justice Scalia’s simple but clear and commonsensical analogies to pizza and puppies, it’s hard for me to imagine a strong legal basis for the Circuit Court (or the Supreme Court, if it ends up ruling on the case) to rule against the FCC’s Title II-based Open Internet Order. Perhaps today’s oral arguments will provide some additional clues as to whether I’m right or wrong about that. (Update: downloadable audio of the oral arguments is here (wireline) and here (wireless, First Amendment, Forbearance). h/t @haroldfeld, whose initial response to today’s arguments is here.)



Aleks Yankelevich’s First Blog Post (Chipotle, Market Definition, and Digital Inequality)

by

Growing up, my parents, brother, and I usually avoided restaurants. For my parents, this was initially out of necessity; as Soviet refugees, they did not have the financial means to eat out. However, even having achieved a modicum of success, my parents are not generally in the habit of frequenting restaurants, having, perhaps out of a lifetime of habit, developed a taste for home cooking. Restaurants are exclusively for special occasions.

Thus, having never eaten at a Chipotle Mexican Grill, they were sufficiently impressed by the restaurant’s façade to wish to eat there, but only when a grand occasion merited such an extravagant excursion. Their two sons were informed as such. Naturally, my brother and I (perhaps spoiled as we are) jumped at the chance to poke fun at our parents for placing Chipotle on a pedestal. This is, after all, a restaurant chain that is the victim of some serious defecation humor, not Eleven Madison Park.

For a number of months, my parents were subjected to text messages and Facebook or Instagram posts with visuals of me or my brother outside various Chipotle restaurants, posing next to Chipotle ads, and in one instance, wearing a Chipotle t-shirt (I have no idea how that shirt found its way into my wardrobe). My parents responded, saying things like (and I could not make this up), “I wish someone would take us to that dream place.”

However, recently, my mother sent a group text directing the family to a news report about dozens of confirmed E. coli cases related to Chipotle (even the FDA got involved) and asking for alternative dining suggestions. The text responses, in order, were as follows:

Me: California Tortilla
My Wife: Taco Bell
My Brother: Sushi
My Mother: Eating In (with picture of latest home cooked meal)
My Brother’s Girlfriend: Bacon

How does a reasonable individual interpret this chain of responses? As an economist with some regulatory and antitrust experience, I found the answer obvious. I sent the following group text (modified for concision): “Has anyone noticed that this text conversation has turned into the classic antitrust debate about appropriate market definition, with each subsequent family member suggesting a broader market?”

Surprisingly, no one else had noticed, but I was asked to unpack my statement a little bit (my mom sent a text that read: “English please.”).

The U.S. Department of Justice and the Federal Trade Commission’s Horizontal Merger Guidelines stipulate that market definition serves two roles in identifying potential competitive concerns. First, market definition helps specify the line of commerce (product) and section of the country (geography) in which a competitive concern arises. Second, market definition allows the Agencies to identify market participants and measure market shares and concentration.

As the Agencies point out, market definition focuses solely on demand substitution factors, i.e., on customers’ ability and willingness to substitute away from one product to another in response to a price increase or a corresponding non-price change (in the case of Chipotle, an E. coli outbreak might qualify as a reduction in quality). Customers generally face a range of potential substitutes, some closer than others. Defining a market broadly to include relatively distant substitutes can lead to misleading market shares. As such, the Agencies may seek to define markets narrowly enough to capture the relative competitive significance of substitute products. For more precision in this regard, I refer the reader to Section 4.1.1 of the Guidelines.
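To see how a broad market can mislead, consider a toy calculation in Python. The firms and revenue figures below are invented for illustration (and this is a simplification of, not a substitute for, the Agencies’ methodology): the Herfindahl-Hirschman Index (HHI) computed over a narrow market of close substitutes looks very different from the same index computed over a market padded with distant ones.

```python
# Toy illustration of why market definition matters for measured
# concentration. All firms and revenue figures are hypothetical.

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares, in percent."""
    return sum(s ** 2 for s in shares)

def shares(revenues):
    total = sum(revenues.values())
    return [100 * r / total for r in revenues.values()]

# Narrow market: fast-casual Mexican restaurants in one town.
narrow = {"Chipotle": 60, "California Tortilla": 40}

# Broad market: all restaurants in town. The two close substitutes are
# diluted by distant ones, and measured concentration falls sharply.
broad = {**narrow, "Taco Bell": 50, "Sushi Bar": 70, "Pizzeria": 80}

print(f"HHI, narrow market: {hhi(shares(narrow)):,.0f}")  # ~5,200: highly concentrated
print(f"HHI, broad market:  {hhi(shares(broad)):,.0f}")   # ~2,100: only moderately concentrated
```

Under the Guidelines’ thresholds, the narrow market is highly concentrated while the broad one is only moderately so, which is precisely why sweeping distant substitutes into the market can mask a competitive concern.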

As for the group texts above, the reader can now infer how market definition was broadened by each subsequent family member. To reiterate:

Me: California Tortilla (Mexican food in a similar quality dining establishment to Chipotle.)
My Wife: Taco Bell (Mexican . . . inspired . . . dining out, generally.)
My Brother: Sushi (Dining out, generally.)
My Mother: Eating In (Dining, generally.)
My Brother’s Girlfriend: Bacon (Eating.)

Why is market definition relevant to the Quello Center at Michigan State University? As the Center’s website suggests, the Center seeks to stimulate and inform debate on media, communication and information policy for our digital age. One area where market definition plays a role in this regard is within the Quello Center’s broad interest in research about digital inequality.

Digital inequality represents a social inequality with regard to access to or use of the Internet, or more broadly, information and communication technologies (ICTs). Digital inequalities can arise as a result of individualistic factors (income, age and other demographics) or contextual ones (competition where a particular consumer is most likely to rely on ICTs). Market definition is most readily observed in the latter.

For instance, consider the market for fixed broadband Internet. An immediate question that arises is the appropriate geographic market definition. If we rule out individuals’ ability to procure fixed broadband Internet at local hotspots (e.g., libraries, coffee shops) from the relevant market definition, then the relevant geographic market appears to be the home. This is unfortunately a major burden for researchers attempting to assess the state of fixed broadband competition and its potential impact on digital inequality, because most market-level data in use are far more aggregated than the home. The problem is that when an aggregated market, say a zip code, contains multiple competitors, it is unclear how many of these competitors actually compete for the same home.

Thus far, most studies of fixed broadband competition have been hampered by the issue of geographic market definition. For instance, Xiao and Orazem (2011) extend Bresnahan and Reiss’s (1991, 1994) classic studies of entry and competition to the market for fixed broadband, albeit at the zip code level. Wallsten and Mallahan (2010) use tract-level FCC Form 477 data to test the effects of competition on speeds, penetration, and prices. However, whereas there are approximately 42,000 zip codes and 73,000 census tracts in the United States, there are approximately 124 million households, which implies a fairly large amount of aggregation that can lead researchers to conclude that competition is stronger than it actually is.
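A small simulation makes the aggregation problem concrete. The geography and coverage patterns below are entirely hypothetical: two ISPs split every census tract between them, so no household can actually choose between the two, yet tract-level data count two competitors everywhere.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical geography: 100 census tracts with 1,000 households each.
n_tracts, hh_per_tract = 100, 1_000

# Two ISPs split each tract: ISP A covers a random fraction of households
# and ISP B covers exactly the rest, so their footprints never overlap
# within a tract.
frac_a = rng.uniform(0.3, 0.7, size=n_tracts)
covered_a = rng.random((n_tracts, hh_per_tract)) < frac_a[:, None]
covered_b = ~covered_a

# Tract-level view: a provider "serves" a tract if it covers any household there.
competitors_per_tract = covered_a.any(axis=1).astype(int) + covered_b.any(axis=1).astype(int)

# Household-level view: how many ISPs can each household actually buy from?
competitors_per_hh = covered_a.astype(int) + covered_b.astype(int)

print(f"mean competitors per tract:     {competitors_per_tract.mean():.2f}")  # 2.00
print(f"mean competitors per household: {competitors_per_hh.mean():.2f}")     # 1.00
```

Tract-level data would report a duopoly everywhere even though every household in this toy world faces a monopolist.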

Another question that arises is whether fixed broadband is too narrow a product market and whether the appropriate market definition is simply broadband, which would include fixed as well as mobile broadband. Thus far, because of data limitations, most studies of wireline-wireless substitution have focused mainly on voice rather than on Internet use (e.g., Macher, Mayo, Ukhaneva, and Woroch, 2015; Thacker and Wilson, 2015) and so do not assess whether mobile has become a medium that can mitigate digital inequality. Prieger (2013) has made some headway on this issue by showing evidence that as late as 2010, mobile and fixed broadband were generally not complementary, and that mobile-only broadband subscription was slightly more prevalent in rural areas. However, because of data limitations, Prieger does not estimate a demand system to determine whether fixed and mobile broadband are substitutes or complements, as the voice substitution papers above do.

Luckily, NTIA’s State Broadband Initiative (SBI) and, more recently, the FCC have enhanced researchers’ ability to assess competition at a fairly granular level by providing fixed broadband coverage and speed data at the level of the census block. Similarly, new data on Internet usage from the U.S. Census should allow researchers to better tackle the wireline-wireless substitution issue as well. The FCC has also hopped on the speed test bandwagon by collaborating with SamKnows to measure both fixed and mobile broadband quality. In the former case, the FCC periodically releases the raw data, and I am optimistic that at some point, mobile broadband quality data will be released as well (readers, please correct me if I am glossing over some already publicly available granular data on mobile broadband speed and other characteristics).

The Quello Center staff seeks to combine such data, along with other sources, to study broadband competition and its impact on digital inequality. We welcome your feedback and are presently on the lookout for potential collaborators interested in these issues.




Testing the Limits of Net Neutrality Rules

by

In the past few weeks we’ve seen both a wireless and wireline carrier launch new “zero rating” video streaming services that test the boundaries of the FCC’s net neutrality policy: T-Mobile’s Binge On and Comcast’s Stream TV.

According to published reports, FCC chairman Tom Wheeler has praised Binge On as “highly innovative” and “highly competitive,” while also noting that the Commission will continue to monitor the service under its “general conduct” rule. According to Ars Technica, an FCC spokesperson declined comment on Comcast’s Stream TV, which does not count against the company’s data caps.

The FCC’s reported response to the two services is not too surprising. While they share some similarities, they are also different in key respects. Among the differences that come initially to mind: Binge On zero-rates video from a range of third-party providers on T-Mobile’s mobile network, whereas Stream TV is Comcast’s own video service, delivered to its broadband customers, exempted from the company’s data caps, and characterized by Comcast as a cable service rather than an Internet one.

In a blog post, Public Knowledge senior staff attorney John Bergmayer argues that Stream TV is subject to and violates the FCC’s Open Internet order as well as the consent decree Comcast agreed to as part of its NBC Universal acquisition. I’d recommend reading the post in full for anyone wanting a preview of legal arguments to be made in more formal channels by Public Knowledge and others likely to challenge Stream TV before the FCC and the courts.

According to Bergmayer:

Comcast maintains that “Stream TV is a cable streaming service delivered over Comcast’s cable system, not over the Internet.” But Stream TV is being delivered to Comcast broadband customers over their broadband connections, and is accessible on Internet-connected devices (that is, not just through a cable box). From a user’s perspective, it is identical to any other Internet service. Comcast’s argument is that if it offers its service only to Comcast customers and locates the servers that provide Stream TV on its own property, connected to its own network, that this exempts it from the Open Internet rules. This is an absurd position that would permit Comcast to discriminate in favor of any of its own services, and flies in the face of the Open Internet rules…

[I]t does not appear that Stream TV is an IP service like facilities-based VoIP. It is not available standalone; you need a broadband Internet access connection to access it. It is thus readily distinguishable from services like facilities-based VoIP. If Comcast offered Stream TV separately from broadband there would be a better case that it was more like traditional cable TV or a specialized service–but it does not.

Bergmayer also reviews some relevant language from Comcast’s NBC Universal consent decree, including:

“Comcast shall not offer a Specialized Service that is substantially or entirely comprised of Defendants’ affiliated content,” and…”[if] Comcast offers any Specialized Service that makes content from one or more third parties available … [it] shall allow any other comparable Person to be included in a similar Specialized Service on a nondiscriminatory basis.”

In an article in Multichannel News, Jeff Baumgartner previews what may be a core element of Comcast’s legal argument defending Stream TV:

“Stream TV is an in-home IP-cable service delivered over Comcast’s cable network, not over the public Internet,” Comcast said in a statement issued Thursday, the same day it launched Stream TV to its second market – Chicago. “IP-cable is not an ‘over-the-top’ streaming video service. Stream enables customers to enjoy their cable TV service on mobile devices in the home delivered over the managed cable network, without the need for additional equipment, like a traditional set-top-box.”

The FCC does address the idea in rules released in December 2014, which explain that “an entity that delivers cable services via IP is a cable operator to the extent it delivers those services as managed video services over its own facilities and within its footprint…IP-based service provided by a cable operator over its facilities and within its footprint must be regulated as a cable service not only because it is compelled by the statutory definitions; it is also good policy, as it ensures that cable operators will continue to be subject to the pro-competitive, consumer-focused regulations that apply to cable even if they provide their services via IP.”

In his blog post Bergmayer cites language from the Commission’s Open Internet order related to the provision of “Non-Broadband Internet Access Service Data Services.” In my view, a key sentence in that section of the order is “The Commission expressly reserves the authority to take action if a service is, in fact, providing the functional equivalent of broadband Internet access service or is being used to evade the open Internet rules.” On the face of it, I’m inclined to agree with Bergmayer that this appears to be the case with Comcast’s Stream TV, when coupled with its data cap policies and the reality of Comcast’s multifaceted market power in both distribution and content.

And, more generally, I think Bergmayer is correct that “Comcast’s program raises a host of issues under the Open Internet rules, the consent decree, and—most importantly—general principles of competition.”

The fact that Comcast is testing the bounds of the Commission’s new rules is not surprising, given its focus on maximizing shareholder value within a set of interrelated and dynamic markets in which it enjoys substantial market power, but faces significant challenges to its traditional revenue streams and growth prospects. In fact, I view it as helpful that Comcast is moving fairly quickly in this direction, since it is likely to force the FCC and the Courts to revisit yet again the question of how to craft communication policy that serves the public interest in the Internet age.

And, with the Commission having classified broadband access as a Title II service, my hope is that any court review of FCC action responding to Stream TV or similar services will consider substantive policy arguments (e.g., related to competition and the public interest) rather than simply ruling that the Commission cannot impose net neutrality rules absent a Title II classification of broadband access (which seemed to be the central message of the most recent DC Circuit Court ruling).

We are clearly moving into a world where the central element of our once heavily (and often clumsily) siloed communication infrastructure and policy (and arguably our economy and society as a whole) is IP connectivity. Though some believe the FCC has outlived its usefulness in that world, my own preference—at least for now—is that the Commission retain sufficient tools and authority to continue serving as the specialized regulatory agency responsible for setting ground rules that help ensure that the public interest is well served during and after this historic and vitally important transition from yesterday’s communication technology and industry structure to tomorrow’s.
