An Abridged History of Open Internet Regulation and Its Policy Implications (w. Kendall Koning)


On June 14, 2016, the United States Court of Appeals for the District of Columbia (D.C. Circuit) upheld the FCC’s 2015 network neutrality regulations, soundly denying myriad legal challenges brought by the telecommunications industry (U.S. Telecomm. Ass’n v. FCC 2016).  Thus, unless the Supreme Court says otherwise, Congress rewrites the rules, or INSERT TRENDING CELEBRITY NAME truly breaks the Internet, we can expect to receive our lawful content without concerns that it will be throttled or that the content provider paid a termination fee.  How did we get here?  As my colleague Kendall Koning, a telecommunications attorney and Ph.D. candidate in the Department of Media and Information at Michigan State, and I lay out in this blog post on the history of net neutrality regulation, it has been a long road.

A short Quello Center Working Paper covering substantially the contents of this blog post is available at:

The most recent D.C. Circuit case represented the third time that FCC network neutrality rules had been before that court, the first two having been struck down on largely procedural grounds.  The FCC’s 2015 Open Internet Order remedied these flaws by formally grounding the rules in Title II of the Telecommunications Act (47 U.S.C. § 201 et seq. 2016) while simultaneously exercising a separate forbearance authority to exempt ISPs from some of the more restrictive rules left over from the PSTN era.

The U.S. Telecommunications Association (USTelecom), a trade group representing the nation’s broadband service providers, along with various other petitioners, had challenged the FCC’s Order on a number of grounds.  USTelecom’s central challenge echoed earlier arguments that ISPs don’t really offer telecommunications, i.e., the ability to communicate with third parties without ISPs altering form and content, but rather an integrated information service, in which ISP servers exercise control over the form and content of information transmitted over the network.  As explained below, this perspective was a historical artifact from the era of America Online and dial-up ISPs, but it had been used successfully at the start of the broadband era.  In a stinging rejection of ISP arguments, the D.C. Circuit not only found that the FCC’s reclassification of Internet access as telecommunications was reasonable and within the bounds of the FCC’s discretionary authority but also offered a strong endorsement of this perspective (U.S. Telecomm. Ass’n v. FCC supra at 25-26):

That consumers focus on transmission to the exclusion of add-on applications is hardly controversial. Even the most limited examination of contemporary broadband usage reveals that consumers rely on the service primarily to access third-party content . . . Indeed, given the tremendous impact third-party internet content has had on our society, it would be hard to deny its dominance in the broadband experience. Over the past two decades, this content has transformed nearly every aspect of our lives, from profound actions like choosing a leader, building a career, and falling in love to more quotidian ones like hailing a cab and watching a movie. The same assuredly cannot be said for broadband providers’ own add-on applications.

The Rules, What Are They Good For?

At present, the FCC states that its current Open Internet rules “protect and maintain open, uninhibited access to legal online content without broadband Internet access providers being allowed to block, impair, or establish fast/slow lanes to lawful content.”  In particular, the present rules make clear the following three conditions, each of which is subject to a reasonable network management stipulation (FCC 2015 ¶¶ 15-18):

  1. No Blocking: A person engaged in the provision of broadband Internet access service . . . shall not block lawful content, applications, services, or non-harmful devices . . . .
  2. No Throttling: A person engaged in the provision of broadband Internet access service . . . shall not impair or degrade lawful Internet traffic on the basis of Internet content . . . .
  3. No Paid Prioritization: A person engaged in the provision of broadband Internet access service . . . shall not engage in paid prioritization . . . [—the] management of a broadband provider’s network to directly or indirectly favor some traffic over other traffic . . . either (a) in exchange for consideration (monetary or otherwise) from a third party, or (b) to benefit an affiliated entity.

These rules are, to a degree, a modern version of common carrier non-discrimination rules adapted for the Internet.  47 U.S.C. § 201(b) requires that “all charges, practices, classifications, and regulations for . . . communication service shall be just and reasonable.”  Whereas in the United States these statutes date back to the Communications Act of 1934, common carrier rules more generally have quite a long history, with precursors going as far back as the Roman Empire (Noam 1994).  One of the purposes of these rules is to protect consumers from what is frequently deemed unreasonable price discrimination: if a product or service is critically important, only available from a very small number of firms, and not subject to arbitrage, suppliers may be able to charge each consumer a price closer to that consumer’s willingness to pay, rather than a single market price.
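The pricing logic at stake can be illustrated with a toy numerical example (ours, with hypothetical numbers, not drawn from the legal or economic record): a supplier restricted to a single market price captures far less revenue than one free to charge each consumer individually.

```python
# Toy example (ours, with hypothetical numbers): revenue under a single
# market price versus perfect price discrimination.

willingness_to_pay = [100, 70, 40, 10]  # four hypothetical consumers

def revenue_at(price, wtp):
    """Revenue from one uniform price: everyone with wtp >= price buys."""
    return price * sum(1 for w in wtp if w >= price)

# A single-price seller does best by picking the revenue-maximizing price.
best_single = max(revenue_at(p, willingness_to_pay) for p in willingness_to_pay)

# A perfectly discriminating seller charges each consumer their exact
# willingness to pay, capturing all of the surplus.
discriminating = sum(willingness_to_pay)

print(best_single)     # 140 (a price of 70 sells to two consumers)
print(discriminating)  # 220
```

Here the single-price seller leaves 80 on the table; common carrier rules, by requiring uniform and reasonable rates, blunt a supplier’s ability to extract each customer’s full willingness to pay.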

Consumers of Internet services are not only individuals but also content providers, like ESPN, Facebook, Google, Netflix, and others, who rely on the Internet to reach their customers.  As a general-purpose network platform, the Internet connects consumers and content providers via myriad competing broadband provider networks, none of which can reach every single consumer (FCC 2010 ¶ 24).  The D.C. Circuit succinctly laid it out, writing (U.S. Telecomm. Ass’n v. FCC, supra at 9):

When an end user wishes to check last night’s baseball scores on ESPN.com, his computer sends a signal to his broadband provider, which in turn transmits it across the backbone to ESPN’s broadband provider, which transmits the signal to ESPN’s computer.  Having received the signal, ESPN’s computer breaks the scores into packets of information which travel back across ESPN’s broadband provider network to the backbone and then across the end user’s broadband provider network to the end user, who will then know that the Nats won 5 to 3.

Thus, when individuals or entities at the “edge” of the Internet wish to connect to others outside their host ISP network, that ISP facilitates the connection by using its own peering and transit arrangements with other ISPs to move the content (data) from the point of origination to the point of termination.

One of the key issues in the network neutrality debate was whether ISPs where traffic terminates should be allowed to offer content providers, for a fee, a way to prioritize their Internet traffic over the traffic of others when network capacity was insufficient to satisfy current demand.  Many worried that structuring Internet pricing in this way would enable price discrimination among content providers (Choi, Jeon, and Kim 2015) and might have several undesirable side effects.

First, welfare might be diminished if prioritization results in a diminished diversity of content (Economides and Hermalin 2012).  Second, because prioritization is only valuable when network demand is greater than its capacity, selling prioritization might create a perverse incentive to keep network capacity scarce (Choi and Kim 2010; Cheng, Bandyopadhyay, Guo 2011).  Third, ISPs who offer cable services or are otherwise vertically integrated into content might use both of these features to disadvantage their competitors in the content markets.  In light of the risk that ISPs pursue price discrimination to defend their vertically integrated content interests, network neutrality can be seen as an application of the essential facilities doctrine from antitrust law (Pitofsky, Patterson, and Hooks 2002) to the modern telecommunications industry.

In response, broadband ISPs have claimed that discriminatory treatment of certain traffic was necessary to mitigate congestion (FTC 2007; Lee and Wu 2009 broadly articulate this argument).[1]  ISPs also claim that regulation prohibiting discriminatory treatment of traffic would dissuade them from continued investment in reliable Internet service provision (e.g., FCC 2010 ¶ 40 and n. 128; FCC 2015 at ¶ 411 and n. 1198) and even the FCC noted that its 2015 net neutrality rules could reduce investment incentives (FCC 2015 at ¶ 410).  Nevertheless, the FCC partially justified the implementation of net neutrality by noting that it believed that any potential investment-chilling effect of its regulation was likely to be short term and would dissipate over time as the marketplace internalized its decision.  Moreover, the FCC claimed that prior time periods of robust ISP regulation coincided with upswings in broadband network investment (FCC 2015 at ¶ 414).

How the Rules Came About

The Commission’s Open Internet rules are far from the first time that the telecommunications industry has faced similar issues.  Half a century ago, AT&T refused to allow the use of telephone attachments manufactured by third parties until it was forced to do so by a federal court (Carter v. AT&T, 250 F.Supp 188, N.D. Tex. 1966). The federal courts also needed to intervene before MCI was allowed to purchase local telephone service from AT&T to complete the last leg of long-distance telephone calls (MCI v. AT&T, 496 F.2d 214, 3rd Cir. 1974).  AT&T’s refusal to provide local telephone service to its long-distance competitor was deemed an abuse of its monopoly in local telephone service to protect its monopoly in long-distance telephone service, and featured prominently in the breakup of AT&T in 1984 (U.S. v. AT&T, 522 F.Supp. 131, D.D.C. 1982).  The vigorous competition in the long-distance market that followed helped drive prices down significantly.

The rules developed for computer networks throughout the FCC’s decades-long Computer Inquiries were also designed to ensure third-party companies had non-discriminatory access to necessary network facilities, and to facilitate competition in the emerging online services market (Cannon 2003).  For example, basic telecommunications services, like dedicated long-distance facilities, were required to be offered separately without being bundled with equipment or computer processing services.  These services were the building blocks upon which the commercial Internet was built.

The rules that came out of the Computer Inquiries were codified by Congress in the Telecommunications Act of 1996, by classifying the Computer Inquiry’s basic services as telecommunications services under the 1996 Act, the Computer Inquiry’s enhanced services as information services under the 1996 Act, and subjecting only the former to the non-discrimination requirements of Title II (FCC 2015 at ¶¶ 63, 311-313; Cannon 2003; Koning 2015).[2]  In particular, 47 U.S.C. Title II stipulates that it is unlawful for telecommunications carriers “to make or give any undue or unreasonable preference or advantage to any particular person, class of persons, or locality, or to subject any particular person, class of persons, or locality to any undue or unreasonable prejudice or disadvantage” (47 U.S.C. § 202(a) 2016).

Internet access specifically was first considered in terms of this classification in 1998.  Alaska Sen. Ted Stevens and others wanted dial-up ISPs to pay fees into the Universal Service Fund, which subsidized services for poor and rural areas.  The FCC ruled that ISPs were information services because they “alter the format of information through computer processing applications such as protocol conversion” (FCC 1998 ¶ 33).  However, to understand this classification, it is important to keep in mind that ISP services at this time were provided using dial-up modems over the PSTN.  In other words, in 1998 the Internet was an “overlay” network—one that uses a different network as the underlying connections between network points (see, e.g., Clark et al. 2006).  If consumers’ connections to their ISPs were made using dial-up telephone connections, then USF fees for the underlying telecommunications network were already being paid through consumers’ telephone bills.

In this context, applying USF fees to both ISPs and the underlying network would have effectively been double taxation.  Additionally, the service dial-up ISPs provided could reasonably be described as converting an analog telecommunications signal (from a modem) on one network (the PSTN) to a digital packet switched one (the Internet), which is precisely the sort of protocol conversion that had been treated as an enhanced service under the Computer Inquiry rules.  The same reasoning does not apply to broadband Internet access service, because it provides access to a digital packet switched network directly rather than through a separate underlying network service (Koning 2015).  However, the FCC continued to apply this classification to broadband ISPs, effectively removing broadband services from regulation under Title II.

Modern policy concerns over these issues reappeared in the early 2000s when the competitive dial-up ISP market was being replaced with the broadband duopoly of Cable and DSL providers.[3]  The concern was that if ISPs had market power, they might deviate from the end-to-end openness and design principles that characterized the early Internet (Lemley and Lessig 2001).  Early efforts focused on preserving competition in the ISP market by fighting to keep last-mile infrastructure available to third-party ISPs as had been the case in the dial-up era.  However, difficult experiences with implementing the unbundling regime of the 1996 Act, differing regulatory regimes for DSL and Cable (local loops for DSL had been subjected to the unbundling provisions of the 1996 Act, but Cable networks were not; an analysis of the consequences of doing this can be found in Hazlett and Caliskan 2008), and the existence of at least duopoly competition between these two incumbents discouraged the FCC from taking that path (FCC 2002, 2005b).  Third-party ISPs tried to argue that Cable modem connections were themselves a telecommunications service and therefore should be subject to the common-carrier provisions of Title II.  The FCC disagreed, pointing to its classification of Internet access as an information service under the 1996 Act.  This classification was ultimately upheld by the Supreme Court in NCTA v. Brand X (545 U.S. 967, 2005).

Unable to rely on the structural protection of a robustly competitive ISP market, the FCC shifted its focus towards the possibility of enforcing an Internet non-discrimination regime through regulation.  During this time period, the meaning and ramifications of “net neutrality,” a term coined in 2003 (Wu 2003), became the subject of vigorous academic debate.  Under the Computer Inquiries, non-discrimination rules had applied to the underlying network infrastructure, but it was also possible for non-discrimination rules to apply to Internet service itself, just as they had been applied to other packet-switched networks (X.25 and Frame Relay) in the past (Koning 2015).  However, there was extensive debate over the specific formulation and likely effects of any such rules, particularly among legal scholars (e.g., Cherry 2006, Sidak 2006, Sandvig 2007, Zittrain 2008, Lee and Wu 2009).  Although to that point there had been no rulemaking proceeding specifically addressing non-discrimination on the Internet, a number of major ISPs had agreed to forego such discrimination in exchange for FCC merger approval (FCC 2015 ¶ 65), and there was still a general expectation that ISPs would not engage in egregious blocking behavior.  In one early case, the Commission fined an ISP for blocking a competitor’s VoIP telephone service (FCC 2005a).  In 2008, the FCC also ruled against Comcast’s blocking of peer-to-peer applications (FCC 2008).  However, the Comcast order was later reversed by the D.C. Circuit (Comcast v. FCC, 600 F.3d 642, D.C. Cir. 2010).

In response to this legal challenge, the FCC initiated formal rulemaking proceedings to codify its network neutrality rules.  In 2010, the FCC released its initial Open Internet Order, which applied the FCC’s Section 706 authority under the Communications Act to address net neutrality directly (FCC 2010 ¶¶ 117-123).  Among other things, the 2010 Open Internet Order adopted the following rule (FCC 2010 ¶ 68):

A person engaged in the provision of fixed broadband Internet service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service.  Reasonable network management shall not constitute unreasonable discrimination.

However, these rules were struck down by the D.C. Circuit in January 2014 (Verizon v. FCC, 740 F.3d 623, D.C. Cir. 2014). The root of the problem was that the Commission had continued to classify broadband Internet access as an “information service” under the 1996 Act, a classification under which its authority was severely limited.  As the court wrote: “[w]e think it obvious that the Commission would violate the Communications Act were it to regulate broadband providers as common carriers. Given the Commission’s still-binding decision to classify broadband providers not as providers of ‘telecommunications services’ but instead as providers of ‘information services,’ [] such treatment would run afoul of section [47 U.S.C §]153(51): ‘A telecommunications carrier shall be treated as a common carrier under this [Act] only to the extent that it is engaged in providing telecommunications services’” (Verizon v. FCC, supra at 650).

The FCC went back to the drawing board and issued its most recent Open Internet Order in 2015.  This time, the Commission grounded its rules in a reclassification of Internet access service as a Title II telecommunications service.  Moreover, unlike in the 2010 Order, which only subjected mobile broadband providers to a transparency and no blocking requirement (FCC 2010 ¶¶ 97-103), the Commission applied the same rules to providers of fixed and mobile broadband in the 2015 Order (FCC 2015 ¶ 14).

In contrast to information services, telecommunications services are subject to Title II common carrier non-discrimination provisions of the Act (FCC 2005b at ¶ 108 and n. 336).  As discussed above, these statutes expressly address the non-discrimination issues central to the network neutrality issue.  The reclassification permitted the Commission to exercise its Section 706 authority to implement the non-discrimination rules codified in Title II (FCC 2015 ¶¶ 306-309, 363, 365, 434).  On June 14, 2016, the D.C. Circuit upheld the FCC’s Open Internet rules as based on this and other statutes from Title II, 47 U.S.C. § 201 et seq.

The Future of Net Neutrality

Although the Commission’s long-evolving Open Internet rules appear to have found a solid legal grounding, it is important to understand that they are not without limits.  Crucially, the rules stipulate what ISPs can and cannot do at termination; they do not restrict the terms of interconnection and peering agreements between ISP networks (FCC 2015, ¶ 30).  In contrast to what HBO’s John Oliver might conclude from the FCC’s recent court victory, the Order does not prevent ISPs such as Comcast from requiring payment for interconnection to their networks; it merely subjects interconnection to the general rule under Title II that the prices charged must be reasonable and non-discriminatory.  Rather than making any prospective regulations on interconnection itself, the FCC’s 2015 Order leaves those issues open for future consideration on a case-by-case basis (FCC 2015, ¶ 203).

Additionally, academics are far from a consensus regarding the welfare implications of net neutrality.  In handing down its judgment, the D.C. Circuit was careful to point out that its ruling was limited to a determination of whether the FCC had acted “within the limits of Congress’s delegation” (U.S. Telecomm. Ass’n v. FCC, supra note 1 at 23) of authority, and not on the economic merits or lack thereof of the FCC’s Internet regulations.[4]  In contrast to some of the aforementioned theoretical economics articles, a number of theoretical studies find that the type of quality-of-service tiering ruled out by the 2015 Order is likely to result in higher broadband investment and increased diversity of content (Krämer and Wiewiorra 2012; Bourreau, Kourandi, Valletti 2015), or for that matter, that under certain circumstances it may not matter at all (Gans 2015; Gans and Katz 2016; Greenstein, Peitz, and Valletti 2016).  The empirical economic literature on net neutrality is at a very early stage and has thus far mostly focused on the consequences of other regulatory policies that might be likened to net neutrality regulation (Chang, Koski, and Majumdar 2003; Crandall, Ingraham, and Sidak 2004; Hausman and Sidak 2005; Hazlett and Caliskan 2008; Grajek and Röller 2012). To the extent that economists and other academicians reach some consensus on certain aspects of broadband regulation in the future, the FCC may be persuaded to update its rules.

Finally, the scope of the existing Open Internet rules remains under debate.  For instance, the public interest group Public Knowledge recently rekindled the debate over whether zero-rating policies (alternatively referred to as sponsored data plans), which exempt certain content from the broadband caps imposed by certain providers, constitute a violation of Open Internet principles (see Public Knowledge 2016; Comcast 2016).  Although the Commission has not ruled such policies out, in the 2015 Order it left the door open to reassess them (FCC 2015, ¶¶ 151-153).

Signaling its concern about such policies, the FCC conditioned its recent approval of the merger between Charter Communications and Time Warner Cable on the parties’ consent not to impose data caps or usage-based pricing (FCC 2016 ¶ 457).  Academic research on this topic remains scarce.  Economides and Hermalin (2015) have suggested that in the presence of a sufficient number of content providers, ISPs able to set a binding cap will install more bandwidth than ones barred from doing so; to our knowledge, economists have not rigorously assessed zero rating, and the FCC continues its inquiry into these policies.

[1] It should be noted that notwithstanding these claims, congestion control is already built into the TCP/IP protocol.  Further, more advanced forms of congestion management have been developed for specific applications, such as buffering and adaptive quality for streaming video, that allow these applications to adapt to network congestion.  Whereas real-time network QoS guarantees could be useful for certain applications (e.g., live teleconferencing), these applications represent a small share of overall Internet traffic.
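The congestion control the footnote refers to can be sketched in a few lines (our illustration; real TCP implementations are considerably more elaborate): TCP follows an additive-increase/multiplicative-decrease (AIMD) pattern, growing the sender’s window while packets are acknowledged and cutting it sharply when loss signals congestion, so senders self-regulate without ISP intervention.

```python
# Illustrative sketch (ours) of TCP-style AIMD congestion control:
# the congestion window grows by one unit per acknowledged round trip
# and is halved on packet loss.

def aimd(events, initial_window=1.0):
    """Return the congestion window after a sequence of 'ack'/'loss' events."""
    window = initial_window
    for event in events:
        if event == "ack":
            window += 1.0                  # additive increase: probe for capacity
        elif event == "loss":
            window = max(1.0, window / 2)  # multiplicative decrease: back off
    return window

# Four clean round trips grow the window from 1 to 5; one loss halves it.
print(aimd(["ack", "ack", "ack", "ack", "loss"]))  # 2.5
```

The sawtooth this produces is the mechanism by which ordinary Internet traffic already shares scarce capacity, which is why ISP claims that paid discrimination is needed for congestion management have been contested.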

[2] The categorizations embodied by the Computer Inquiries decisions initially stemmed from an attempt to create a legal and regulatory distinction between “pure communications” and “pure data processing,” the former of which was initially provisioned by an incumbent regulated monopoly (primarily AT&T), and the latter of which was viewed as largely competitive and needing little regulation.  The culmination of these inquiries implicitly led to a layered model of regulation, dividing communication policy into (i) a physical network layer (to which common carrier regulation might apply), (ii) a logical network layer (to which open access issues might apply), (iii) an applications and services layer, and (iv) a content layer (Cannon 2003 pp. 194-5, Koning 2015 pp. 286-7).

[3] One 1999 study found a total of 6,006 ISPs in the U.S.  See, e.g., Greenstein and Downes (1999) at 195-212.

[4] In particular, the Court wrote, “Nor do we inquire whether ‘some or many economists would disapprove of the [agency’s] approach’ because ‘we do not sit as a panel of referees on a professional economics journal, but as a panel of generalist judges obliged to defer to a reasonable judgement by an agency acting pursuant to congressionally delegated authority.’”


Hugs, Handshakes and High Fives: Strategies for Evaluating the Impact of Digital Inclusion Using Data from the Broadband Technology Opportunity Program


A great deal of funding has been devoted to stimulating the development of broadband Internet infrastructures and services in the United States. Federally funded initiatives have been studied and evaluated through dozens of studies. Jon Gant will discuss the lessons learned from efforts to evaluate the impact of broadband Internet initiatives.

Jon Gant


Biographical Sketch

Dr. Jon Gant is a national leader in the areas of digital inclusion and broadband adoption. Jon is currently a professor at the Graduate School of Library and Information Science at the University of Illinois at Urbana-Champaign, where he serves as the founding Director of the Center for Digital Inclusion (CDI). Under Jon’s leadership, CDI examines the social and economic impact of information and communication technologies globally. Jon is the principal investigator for the Illinois Digital Innovation Leadership Program, a collaboration with University of Illinois Extension and the Champaign-Urbana Community Fab Lab to build local high-tech hubs in Illinois supporting digital fabrication, digital media production, and data analytics. CDI is currently developing new research on smart cities/communities and next-generation Internet applications to serve the public.
Since 2009, Jon has served as a director of Urbana-Champaign Big Broadband (UC2B), a University of Illinois-led intergovernmental consortium with the City of Urbana and City of Champaign operating an Internet service provider startup providing gigabit-speed Internet access to households, businesses, and community anchor institutions in Urbana-Champaign, IL. UC2B received a $22 million Broadband Technology Opportunity Program grant to construct a 187-mile fiber-optic broadband network infrastructure. Jon served as director for business development and was responsible for designing and implementing an innovative data analytics approach for business development, network engineering and construction, and customer relationship management. Since the completion of the BTOP grant in 2014, UC2B has operated as a not-for-profit ISP and is partnering with ITV-3 to expand gigabit Internet, voice, and video services to households in Urbana-Champaign. Jon currently serves as the Chairperson of the UC2B Board of Directors.
Jon served as a research director for the evaluation of the Department of Commerce’s Broadband Technology Opportunity Program (BTOP) as a consultant with ASR Analytics. Jon collaborated with the evaluation team to develop the mixed method research design, train and mentor the research and data analytics team, lead site visits, conduct interviews, brief senior NTIA officials, analyze the social and economic impacts, and co-author the case studies and final reports.
The Institute of Museum and Library Services, the Organization for Economic Co-operation and Development, the International Telecommunication Union, the State of Illinois, Partnership for a Connected Illinois, the American Library Association, and the National Science Foundation, among others, have funded Jon’s research.
Jon received his M.S. and Ph.D. degrees from Heinz College at Carnegie Mellon University, where he studied public policy and information management. Jon earned his undergraduate degree from the University of Michigan.


Deplorable Telephony, by A. Michael Noll


Deplorable Telephony

A. Michael Noll

April 7, 2016

© 2016 AMN

The quality and fidelity of a telephone call in the United States are deplorable. Calls are disconnected and it is challenging to understand the other person. In our rush to cheaper telephone service, quality has been thrown to the wind.

Internet telephony utilizes compression to reduce the bit rate – and compression is a compromise with quality. The speech bits all need to arrive promptly and in the correct sequence, or quality is impaired. The Internet is great for data, but less so for telephony. Computer scientists have never understood telephone service. The result is a system that might be great for computers and the Internet, but bad in terms of quality of service. Internet telephony is still just an interesting experiment – not yet ready for full-time use.

Cellular telephony was a great innovation, and today allows people to keep in contact when on the move. It too uses compression – compression that analyzes the speech signal and reconstructs a facsimile of it for the receiver of the call. As compression is increased to allow more calls in the restricted bandwidth allocated to cellular, quality deteriorates.

I now have difficulties in understanding the speech when I receive a cellular or Internet telephone call. The compression technology makes two-way full-duplex calls impossible. Once the caller starts talking, the connection is seized and it is impossible to interrupt – it becomes a monologue.

Disconnects are routine. The other day, I received a telephone call over the Internet, and it disconnected about every 7 minutes. Disconnects are frequent with cellular calls too. Much time on telephone calls is now spent redialing each other.

Years ago, telephone engineers were concerned about the transmission quality of telephony, and much effort was spent on improving the technology to improve the fidelity. Today the profit motive dominates telephony – and quality has suffered greatly. Today’s younger consumers do not know the past – and what they are missing in terms of quality. Have texting and email replaced voice telephony? Are government agencies giving too much attention to broadband speeds rather than fidelity?

A. Michael Noll



Understanding the Economics of Net Neutrality


Whether you are new to net neutrality and want to better understand the concept or a seasoned researcher who wants an update on open questions, I encourage you to read a recent working paper entitled “Net Neutrality: A Fast Lane to Understanding the Trade-Offs,” by Shane Greenstein, Martin Peitz, and Tommaso Valletti, a group of economists with a track record of researching and writing about Internet economics. Although the article is rather recent, I believe it presents a very good starting point for those interested in taking a deeper dive into both specific theoretical and general empirical issues surrounding net neutrality.

In this blog post, I attempt to outline the article for prospective readers and provide a few potentially useful links. Although I abstract completely from the math and intuition behind the results, the article is extremely straightforward in this regard.

A good starting point for a discussion of net neutrality begins with an understanding of the uses of the Internet. As the authors see it, there are four relevant categories of use for the Internet:

  1. Static web browsing and e-mail (low bandwidth; can tolerate delay). Data flows are largely symmetric across users.
  2. Video downloading (high bandwidth; can tolerate delay).
  3. Voice-over IP, video-talk, video streaming and multi-player gaming (high bandwidth; quality declines with delay). Data flows are mostly unidirectional from content providers to users.
  4. Peer-to-peer applications (high bandwidth; can tolerate delay; can impose delay on others).

Although much economic research tends to abstract from the technical issues surrounding use of the Internet, many studies of net neutrality implicitly model the third variant above, and the authors follow suit. This makes up the bulk of modern Internet traffic: for instance, Netflix, YouTube, and Amazon Prime together have consistently made up approximately 50 percent of all North American Internet traffic as of late.

There are three common arrangements for moving data from content providers to users:

  1. Move data over “backbone lines” (e.g., Level3) and then to local broadband data carriers (e.g., ISPs) where the user is located. This may entail relying on an ISP to get to the backbone line.
  2. Move traffic to servers located geographically close to users: CDNs (e.g., Akamai).
  3. “Collocate” servers inside the network of an ISP. Payment for collocation was at the heart of negotiations between Netflix and Comcast that put net neutrality in the limelight (see also, John Oliver’s response to Tom Wheeler and my tangential reference inspired by Oliver and T-Mobile CEO John Legere).

The authors focus on two definitions of net neutrality: (1) prohibition of payments from content providers to Internet service providers (referred to as one-sided pricing, whereby ISPs can only charge consumers) and (2) prohibition of prioritization of traffic, with or without compensation.  As Johannes Bauer and Jonathan Obar point out, these are not the only alternatives for governing the Internet (see Bauer and Obar 2014).  In a simple world with no competition and homogeneous users, the authors suggest that net neutrality does not affect profits or consumer surplus. They then take a number of real-world considerations into account and suggest the following potential ramifications of imposing net neutrality.

  1. Users and content providers are heterogeneous. In this case, pressure on one side of the market (between ISPs and content providers) can lead to a corresponding change in prices on the other side of the market (between ISPs and users).
    • For instance, when content providers are identical but consumers are heterogeneous, allowing ISPs to charge termination fees to content providers can induce them to lower prices to consumers.
    • On the other hand, when content providers are heterogeneous but consumers are identical, allowing ISPs to charge termination fees can induce inefficient content provider exit.
  2. Some content providers get money from advertising (e.g., Facebook and Google), others charge users directly (e.g., Netflix).
    • The latter situation can complicate the analysis because ISP termination fees may directly impact downstream content prices.
    • The situation is further complicated if content providers can endogenize their mix of advertising and direct revenue (e.g., Pandora).
  3. Competition differs across markets, with multiple ISPs in some markets and this is relevant for studying net neutrality (see Bourreau et al. 2015). I discuss data that could be used to gauge competition in broadband provision at the end of a prior blog post.
  4. Congestion, quality of service, and network and content investment can be impacted by regulation.
    • Long term trade-offs depend on the competitive setting (e.g., horizontal competition, vertical integration).
    • Peak (termination) pricing that might be forbidden under certain forms of net neutrality could lead to welfare-enhancing, congestion-reducing investment.
    • Prioritization can lead to both desirable and undesirable outcomes, depending on both ISP and content provider investment in congestion reduction (for instance, see Choi et al. 2014).
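The pass-through logic in the first bullet under item 1 (sometimes called the "waterbed effect") can be illustrated with a deliberately stylized toy model of my own construction, not the authors': consumers' willingness to pay is uniform on [0, 1], content providers are identical and value each subscriber at a, and a monopoly ISP earns p + t per subscriber, where t ≤ a is a termination fee. Maximizing (p + t)(1 − p) gives p* = (1 − t)/2, so a higher permitted termination fee lowers the consumer price:

```python
# Stylized monopoly ISP with a per-subscriber termination fee t.
# Consumers' willingness to pay is uniform on [0, 1], so demand is q = 1 - p.
# The ISP earns p + t per subscriber and maximizes (p + t) * (1 - p).

def optimal_consumer_price(t: float) -> float:
    """Profit-maximizing consumer price given termination fee t (closed form)."""
    return (1.0 - t) / 2.0

def profit(p: float, t: float) -> float:
    """ISP profit at consumer price p and termination fee t."""
    return (p + t) * (1.0 - p)

p_banned = optimal_consumer_price(0.0)    # one-sided pricing rule: t = 0
p_allowed = optimal_consumer_price(0.3)   # ISP may extract t = a = 0.3

print(p_banned, p_allowed)  # 0.5 0.35
```

Under these assumptions the consumer price falls and more consumers subscribe (q rises from 0.5 to 0.65) when the fee is allowed; the second bullet's inefficient-exit result requires heterogeneous content providers, which this sketch omits.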

The authors caution against broad policy prescriptions, and rightly so, given the present ambiguity surrounding the impacts of net neutrality.  Along the way, the authors inspire a number of open empirical questions that might help policy makers.

  1. How much would allowing or eliminating termination fees affect the price charged to subscribers?
  2. Which net neutrality regulations (when in place) have been binding in practice?
  3. How do net neutrality regulations impact investment in congestion reduction?
  4. Does competition alter the need for net neutrality regulation?

I suspect that the first two questions are fairly difficult to answer from an economics perspective because in large part they depend on significant insider knowledge about contracting among market participants. The Quello staff and I are presently contemplating how to rigorously answer questions (3) and (4). We are very interested in your feedback.


Aleks Yankelevich’s First Blog Post (Chipotle, Market Definition, and Digital Inequality)


Growing up, my parents, brother, and I usually avoided restaurants. For my parents, this was initially out of necessity; as Soviet refugees, they did not have the financial means to eat out. However, even having achieved a modicum of success, my parents are not generally in the habit of frequenting restaurants, having, perhaps out of a lifetime habit, developed a taste for home cooking. Restaurants are exclusively for special occasions.

Thus, having never eaten at a Chipotle Mexican Grill, they were sufficiently impressed by the restaurant’s façade to wish to eat there, but only when a grand occasion merited such an extravagant excursion. Their two sons were informed as much. Naturally, my brother and I (spoiled as we perhaps are) jumped at the chance to poke fun at our parents for placing Chipotle on a pedestal. This is, after all, a restaurant chain that has been the victim of some serious defecation humor, not Eleven Madison Park.

For a number of months, my parents were subjected to text messages and Facebook or Instagram posts with visuals of me or my brother outside various Chipotle restaurants, posing next to Chipotle ads, and in one instance, wearing a Chipotle t-shirt (I have no idea how that shirt found its way into my wardrobe). My parents responded, saying things like (and I could not make this up), “I wish someone would take us to that dream place.”

However, recently, my mother sent a group text directing the family to a news report about dozens of confirmed E. coli cases related to Chipotle (even the FDA got involved) and asking for alternative dining suggestions. The text responses, in order, were as follows:

Me: California Tortilla
My Wife: Taco Bell
My Brother: Sushi
My Mother: Eating In (with picture of latest home cooked meal)
My Brother’s Girlfriend: Bacon

How does a reasonable individual interpret this chain of responses? As an economist with some regulatory and antitrust experience, I found the answer obvious. I sent the following group text (modified for concision): “Has anyone noticed that this text conversation has turned into the classic antitrust debate about appropriate market definition, with each subsequent family member suggesting a broader market?”

Surprisingly, no one else had noticed, but I was asked to unpack my statement a little bit (my mom sent a text that read: “English please.”).

The U.S. Department of Justice and the Federal Trade Commission’s Horizontal Merger Guidelines stipulate that market definition serves two roles in identifying potential competitive concerns. First, market definition helps specify the line of commerce (product) and section of the country (geography) in which a competitive concern arises. Second, market definition allows the Agencies to identify market participants and measure market shares and concentration.

As the Agencies point out, market definition focuses solely on demand substitution factors, i.e., on customers’ ability and willingness to substitute away from one product to another in response to a price increase or a corresponding non-price change (in the case of Chipotle, an E. coli outbreak might qualify as a reduction in quality). Customers generally face a range of potential substitutes, some closer than others. Defining a market broadly to include relatively distant substitutes can lead to misleading market shares. As such, the Agencies may seek to define markets narrowly enough to capture the relative competitive significance of substitute products. For some precision in this regard, I refer the reader to Section 4.1.1 of the Guidelines.
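Section 4.1.1 of the Guidelines formalizes this with the hypothetical monopolist test: a candidate market is properly defined if a hypothetical monopolist over it could profitably impose a small but significant non-transitory increase in price (SSNIP), typically 5 percent. A common back-of-the-envelope companion is critical loss analysis, sketched below with purely hypothetical numbers chosen for illustration:

```python
# Critical loss analysis for a SSNIP (hypothetical monopolist) test.
# A price increase of s (as a fraction of price) on a product with
# margin m (price minus incremental cost, as a fraction of price) is
# unprofitable if the fraction of sales lost exceeds s / (s + m).

def critical_loss(ssnip: float, margin: float) -> float:
    """Largest fraction of sales a hypothetical monopolist can lose
    while still profiting from the price increase."""
    return ssnip / (ssnip + margin)

# Hypothetical numbers: a 5% SSNIP on a product with a 40% margin.
cl = critical_loss(0.05, 0.40)
print(round(cl, 3))  # 0.111

# If demand estimates predict that more than ~11% of sales would divert
# to substitutes (say, from fast-casual Mexican to dining out generally),
# the candidate market is too narrow and should be broadened.
predicted_loss = 0.20
print(predicted_loss > cl)  # True: broaden the candidate market
```

The family text chain above is, in effect, a sequence of ever-larger candidate markets, each proposed because the previous one seemed too narrow to contain all the plausible substitutes.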

As for the group texts above, the reader can now infer how market definition was broadened by each subsequent family member. To reiterate:

Me: California Tortilla (Mexican food in a similar quality dining establishment to Chipotle.)
My Wife: Taco Bell (Mexican . . . inspired . . . dining out, generally.)
My Brother: Sushi (Dining out, generally.)
My Mother: Eating In (Dining, generally.)
My Brother’s Girlfriend: Bacon (Eating.)

Why is market definition relevant to the Quello Center at Michigan State University? As the Center’s website suggests, the Center seeks to stimulate and inform debate on media, communication, and information policy for our digital age. One area where market definition plays such a role is the Quello Center’s broad interest in research on digital inequality.

Digital inequality represents a social inequality with regard to access to or use of the Internet, or more broadly, information and communication technologies (ICTs). Digital inequalities can arise as a result of individualistic factors (income, age and other demographics) or contextual ones (competition where a particular consumer is most likely to rely on ICTs). Market definition is most readily observed in the latter.

For instance, consider the market for fixed broadband Internet. An immediate question that arises is the appropriate geographic market definition. If we rule out individuals’ ability to procure fixed broadband Internet at local hotspots (e.g., libraries, coffee shops) from the relevant market definition, then the relevant geographic market appears to be the home. This is unfortunately a major burden for researchers attempting to assess the state of fixed broadband competition and its potential impact on digital inequality because most market level data in use is at a much more aggregated level than the home. The problem is that when an aggregated market, say a zip code, contains multiple competitors, it is unclear how many of these competitors actually compete in the same home.

Thus far, most studies of fixed broadband competition have been hampered by the issue of geographic market definition. For instance, Xiao and Orazem (2011) extend Bresnahan and Reiss’s (1991, 1994) classic studies of entry and competition to the market for fixed broadband, albeit at the zip code level. Wallsten and Mallahan (2010) use tract-level FCC Form 477 data to test the effects of competition on speeds, penetration, and prices. However, whereas there are approximately 42,000 zip codes and 73,000 census tracts in the United States, there are approximately 124 million households, which implies a fairly large amount of aggregation that can lead researchers to conclude that competition is stronger than it actually is.
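The aggregation problem can be made concrete with a small hypothetical example of my own (the ISPs and coverage footprints are invented for illustration): three ISPs each serve some blocks of a zip code, so a zip-level count suggests three competitors, yet no single home can actually choose among more than two:

```python
# Hypothetical zip code with four census blocks and three ISPs.
# Coverage footprints (which blocks each ISP serves) are made up
# purely to illustrate the aggregation bias.
coverage = {
    "ISP_A": {1, 2},
    "ISP_B": {3, 4},
    "ISP_C": {1},
}
blocks = {1, 2, 3, 4}

# Zip-level measure: any ISP present anywhere in the zip counts as a competitor.
zip_level_competitors = len(coverage)

# Home-level measure: only ISPs actually serving a given block compete there.
per_block = {b: sum(b in footprint for footprint in coverage.values())
             for b in blocks}

print(zip_level_competitors)    # 3
print(per_block)                # {1: 2, 2: 1, 3: 1, 4: 1}
print(max(per_block.values()))  # 2: no home can choose among all three "competitors"
```

Three of the four blocks in this toy zip code are, in fact, local monopolies, which a zip-level count would record as a three-firm market.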

Another question that arises is whether fixed broadband is too narrow a product market and whether the appropriate market definition is simply broadband, which would include fixed as well as mobile broadband. Thus far, because of data limitations, most studies of wireline-wireless substitution have focused mainly on voice rather than on Internet use (e.g., Macher, Mayo, Ukhaneva, and Woroch, 2015; Thacker and Wilson, 2015) and so do not assess whether mobile has become a medium that can mitigate digital inequality. Prieger (2013) has made some headway on this issue by showing evidence that as late as 2010, mobile and fixed broadband were generally not complementary, and that mobile-only broadband subscription was slightly more prevalent in rural areas. However, because of data limitations, Prieger does not estimate a demand system to determine whether fixed and mobile broadband are substitutes or complements, as the voice substitution papers above do.

Luckily, NTIA’s State Broadband Initiative (SBI) and, more recently, the FCC have enhanced researchers’ ability to assess competition at a fairly granular level by providing fixed broadband coverage and speed data at the level of the census block. Similarly, new data on Internet usage from the U.S. Census should allow researchers to better tackle the wireline-wireless substitution issue as well. The FCC has also hopped on the speed test bandwagon by collaborating with SamKnows to measure both fixed and mobile broadband quality. In the former case, the FCC periodically releases the raw data, and I am optimistic that at some point, mobile broadband quality data will be released as well (readers, please correct me if I am glossing over some already publicly available granular data on mobile broadband speed and other characteristics).

The Quello Center staff seeks to combine such data, along with other sources, to study broadband competition and its impact on digital inequality. We welcome your feedback and are presently on the lookout for potential collaborators interested in these issues.



Rural Access to Broadband: the Case in Britain Shines Light on a Pattern


In Britain, a growing gap between urban and rural Internet speeds is damaging business, adding to farming costs, driving young people away from areas in which they have grown up, and deterring retirees from moving to some areas of the country. These are some of the conclusions of our in-depth academic study of Internet access that Bill Dutton, Director of the Quello Center, conducted with the dot.rural RCUK Digital Economy Research Hub at the University of Aberdeen and the Oxford Internet Institute at the University of Oxford.

The report has been published, entitled ‘Two-Speed Britain: Rural Internet Use’. It is based on the most detailed survey so far of rural Internet users. By looking separately at ‘deep rural’ (remote), ‘shallow rural’ (less remote) and urban Internet users, the project was able to reveal the true nature of a rural divide. The report is available online at:

Specifically, Bill and his colleagues found that while in urban areas just six per cent of those sampled had an average broadband speed below 6.3 Mbits/sec, in deep rural areas 45 per cent of people were unable to achieve this modest speed. The lead researcher for dot.rural, Professor John Farrington of the University of Aberdeen and lead author of the report, said that these findings indicated the scale of the problem for deep rural areas in particular, and that the digital gap is currently widening, rather than closing.

“The broadband speed gap between urban and especially deep rural areas is widening: it will begin to narrow as superfast reaches more rural areas, but better-connected, mostly urban, areas will also increase speeds at a high rate. This means faster areas will probably continue to get faster, faster, with slow-speed areas left lagging behind.

“There is a growing social and economic gap between those who are connected and those who are not, the ‘digitally excluded’,” he said.

“It is generally seen in differences between deep (remote) rural Internet use on the one hand, and shallow (less remote) rural and urban Internet use on the other hand.

“It is most pronounced in upland areas in Scotland, Wales and England, but also in many areas in lowland rural Britain. It affects 1.3 million people in deep rural Britain, and many more in less remote areas with poor Internet connection: 9.2 million people live in shallow rural areas.

“Rural businesses are penalised because they are unable to take advantage of the commercial efficiencies afforded by the Internet, as in the creative industries, or have to resort to the use of paper systems which are more costly, as in the farming sector where there is a push to move administration such as sheep registrations online.

“All these issues can potentially create a new tipping point for digitally poorly connected rural areas, including: losing businesses; adding to farming’s costs; making out-migration more likely for young people; and in-migration less likely for retirees or the economically active.”

Professor Farrington added that the issue needed to be addressed if the UK Government agenda of ‘Digital by default’, with government services being delivered online, is to be achieved.

“There is a drive to make public services ranging from registering to vote to applying for a visa or making a tax return digital by default, and simpler, clearer and faster to use.

“Based on the findings of our report, this can’t be achieved until better connection is universal. The ‘universal’ broadband target of 2 Mbits/sec will be inadequate to fulfil this aim.

“An element of policy should be to improve the interface between public, private and community efforts in improving deep rural broadband speeds.”

As one of the authors, and one of the principal researchers in the conduct of the Oxford Internet Surveys (OxIS), I noted that:

“This deep rural divide is not new, but it has been invisible in the statistics until now. With a specially designed sample in 2013, we have been able to uncover this divide and see it in the data. A major investment in OxIS has paid off.”

In my opinion, this helps explain the failure of many other studies to find the rural divide in data gathered by survey researchers. First, we required a disproportionate stratified sample in order to obtain a sufficient number of deep rural residents. It took us years to find the support for this boosted sample, and it would not have been possible without the collaboration with the Aberdeen dot.rural project. Second, the urban-rural divide was masked by the fact that shallow rural residents often have better connectivity than many urban users. Since we had a large enough rural sample, we were able to disaggregate shallow and deep rural residents and see the divide in the data.

This pattern could be the case in many other nations, so I hope researchers in the US and worldwide take notice of these findings in Britain, including England, Wales and Scotland. Moreover, the report provides an array of qualitative examples to help see the role of rural divides not just in the statistics but also in the lives of rural residents.


This most detailed survey so far of Rural Internet Users refines many popular notions of the urban-rural digital divide and allows more detailed evidence of the impact of this divide. By looking separately at ‘deep rural’ (remote), ‘shallow rural’ (less remote) and urban Internet users, we are able to highlight the true nature of this divide.

The online behaviour of those living and working in deep and shallow rural areas reflects constraints on Internet connectivity – the effects of which include an overall limitation on what people are able to do online compared with what they want to do. Those residing in deep rural areas are most likely to be unserved or underserved (with speeds of less than 2.2 Mbit/s) by broadband connectivity and are less likely than others in Britain to be able to engage online.

Ofcom’s mobile telecommunications data, reported at the local authority level, show that mobile Internet (3G and 4G) access in many rural areas remains limited or non-existent, and is not a feasible alternative means of connectivity for those without fixed broadband serving their home or business premises.

See the report at:


Reframing the Broadband Debate in the US


Fiber initiatives, such as Google’s, are being framed by a debate over universal service versus letting the marketplace decide. This, for example, is the framing of today’s WSJ article on Google Fiber rollouts. Such framing creates a bias against these initiatives, as if they were undermining access to the Internet. The issue would be better framed as one of a number of approaches to enabling more competition in the broadband marketplace, which such initiatives are likely to foster, while targeting public policy initiatives on areas that are not being well served by the marketplace. In this instance, the US could learn from the UK, which arguably has more competition for broadband services and is seeing increasing efforts to focus public infrastructure spending on areas, such as deep rural areas, where the marketplace is not effective in providing adequate access to broadband, mobile and other telecommunication services.

See the WSJ Article


Tags: , , , , , ,