
How to assess consolidation in broadcasting.

Monday, October 2nd, 2017

Last week, Vincent (Vinnie) Curren, Principal at Breakthrough Public Media Consulting, Inc., gave an insightful Quello Center presentation about the technological and market potential of ATSC 3.0, an IP-based standard created by the Advanced Television Systems Committee (ATSC).[1] As CNET put it, this standard was created with the idea that most devices would be Internet-connected, enabling a hybrid system whereby the main content (audio and video) would be sent over the air, but other content (advertisements) would be sent over broadband and integrated into the program. This creates some very interesting opportunities for individualized marketing, though as ATSC touts in a somewhat cutesy promotional video, ATSC 3.0 is capable of a lot more.

Vincent Curren

The conversation with Vinnie took an interesting turn (to me anyhow), when he contrasted the state of public broadcasting in Michigan with that in Arkansas. According to Vinnie, public broadcast station management in Michigan is highly balkanized, whereas in Arkansas, it is largely centralized. This implied far fewer individual station engineers and managers in Arkansas, where budget savings from having a smaller bureaucracy are instead applied toward better local news coverage. Effectively, Vinnie was touting the benefits of merger to (state level) monopoly.

This statement immediately set off my antitrust alarm (which sounds like this). After all, even if a merger between two firms that preserves both firms’ products (e.g., broadcast stations) can reduce costs, monopolistic ownership could still raise prices above duopoly levels by internalizing competition between the firms. More specifically, when one firm in a duopoly raises its price, some of its customers will switch to its competitor’s product and vice versa. This competitive threat puts downward pressure on prices relative to what happens under a monopoly. When a monopolist sells both products, a rise in the price of one raises demand for the other, inducing the monopolist to set higher prices unless the merger to monopoly lowers costs sufficiently to offset this anti-competitive effect. The fact that antitrust practitioners seldom consent to a merger to monopoly suggests that the anti-competitive effect usually dominates.
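To make the internalization effect concrete, here is a minimal numerical sketch in Python. It uses a textbook symmetric linear demand system for two substitute products; the parameter values are illustrative assumptions of mine, not numbers drawn from any of the sources discussed in this post.

    # Minimal sketch of "internalizing competition": duopoly vs. merged-monopoly
    # pricing with symmetric linear demand q_i = a - b*p_i + c*p_j (b > c > 0)
    # and constant marginal cost m. All parameter values are hypothetical.

    a, b, c, m = 10.0, 2.0, 1.0, 1.0

    # Duopoly: each owner maximizes (p_i - m)*q_i; solving the symmetric
    # first-order conditions gives the Bertrand-Nash price.
    p_duopoly = (a + b * m) / (2 * b - c)

    # Merged monopolist: maximizes the SUM of both products' profits, so it
    # accounts for the fact that raising one price shifts demand toward the
    # other product it also owns.
    p_monopoly = (a + (b - c) * m) / (2 * (b - c))

    def joint_profit(p):
        """Joint profit when both products are sold at the same price p."""
        q = a - b * p + c * p
        return 2 * (p - m) * q

    print(f"duopoly price:  {p_duopoly:.2f}")   # 4.00
    print(f"monopoly price: {p_monopoly:.2f}")  # 5.50 -- higher, absent any cost savings
    # Quick check that the closed-form monopoly price maximizes joint profit.
    best = max((i / 100 for i in range(100, 1001)), key=joint_profit)
    print(f"grid-search argmax of joint profit: {best:.2f}")  # ~5.50

With these (made-up) numbers, the merged owner prices both products at 5.50 rather than 4.00, even though nothing about costs has changed.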

However, the broadcasting market is different! Broadcasters operate in a multi-sided market that is likely to become even more complicated by the spread of ATSC 3.0. First, consumers of content do not pay broadcasters to watch television. Instead, broadcasters subsidize consumers, but charge advertisers for airing commercials (though in the case of public broadcasting, this is largely supplemented by contributions from viewers like you). Broadcasters may also charge retransmission fees to cable operators, who carry broadcast content and who, unlike broadcasters, generally do charge consumers for content. Moreover, with ATSC 3.0, Internet service providers will have to be involved in this market if advertisements are to be integrated via broadband. This means that the effect of merger operates through a mechanism that is far more complex than the “internalization of competition.”

After Vinnie’s presentation I considered whether economists have attempted to tackle the issue of merger in a multi-sided market. The issue is relatively understudied, but two papers stood out in my literature search:[2]

  1. Chandra, A., & Collard‐Wexler, A. (2009). Mergers in Two‐Sided Markets: An Application to the Canadian Newspaper Industry. Journal of Economics & Management Strategy, 18(4), 1045-1070.
  2. Tremblay, M. J. (2017). Market Power and Mergers in Multi-Sided Markets. Available at https://ssrn.com/abstract=2972701.

Chandra and Collard-Wexler (2009) theoretically explore a two-sided merger from duopoly to monopoly and then use difference-in-differences approaches to empirically investigate mergers by newspaper publishers. As in many other two-sided markets, newspaper publishers offer one side (consumers) a subsidy by charging below cost. This is because newspapers value not only circulation revenue from readers, but also the value that advertisers place on those readers. In the model of Chandra and Collard-Wexler (2009), the key factor that determines how newspaper mergers affect prices is how newspapers value the marginal consumer who is indifferent between two competing newspapers.

If the revenue that this marginal consumer indirectly brings in through advertisement consumption is lower than the loss to the newspaper from subsidizing the consumer’s newspaper purchase, then competing duopolists will set higher circulation prices in equilibrium than a monopoly owner of the two papers (even absent any cost reduction by the monopolist). This result is driven in large part by the authors’ assumption that consumers who are indifferent between the two papers turn out to be less valuable to advertisers, and hence bring in advertising revenues that are lower than the subsidy they enjoy on the paper. The assumption is well motivated in the paper, but may not necessarily apply in broadcasting. Moreover, if the marginal reader provides a positive net value to the newspaper, then mergers can still increase prices (unless cost reduction is sufficient to counteract market power).[3]

Tremblay (2017) sets up a relatively general multi-sided platform model that he uses to measure platform market power and to assess the effect of platform mergers. In this model, multiple platforms that facilitate interactions between distinct groups (e.g., broadcasters might serve consumers of content and advertisers) compete by pricing for each interaction facilitated by the platform.

The model highlights the complexity of analyzing multi-sided markets by recognizing that demand for any interaction is a function not only of the vector of prices involved in that interaction—as in a “one-sided” market—but also of the prices of all other interaction types! Thus, we must consider not only the demand response to a change in price for that interaction, but also the demand response to the numerous potential externalities that might exist (e.g., a negative network externality can occur on media platforms when more advertising diminishes consumer usage of the platform).[4]

As such, in addition to the usual markup that depends on marginal cost and the elasticity of demand, the equilibrium price for a specific interaction is also dictated by what Tremblay refers to as “marginal profit elsewhere,” which consists of the marginal changes that the interaction in question engenders on all other interactions. Moreover, in the case of a multi-platform seller (e.g., a broadcaster that owns multiple stations), as might follow post-merger, the equilibrium price is impacted not only by the standard diversion term that gauges the extent to which a merger can internalize competition, but also by “diversion elsewhere,” which results from multi-sidedness. This “diversion elsewhere” means that some platform prices may decrease post-merger, suggesting that even without cost-reduction benefits, a horizontal platform merger may be efficient.
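Schematically—using my own notation rather than a quotation of Tremblay’s formula, and under the simplifying assumption that profits on other interactions respond to the quantity of interaction k—the resulting pricing condition looks like an ordinary Lerner condition with one extra term:

    \[
      \frac{p_k - c_k + \mathrm{MPE}_k}{p_k} \;=\; \frac{1}{\lvert \varepsilon_k \rvert},
      \qquad
      \mathrm{MPE}_k \;=\; \sum_{j \neq k} \frac{\partial \pi_j}{\partial q_k},
    \]

where p_k, c_k, and ε_k are the price, marginal cost, and own-price demand elasticity for interaction k, π_j is the profit earned on interactions of type j, and MPE_k is the “marginal profit elsewhere.” When MPE_k is large and positive, the platform optimally prices interaction k below the one-sided markup—possibly below cost—which is exactly the subsidization pattern described above.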

Certain factors complicate matters even further in broadcasting. As Vinnie pointed out, a significant part of a local television station’s advertising revenue comes from national advertisers, especially in the larger markets. In many cases, prices are not set unilaterally, but are determined through negotiations with advertisers. A larger multiple-market footprint gives larger broadcast groups leverage when they negotiate pricing for national clients. The effect of a broadcasting merger surely depends on this countervailing bargaining power as well as on whether content consumers view advertising as a good or a bad.

Additionally, a significant part of local station revenue comes from “retransmission consent fees.”[5] If a station opts for retransmission consent, a cable service provider is not required to carry the broadcaster’s channel, but if the cable operator chooses to do so, the broadcaster can demand “retransmission” or rights fees. A large station owner like Sinclair, which operates hundreds of stations, has additional leverage when negotiating retransmission consent fees with a large cable operator like Comcast. Of course, cable companies may pass these fees on to consumers in the form of higher prices. The additional revenue on the broadcaster side may lead to better content, but that will probably come at a higher price for cable service.

ADDENDUM

After reading this post, a former colleague who is very knowledgeable in this area pointed out that there has been some research on the trade-offs of consolidation in two-sided markets and related issues that predates the modern multi-sided market literature.

An early two-sided market analysis by Robert Masson, Ram Mudambi, and Robert Reynolds (1990) shows that competition can sometimes lead to a price increase. Moreover, in their model, competition either makes advertisers better off while making media consumers worse off, or the other way around. An even older related piece by James Rosse (1970) seeks to estimate cost functions in the newspaper industry without cost data. Yet another article concerning the newspaper industry, by Roger Blair and Richard Romano (1993), looks at newspaper monopolists, which, as the authors point out, nevertheless frequently sold newspapers below cost. I suspect that the two-sided logic behind this practice is a lot clearer to economists today than it was in 1993.

 

[1] ATSC is an international, non-profit organization developing voluntary standards for digital television. Member organizations represent the broadcast, broadcast equipment, motion picture, consumer electronics, computer, cable, satellite, and semiconductor industries. See https://www.atsc.org/about-us/about-atsc/.

[2] Other related work includes Filistrucchi et al. (2012) and Song (2013). See Filistrucchi, L., Klein, T. J., & Michielsen, T. O. (2012). Assessing unilateral merger effects in a two-sided market: an application to the Dutch daily newspaper market. Journal of Competition Law and Economics, 8(2), 297-329; Song, M. (2013). Estimating platform market power in two-sided markets with an application to magazine advertising. Available at https://ssrn.com/abstract=1908621.

[3] Note that I have not discussed the impact of merger on the price of advertising. The authors find that the effect on advertising price is indirect: if there is an increase in a newspaper’s circulation price, this will increase the average value to advertisers of that newspaper.

[4] In the words of Tremblay, demand contains an infinite feedback loop because demand for an interaction by platform X is a function of demand for an interaction Y and vice versa.

[5] Commercial stations have a choice between two options with respect to making their programming available to cable and satellite systems. They can exercise “must carry.” If they do this, the cable service provider is required to carry the broadcaster’s primary channel but does not have to pay the broadcaster any rights fees for carrying the channel. Alternatively, stations can exercise “retransmission consent.”



Should the FCC price-cap business broadband?

Saturday, August 26th, 2017

Last month, prolific telecom researcher Susan Crawford wrote about the multi-billion dollar market for business data services (BDS). This market consists of “middle mile” networks used to connect consumers and businesses across cities and neighborhoods. As I explained in a 2016 Quello Center presentation concerning this market (previously referred to as “special access”), these connections, which are owned by “local exchange carriers,” are used, for example, by large businesses to facilitate intranet communication, by cell-phone providers to funnel voice and data traffic between towers, and by banks to connect to their ATMs. Incumbent local exchange carriers (ILECs) also wholesale business data services to rival, competitive local exchange carriers (CLECs), who compete with them head on.

Ajit Pai reciting Lewis Carroll’s parable on the dangers of over-regulation in the communications sector.

Professor Crawford’s concern, and that of others before her,[1] is that unregulated ILECs will exploit their monopoly power to keep prices high and competition out. As Crawford notes, following a massive data collection to analyze the BDS market, in 2016, the Commission appeared on the cusp of extending regulation in this market, but reversed course following the 2017 change in leadership (see 2017 Commission Order here). In particular, the FCC provided that there would be no new regulation of packet-based BDS, and that the continuation of currently regulated TDM-based services would be determined by a market test that Crawford called “unbelievably counterfactual [and] low.”[2]

The regulation in question is price-cap regulation of ILECs’ wholesale and retail prices, which, in markets deemed not to be sufficiently competitive, constrains the prices that ILECs charge for various regulated business-data services that they offer. As my former FCC colleague, Omar Nayeem, and I show in a recent theoretical working paper motivated by the FCC’s BDS proceeding and set to be presented at next month’s TPRC conference, while the case for price-cap regulation appears rather strong, Chairman Pai is justified in his evident concern about the potential deleterious effect of regulation on competition.[3]

In our work, Omar and I study a setting in which an ILEC sells business data services in the (enterprise) “retail” market as well as to a potential CLEC, who purchases access to ILEC networks and/or facilities and resells it in the retail market.[4] Our interest is in the static welfare and dynamic investment ramifications of price-cap regulation in this market relative to what would happen without price-cap regulation.

Static Welfare Results: In our static analysis, we consider the profits and consumer-surplus levels that would prevail if the FCC capped the ILEC’s downstream (retail) and wholesale prices for BDS at marginal cost, as well as those that would prevail without price-cap regulation. Our interest is not in a comparison of these two scenarios—ILEC profits are obviously lower and consumer surplus higher under price-cap regulation—but rather in how the relevant regulatory regime affects competition and ILECs’ incentives to foreclose potential entrants.

To our surprise, we discovered that, when price-cap regulation is not in place in such a market, CLEC entry, at least in theory, leads to what Chen and Riordan (2008) have dubbed “price-increasing competition.” That is, the ILEC ends up setting higher prices following CLEC entry than it would as a monopolist. This occurs because the ILEC can exploit its control over the wholesale price of BDS to force the CLEC to set a high retail price, which mitigates the negative impact of entry on the ILEC’s retail sales. In addition, when the wholesale price is high, the incumbent incurs a greater opportunity cost of lowering its price through lost unit sales to the entrant. Thus, the entrant’s reliance on the incumbent in the upstream market undoes the typical effect of entry, which normally is a disciplining force on the incumbent’s retail price.

Naturally, the price-increasing competition that follows wholesale entry can leave consumers worse off than they might be under an ILEC monopoly. By preventing such price increases, price-cap regulation ensures that consumers are better off (because of their increased choice) following entry than they would be under a price-capped ILEC monopoly. Importantly, we find that price caps should not raise any concerns about foreclosure. In particular, the ILEC does not have an incentive to foreclose the CLEC when it is price-cap-regulated unless it also has that incentive in the absence of price caps. The intuition for this finding is rather straightforward: when the ILEC’s retail prices are capped at marginal cost, the only way it can earn positive economic profit is by selling to the CLEC at wholesale, thereby saving on any downstream retailing costs.

Dynamic Investment Results: Though Omar and I investigate the impact of price-cap regulation on both ILEC and CLEC investment incentives, for brevity (as if this blog post weren’t already long enough), I discuss here the impact of price caps on CLEC investments to self-provision. In other words, Omar and I ask the following question: might there be situations under which the CLEC would choose to invest in its own duplicative network facilities to obviate its reliance on wholesale BDS when the ILEC is not price-capped, but choose to continue to rely on wholesale BDS under price caps? Conversely, what about the other way around?

The answer is ex-ante unclear. Under regulation, the ILEC’s initial downstream price is relatively low (equal to marginal cost) and does not drop following self-provisioning (whereas it would drop without price caps as the ILEC responds to its competitor by lowering its price). This means that, under regulation, self-provisioning does not elicit a major competitive response from the ILEC, giving the CLEC a stronger incentive to do so. However, under regulation, the initial wholesale price is low as well (also equal to marginal cost), so that self-provisioning does not lead to as much of a marginal cost reduction as it would in the scenario without price-caps, in which the wholesale price is initially high. What we find is that the latter effect dominates under most reasonable values of the relevant parameters.

If the CLEC has a sufficiently low fixed cost of self-provisioning, it will do so regardless of the presence of price-cap regulation, whereas, if that fixed cost is sufficiently high, the CLEC will remain a wholesale entrant regardless of the regulatory regime. The significance of our finding is that, under most parameter specifications, there is an intermediate range of fixed costs of self-provisioning whereby a CLEC might invest in the scenario without regulation, but would not do so under price-caps.
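The logic behind that intermediate range can be illustrated in a few lines of Python. The profit numbers below are purely hypothetical placeholders chosen only to reflect the two forces just described (a muted competitive response under regulation, but also a smaller marginal-cost saving from self-provisioning); they are not outputs of our model.

    # Hypothetical CLEC profit levels (placeholders, not model outputs), chosen so
    # that the GAIN from self-provisioning is larger without price caps, as the
    # discussion above suggests.
    PROFITS = {
        # regime: (profit as a wholesale entrant, gross profit after self-provisioning)
        "no price caps": (2.0, 7.0),  # high wholesale price -> large cost saving from own facilities
        "price caps":    (4.0, 6.0),  # wholesale price already at cost -> smaller saving
    }

    def self_provisions(regime: str, fixed_cost: float) -> bool:
        """The CLEC builds its own facilities iff the profit gain covers the fixed cost."""
        wholesale, own_facilities = PROFITS[regime]
        return own_facilities - fixed_cost > wholesale

    for F in (1.0, 3.0, 6.0):  # low, intermediate, and high fixed costs
        print(F, {regime: self_provisions(regime, F) for regime in PROFITS})
    # F=1.0 -> invests under both regimes
    # F=3.0 -> invests only without price caps (the intermediate range in the text)
    # F=6.0 -> invests under neither regime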

The idea that regulation might forestall investment is far from new in telecommunications. In the debate over net neutrality, opponents frequently touted the likely deleterious effect of net neutrality on broadband investment.[5] Similarly, in various proceedings involving roaming by wireless service providers, opponents of FCC roaming regulations were concerned with attempts by rivals to “piggy-back” on their networks.

What we find is that this concern is relevant in the context of price-cap regulation as well. However, whether this concern justifies FCC actions to reduce the scope of price-cap regulation is an empirical question we leave for future researchers. In our work, Omar and I found that price-caps have positive social effects, both static and dynamic (though the latter are not discussed in this post). These benefits must be weighed against the concern about forestalling entrant investment.

 

[1] See, for instance, two posts from the Benton Foundation here and here.

[2] Unlike more recent packet-based technologies, time-division multiplexing (TDM) transmits signals by means of synchronized switches at each end of the transmission line.

[3] Most of the analysis in this work was performed while I was at the Quello Center and Omar was an economist at the FCC. In particular, the analysis was begun during the Wheeler administration and completed during the Pai administration, and represents the opinions of the authors, not the FCC or any of its Commissioners.

[4] Indeed, our theoretical framework applies more broadly to markets in which firms supply their rivals (e.g., energy or water and sewage).

[5] George Ford discusses this concern and takes a straightforward econometrics approach to try to answer this question here and in other Phoenix Center Perspectives.



Dennis Rodman: Cryptocurrency Ambassador

Monday, June 19th, 2017

Last week, Dennis Rodman once again entered the media spotlight by taking a trip to North Korea. In spite of the media hullabaloo over the alleged purpose of Dennis Rodman’s latest round of basketball diplomacy, and apparent subsequent disappointment over the lack of controversy following the trip, the controversial star’s intent seems patently obvious: he is America’s cryptocurrency ambassador.

Media photographs of Rodman consistently pictured him decked out in gear emblazoned with the logo of his sponsor, PotCoin.com. PotCoin.com touts itself as “an ultra-secure digital cryptocurrency, network and banking solution for the $100 billion global legal marijuana industry”—so Bitcoin, but marketed for pot entrepreneurs. The video embedded in the center of its homepage—which I could not help but transcribe below—explains it all:

Image courtesy of BTC Keychain (https://www.flickr.com/photos/btckeychain/20401933105)

“Potcoins are digital coins you could send through the Internet. Potcoins have a number of advantages. Potcoins are transferred directly from person to person via the Net. This means that the fees are much lower. You can use them in every country. Your account cannot be frozen and there are no prerequisites or arbitrary limits [so you can pay for as much pot as you want in a single transaction].  

“Let’s look at how it works. Your Potcoins are kept in your digital wallet on your computer or mobile device. Sending Potcoins is as simple as sending an e-mail and you can purchase anything with Potcoin [the possibilities!!!]. The Potcoin network is secured by thousands of computers using state of the art encryption. Anyone can join the Potcoin network and the software is completely open source so anyone can review the code. Potcoin opens up a whole new platform for innovation.  

“Potcoin is changing finance the same way the Web changed publishing. When everyone has access to a global market, great ideas flourish [insert marijuana joke here]!”

In summary, because Potcoin appears to be Bitcoin for pot and also for things other than pot, it is effectively yet another cryptocurrency alternative to Bitcoin. Indeed, after watching this video, I found an eerily similar one on the homepage of bitcoin.org, with the word “pot” replaced by the word “bit.” In other words, the Potcoin video is a rehash (pun absolutely intended).

Earlier this month, I attended Northwestern University’s annual Internet Commerce and Innovation Conference, where Hanna Halaburda and Gur Huberman, economists who know quite a bit more than the average person on this topic, kindly explained how cryptocurrencies such as Bitcoin actually work.

I will keep the explanation at a bird’s-eye-view level, but a more in-depth discussion is available in Scott Driscoll’s blog post entitled “How Bitcoin Works Under the Hood” and in the paper by Satoshi Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System,” available at bitcoin.org. Hanna Halaburda has also written a book on the topic: “Beyond Bitcoin: The Economics of Digital Currencies.”

Transactions: Suppose that a Bitcoin user, call him Scottie, in possession of a Michael Jordan Hand Signed 50th Anniversary Basketball, is willing to accept a single Bitcoin from existing Bitcoin user, Dennis, who happens to have precisely one Bitcoin (around $2,500 at the time of this writing) in his possession. Any cheaper than that and Dennis may as well steal the ball. As Bitcoin users, both Scottie and Dennis have installed Bitcoin wallet software that allows them to facilitate the transaction.

Upon first usage, the wallet software on both users’ devices downloads a record of every Bitcoin transaction ever made by anyone. This record, called the Blockchain, represents a complete history of incremental groups of completed transactions, referred to as blocks. The Blockchain takes the place of a public ledger for Bitcoin. To undertake the transaction, Dennis must provide a digital signature to authenticate that he is in possession of and can transact Bitcoins. The signature is created with a private component available only to Dennis and is checked against a public component that reveals that Dennis can undertake the transaction and allows him to have the transaction recorded in the Blockchain. Once the transaction is verified by other Bitcoin users (more on these below), it is added to the ever growing Blockchain.

Verification: Here is where it gets weird: groups of transactions directing transfers of Bitcoins are publicly broadcast together with a mathematical puzzle that is, as I understand it, too complex to solve by any means other than computer-driven guesswork. Users, called miners, compete against each other to solve the puzzle associated with each group of transactions. The first miner to solve the puzzle associated with a group of transactions gets to add that group to the Blockchain, thus expanding the history of transactions and enabling the transfer of Bitcoins. The miner is rewarded with Bitcoins—which is how new Bitcoins come into being—and earns the right to charge a transaction fee for verifying a group of transactions (more on this in the abovementioned references). If the transaction between Scottie and Dennis belongs to that particular group, the financial component of their transaction is completed (with some caveats left out) and Scottie can ship the ball to Dennis. However, as I indicate below, Scottie might wish to smoke two joints while he waits for a few additional blocks to be added to the chain before shipping the ball (though I make no personal recommendation as to what Scottie should actually do with his time while he waits).
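For readers curious about what that computer-driven guesswork looks like, here is a stripped-down proof-of-work sketch in Python. The leading-zeros target and the string standing in for a block of transactions are simplifications of mine; real Bitcoin mining hashes a specific block-header format against a far harder numerical target.

    import hashlib

    def mine(block_data: str, difficulty: int = 4):
        """Guess nonces until the block's double SHA-256 hash starts with
        `difficulty` hex zeros (a toy stand-in for Bitcoin's difficulty target)."""
        target = "0" * difficulty
        nonce = 0
        while True:
            payload = f"{block_data}|nonce={nonce}".encode()
            digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
            if digest.startswith(target):
                return nonce, digest
            nonce += 1

    # A toy "block": in reality this would be a structured header containing the
    # previous block's hash, a Merkle root of the transactions, a timestamp, etc.
    block = "prev=000000abc|tx: Dennis pays Scottie 1 BTC"
    nonce, digest = mine(block)
    print(f"found nonce {nonce} giving hash {digest}")
    # Finding the nonce takes many guesses; verifying it takes a single hash,
    # which is why other users can cheaply check a miner's work.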

Fraud: As described, the transaction verification process does not involve a centralized authority such as a bank. This leaves potential room for fraud as follows. If Dennis were a character of ill repute, he might wish to set up an additional transaction sending his one Bitcoin to an alternative account that he possesses. If that transaction is verified prior to his transaction with Scottie, Scottie’s transaction is considered nullified, but this will not be discovered until an attempt is made to add the transaction to the Blockchain. If Scottie, worried that the ball would deflate if not sent in time, sends it before the transaction with Dennis is added to the Blockchain, he would have no recourse unless he personally knows Dennis and can verify the terms of the transaction to a centralized authority. Moreover, even if Scottie could verify these terms, he might learn that Dennis lives in some place like North Korea, which might not offer him any recourse.

Alternatively, Scottie might view the addition of a new block to the Blockchain as sufficient confirmation to send the basketball. But he then risks that Dennis might execute and verify (by mining) additional fraudulent transactions on top of his now fraudulent block, extending the fraudulent chain before other miners solve enough puzzles to extend the true chain far enough for the network to agree that the true chain is indeed the correct representation of the full Blockchain. This is not particularly likely unless Dennis has sufficient computing power to outcompete all active miners until Scottie sends him the ball. Thus far, I have made a stab at explaining buyer-side fraud, but sellers with no reputation may be fraudsters as well, and it is less clear to me how to resolve concerns over seller-side fraud. I found a rather creative suggested solution in an archived Reddit post requiring a buyer to confirm receipt before a seller can access the Bitcoins sent by the buyer, but also preventing the buyer from any further access to the Bitcoins once she has undertaken the transaction (sent the Bitcoins).
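On the question of how long Scottie should wait, the Nakamoto paper cited earlier works out the probability that an attacker controlling a share q of the network’s hashing power ever catches up from z blocks behind. Below is a direct Python transcription of that calculation (the variable names are mine).

    from math import exp, factorial

    def attacker_success(q: float, z: int) -> float:
        """Probability an attacker with hash-power share q overtakes the honest
        chain when the merchant waits for z confirmations (Nakamoto, section 11)."""
        p = 1.0 - q                 # honest miners' share of hashing power
        lam = z * q / p             # expected attacker progress while z honest blocks arrive
        s = 1.0
        for k in range(z + 1):
            poisson = exp(-lam) * lam**k / factorial(k)
            s -= poisson * (1.0 - (q / p) ** (z - k))
        return s

    for z in (0, 1, 2, 6):
        print(f"q=10%, z={z}: {attacker_success(0.10, z):.6f}")
    # With q = 10%, the catch-up probability falls from 1.0 at z=0 to well under
    # 0.1% by z=6 -- hence the usual advice to wait for several confirmations.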

Disclaimer: Here I have only touched upon the bare bones of Bitcoin, but missing from this blog post is a slew of mathematics, economics, and practicalities. For those interested, I advise you to consult one or more of the readings above or to talk to an actual expert on the subject. I do not condone the use of any illicit substance and do not recommend the abuse or misuse of any mind or mood altering drug, whether illicit or not. I am, however, a vocal advocate and frequent user of double entendre.



Is the environment a barrier to infrastructure deployment?

Tuesday, April 18th, 2017

This week, the Quello Center had the privilege of hosting Federal Communications Commission (FCC) Biologist, Dr. Joelle Gehring (event page) to discuss her work on reducing avian collisions with communications towers. Dr. Gehring’s work, which was recently profiled by NPR, presently involves collaborating with federal regulators such as the Federal Aviation Administration (FAA) and communication tower owners to adjust tower lighting in order to reduce migratory bird collisions.

Back of the envelope calculations suggest that the efforts of Dr. Gehring and her colleagues have the potential to reduce avian fatalities by 4-5 million per year in the U.S. and Canada alone. Moreover, as Dr. Gehring pointed out, the efforts that tower owners need to undertake are relatively minimal and result in reduced maintenance and energy costs. Dr. Gehring briefly outlines the steps that tower owners should undertake here (additional FCC guidance here) and a more complete set of guidelines is available from the FAA.

Dr. Gehring’s work reminded me of ongoing FCC developments concerning the broader topic of environmental compliance by tower owners, an issue dealt with by the FCC Wireless Telecommunications Bureau’s Competition & Infrastructure Policy Division (CIPD). In particular, as Bill Dutton, Mitch Shapiro and I discuss in our Wireless Regulatory Analysis (see Section 3.1), the FCC’s rules for environmental review ensure that licensees and registrants take appropriate measures to protect environmental and historic resources. In light of all the other major FCC related developments that are grabbing headlines (Susan Crawford tees up some of these here), one that may have been much less noticed is a soon to be released Notice of Proposed Rulemaking and Notice of Inquiry (NPRM and NOI) concerning the FCC’s environmental and historic review.

Specifically, the NPRM and NOI commence an FCC examination of the regulatory impediments to wireless network infrastructure investment and deployments in an effort to expedite wireless infrastructure deployment. Among the topics discussed by the NPRM are potential changes to the FCC’s approach to the National Environmental Policy Act (NEPA) and the National Historic Preservation Act (NHPA). Presently, a new tower construction requires, among other things, approval from state or local governing authorities as well as compliance with FCC rules implementing NEPA and NHPA.

NEPA compliance requires three different levels of analysis depending on the potential environmental impact. Actions which do not have a significant effect on the (human) environment do not require an environmental assessment or impact statement and are categorically excluded. For actions that are not categorically excluded, a document presenting the reasons why the action will not have a significant effect on the environment must be prepared. A detailed written statement is required when an action is determined to significantly affect the quality of the environment.

Naturally, wireless providers seeking to enhance service and expand throughout the U.S. have raised concerns that the FCC’s environmental and historic preservation review processes increase the costs of deployment and pose lengthy delays. Issues that have been raised include the need to compensate Tribal Nations—some of which claim large geographic areas (including several full states) as their geographic areas of interest—for the review of submissions, the burdens of dual reviews by local authorities and State Historic Preservation Officers (SHPO), and the expense of environmental compliance in cases where, according to wireless providers, the likelihood of harm is minimal.

The NPRM seeks to mitigate some of these issues, asking stakeholders to weigh in on when and what kind of Tribal Nation compensation is justified, how to deal with delays that may result from SHPO review more broadly, and whether or not to include categorical exclusions for small cells and distributed antenna systems (DAS) facilities. These actions may all be well intended, well reasoned, and ultimately in the public interest, but what concerns me is how one-sided the FCC’s NPRM reads at the moment. The NPRM elaborates on and in some instances quantifies the cost of NEPA and NHPA review, but little attention is devoted to attempting to qualify or quantify the potential benefits of these additional review processes, or alternatively the potential costs of NOT undergoing NEPA and NHPA review.

Having learned about this NPRM quite late in the game myself, I noticed that the FCC’s Electronic Comment Filing System contains comments from stakeholders on both sides, including wireless service providers and infrastructure owners on one side along with Native Tribes and parties concerned with historic and environmental preservation on the other (the relevant Docket Numbers are 17-79 and 15-180). However, having searched for the word “comment” throughout the NPRM, I observed that the FCC cites only the former in the NPRM (e.g., see footnote 72 citing Sprint and Verizon, footnote 73 citing the Competitive Carrier Association, Crown Castle, and Verizon, and so on). Is this an indication that the FCC has already made its decision and is simply unveiling the NPRM to indicate that it has thought about the issue before making a ruling? I sincerely hope not, but I am concerned.

Thus, in light of the lack of press concerning this issue, I urge the following: If you are worried about the impact that the expansion of wireless infrastructure has on the environment, please make your voice heard. If you have an opinion regarding the extent to which wireless infrastructure developers and/or regulators should consider historic preservation, please tell regulators why you think historic preservation is important. If you are an expert in either of these issues, please try to quantify your response to the FCC. I can’t stress the last part enough: the FCC needs to perform a cost-benefit analysis, or stated differently, to compare the costs of delays to broadband expansion with the costs of degrading environmental preservation standards. If the FCC can place a dollar amount on both issues, it becomes far more likely that a socially and economically sensible decision will be reached.

Dr. Gehring’s own work, which was started over a decade ago during her time as a Conservation Scientist with Michigan State University, highlights the importance of reaching and listening to stakeholders on all sides of the debate. Through her team’s relentless efforts, regulators were able to come up with environmentally friendly approaches that also reduced costs—a win-win. I hope that regulators can learn from Dr. Gehring’s accomplishments.



Something to consider before restructuring the FCC . . .

Wednesday, January 18th, 2017

The Chief Economist of the Federal Communications Commission is a temporary position—with a term of a year or so of late—typically bestowed on economists with impressive credentials and experience related to media or telecommunications. Having worked at the FCC long enough to overlap with several chief economists, I noticed an interesting pattern. Many join the FCC full of hope—capable as they are—that they will reform the agency to better integrate “economic thinking” into regular policy decisions, but to quote a former colleague, “leave the agency with their sense of humor intact.”

I have heard many a former FCC economist rail against the lack of economic thinking at the FCC, with some former chief economists going very much on the record to do so (for instance, see here and here). Others (not necessarily affiliated with the FCC) have gone as far as to point out that much of what the FCC does or attempts to do is duplicative of the competition policies of the Department of Justice and Federal Trade Commission. These latter points are not a secret. The FCC publicly says so in every major transaction that it approves.

For example, in a transaction that I have had the pleasure to separately write about with one of the FCC’s former chief economists and a number of other colleagues, AT&T’s acquisition of former competitor Leap Wireless (see here and here), the FCC wrote (see ¶ 15):

Our competitive analysis, which forms an important part of the public interest evaluation, is informed by, but not limited to, traditional antitrust principles. The Commission and the Department of Justice (“DOJ”) each have independent authority to examine the competitive impacts of proposed communications mergers and transactions involving transfers of Commission licenses.

This standard language can be found in the “Standard of Review” section in any major FCC transaction order. The difference is that whereas the DOJ reviews telecom mergers pursuant to Section 7 of the Clayton Act, the FCC’s evaluation encompasses the “broad aims of the Communications Act.” From a competition analysis standpoint, a major difference is that if the DOJ wishes to stop a merger, “it must demonstrate to a court that the merger may substantially lessen competition or tend to create a monopoly.” In contrast, parties subject to FCC review have the burden of showing that the transaction, among other things, will enhance existing competition.

Such duplication and the alleged lack of economics at the FCC has led a number of individuals to suggest that the FCC should be restructured and some of its powers curtailed, particularly with respect to matters that are separately within the purview of the antitrust agencies. In particular, recently, a number of individuals in Donald Trump’s FCC transition team have written (read here) that Congress “should consider merging the FCC’s competition and consumer protection functions with those of the Federal Trade Commission, thus combining the FCC’s industry expertise and capabilities with the generic statutory authority of the FTC.”

I do not completely disagree—I would be remiss if I did not admit that the transition team makes a number of highly valid points in its comments on “Modernizing the Communications Act.” However, as Harold Feld, senior VP of Public Knowledge, recently pointed out, efforts to restructure the FCC present a relatively “radical” undertaking, and my main motivation in writing this post is to highlight Feld’s point by reminding readers of a recent court ruling.

In 2007—well before its acquisition of DIRECTV and its offer of unlimited data to customers who bundle its AT&T and DIRECTV services—AT&T offered mobile wireless customers unlimited data plans. AT&T later phased out these plans except for customers who were “grandfathered”—those customers who signed up for an unlimited plan while it was available and never switched to an alternative option. In October 2011, perhaps worried about the implications of unlimited data in a data-hungry world, AT&T reduced speeds for grandfathered customers on legacy plans whose monthly data usage surpassed a certain threshold—a practice that the FTC refers to as data throttling.

The FTC filed a complaint against AT&T under Section 5 of the FTC Act, alleging that customers who had been throttled by AT&T experienced drastically reduced service, but were not adequately informed of AT&T’s throttling program. As part of its complaint, the FTC claimed that AT&T’s actions violated the FTC Act and sought a permanent injunction on throttling and other equitable relief as deemed necessary by the Court.

Now here is where things get interesting: AT&T moved to dismiss on the basis that it is exempt as a “common carrier.” That is, AT&T claimed that the appropriate act that sets out jurisdiction over its actions is the Communications Act, and not the FTC Act. Moreover, AT&T’s position was that an entity with common carrier status cannot be regulated under the section that the FTC brought to this case (§ 45(a)), even when it is providing services other than common carriage services. This led one of my former colleagues to joke that this would mean that if AT&T were to buy General Motors, then it could use false advertising to sell cars and be exempt from FTC scrutiny.

The District Court for the Northern District of California happened to consider this matter after the FCC reclassified mobile data from a non-common carriage service to a common carriage service (in its Open Internet Order), but before the reclassification had gone into effect. The Court concluded that contrary to AT&T’s arguments, “the common carrier exception applies only where the entity has the status of common carrier and is actually engaging in common carrier activity.” Moreover, it denied AT&T’s motion because AT&T’s mobile data service was not regulated as common carrier activity by the FCC when the FTC suit was filed. However, in August 2016, this decision was reversed on appeal by the U.S. Court of Appeals for the Ninth Circuit (see here), which ruled that the common carrier exemption was “status based,” not “activity based,” as the lower court had determined.

Unfortunately, this decision leaves quite a regulatory void. To my knowledge, the FCC does not have a division of Common Carrier Consumer Protection (CCCP), and I doubt that any reasonable individual familiar with FCC practice would interpret the Open Internet Order as an attempted FCC power grab to attempt to duplicate or supplant FTC consumer protection authority. Indeed, the FCC articulated quite the reverse position by recently filing an Amicus Curiae Brief in support of the FTC’s October 2016 Petition to the Ninth Circuit to have the case reheard by the full court.

So what’s my point? Well first, the agencies are not intentionally attempting to step on each other’s toes. By and large, the FCC understands the role of the FTC and the DOJ and vice versa. Were AT&T to acquire General Motors, it is highly probable that given the state of regulation as it stands, employees at the FCC would find it preferable if the FTC continued to oversee General Motors’ advertising practices. A related stipulation applies to the FCC’s competition analysis. Whereas the analysis may be similar to that of the antitrust agencies, it is motivated at least in part by the FCC’s unique mission to establish or maintain universal service, which can lead to different decisions being made in the same case (for instance, whereas the DOJ did not challenge AT&T’s acquisition of Leap Wireless, the FCC imposed a number of conditions to safeguard against loss of service).

Of course, one could argue that confusion stemming from the above case might have been avoided had the FCC never had authority over common carriage in the first place. But anyone making that argument must be cognizant of the fact that although the FTC Act predates the Communications Act of 1934, prior to 1934 it was the Interstate Commerce Act, not the FTC Act, that laid out regulations for common carriers. In other words, legislative attempts to rewrite the Communications Act will necessitate changes in various other pieces of legislation in order to assure that there are no voids in crucial protections for competition and consumers. Thus, to bolster Harold Feld’s points: those wishing to restructure the FCC need to be fully aware of what the FCC actually does and does not do, they must take heed of all the subtleties underlying the legislation that lays the groundwork for the various agencies, and they should be mindful of the potential for interpretation and reinterpretation under the common law aspects of our legal system.



Undesirable Incentives in the Incentive Auction (w. Emily Schaal)

Saturday, December 10th, 2016

Following the 2016 U.S. Presidential election, in a letter to FCC Chairman Wheeler, Republicans urged the FCC to avoid “controversial items” during the presidential transition.  Shortly thereafter, the Commission largely scrubbed its Nov. 17 agenda resulting in perhaps the shortest Open Commission Meeting in recent history.  Start at 9:30 here for some stern words from Chairman Wheeler in response.  Viewers are urged to pay particular attention to an important history and civics lesson from the Chairman in response to a question at 17:20 (though this should not indicate our agreement with everything that the Chairman says).

So what is the Commission to do prior to the transition?  According to the Senate Committee on Commerce, Science, and Transportation, the FCC can “focus its energies” on “many consensus and administrative matters.”  Presumably, this includes the FCC’s ongoing incentive auction, now set for its fourth round of bidding, and subject to its own controversies, with dissenting votes on major items released in 2014 (auction rules and policies regarding mobile spectrum) by Republican Commissioners concerned about FCC bidding restrictions and “market manipulation,” along with a statement by a Democratic Commissioner saying that FCC bidding restrictions did not go far enough.

The Incentive Auction

Initially described in the 2010 National Broadband Plan, the Incentive Auction is one of the ways in which the FCC is attempting to meet modern day demands for video and broadband services.  The FCC describes the auction for a broad audience in some detail here and here.  In short, the auction was intended to repurpose up to 126 megahertz of TV band spectrum, primarily in the 600 MHz band, for “flexible use” such as that relied on by mobile wireless providers to offer wireless broadband.  The auction consists of two separate but interdependent auctions—a reverse auction used to determine the price at which broadcasters will voluntarily relinquish their spectrum usage rights and a forward auction used to determine the price companies are willing to pay for the flexible use wireless licenses.

Repackaging

What makes this auction particularly complicated is a “repackaging” process that connects the reverse and forward auctions.  The current licenses held by broadcast television stations are not necessarily suitable for the type of contiguous blocks of spectrum that are necessary to set up and expand regional or nationwide mobile wireless networks.  As such, repackaging involves reorganizing and assigning channels to the broadcast television stations that remain operational post-auction in order to clear spectrum for flexible use.

The economics and technical complexities underlying this auction are well described in a recent working paper entitled “Ownership Concentration and Strategic Supply Reduction,” by Ulrich Doraszelski, Katja Seim, Michael Sinkinson, and Peichun Wang (henceforth Doraszelski et al. 2016) now making its way through major economic conferences (Searle, AEA).  As the authors point out with regard to the repackaging process (p. 6):

[It] is visually similar to defragmenting a hard drive on a personal computer.  However, it is far more complex because many pairs of TV stations cannot be located on adjacent channels, even across markets, without causing unacceptable levels of interference.  As a result, the repackaging process is global in nature in that it ties together all local media markets.

With regard to the reverse auction, Doraszelski et al. (2016) note that (p. 7):

[T]he auction uses a descending clock to determine the cost of acquiring a set of licenses that would allow the repacking process to meet the clearing target.  There are many different feasible sets of licenses that could be surrendered to meet a particular clearing target given the complex interference patterns between stations; the reverse auction is intended to identify the low-cost set . . . if any remaining license can no longer be repacked, the price it sees is “frozen” and it is provisionally winning, in that the FCC will accept its bid to surrender its license.

The idea is that the FCC should minimize the total cost of licenses sold on the reverse auction while making sure that its nationwide clearing target is satisfied.  As Doraszelski et al. (2016) note, the incentive auction has various desirable properties.  Of particular note is strategy proofness (see Milgrom and Segal 2015), whereby it is (weakly) optimal for broadcast license owners to truthfully reveal each station’s value as a going concern in the event that TV licenses are separately owned.

Strategic Supply Reduction

However, the authors’ main concern in their working paper is that, in spite of strategy proofness, the auction rules do not prevent firms that own multiple broadcast TV licenses from potentially engaging in strategic supply reduction.  As Doraszelski et al. (2016) show, this can lead to some fairly controversial consequences in the reverse auction that might compound any issues that could arise (e.g., decreased revenue) due to bidding restrictions in the forward auction.  Specifically, the authors find that multi-license holders are able to earn large rents from a supply reduction strategy whereby they strategically withhold some of their licenses from the auction to drive up the closing price for the remaining licenses they own.

The incentive auction aside, strategic supply reduction is a fairly common phenomenon in standard economic models of competition.  Consider for instance a typical model of differentiated product competition (or the Cournot model of homogenous product competition).  In each of these frameworks, firms’ best response strategies lead them to set prices or quantities such that the quantity sold is below the “perfectly competitive” level and prices are above marginal cost—thus, firms individually find it optimal to keep quantity low to make themselves (and consequently, their competitors) better off than under perfect competition.

In the incentive auction, a multi-license holder that withdraws a license from the auction could similarly increase the price for the remaining broadcast TV licenses that it owns (as well as the price of other broadcast TV license owners).  However, in contrast to the aforementioned economic models, in which firms effectively reduce supply by underproducing, a firm engaging in strategic supply reduction is left with a TV station that it might have otherwise sold in the auction.  The firm is OK with this if the gain from raising the closing price for other stations exceeds the loss from continuing to own a TV station instead of selling it into the auction.


Example 1

Consider the following highly stylized example of strategic supply reduction: There are two broadcasters, B1 and B2, in a market where the FCC needs to clear three stations (the reverse auction clearing target) and there are three different license “qualities,” A, B, and C, for which broadcasters have different reservation prices and holdings as follows:

License Quality    B1 Quantity    B2 Quantity    Reservation Price ($)
A                  1              2              10
B                  1              0               6
C                  2              1               2

Suppose that the auctioneer does not distinguish between differences in licenses (this is a tremendous simplification relative to the real world).  Consider a reverse descending clock auction in which the auctioneer lowers its price in decrements of $2 starting at $10 (so $10 at time 1, $8 at time 2, and so on until the auction ends), and ceases to lower its price as soon as it realizes that any additional license drop-outs would not permit it to clear its desired number of stations (as would, for instance, happen once the quality A and B licenses have dropped out).  Suppose that a broadcaster playing “truthfully” that is indifferent between selling its license and dropping out remains in the auction (so that, for instance, A quality licenses are not withdrawn until the price falls from $10 to $8).

In a reverse descending clock auction in which broadcasters play “naïve” strategies, each broadcaster would offer all of their licenses and drop some from consideration as the price decreases over time. However, there is another “strategic” option, in which B1 withholds a quality C license from the auction (B1 can do so by either overstating its reservation price for this license—say claiming that it is $10—or by not including it in the auction to begin with):

                      ----------- Naive -----------   ---------------- Strategic ----------------
                      B1 Offered    B2 Offered        B1 Offered    B1 Withheld    B2 Offered
A                     1             2                 1             -              2
B                     1             -                 1             -              -
C                     2             1                 1             1              1
Licenses Auctioned    7                               6

The results of the naïve bidding versus the strategic bidding auction are quite different.  In the naïve bidding auction, the auctioneer can continue to lower its price down to $4 at which point B1 pulls out its B quality license and the auction is frozen (further drop outs would not permit the desired number of licenses to be cleared).  Each broadcaster earns $4 for each quality C license with B1 earning a profit of 2×($4-$2)=$4.

Suppose instead that broadcaster B1 withheld one quality C license.  Then the auction would stop at $8 (because only three licenses remain as soon as the A quality licenses are withdrawn).  Each broadcaster now earns $8 per license sold, with B1 earning a profit of ($8-$6)+($8-$2)=$8.  Moreover, B2 benefits from B1’s withholding, earning a profit of $6 instead of the $2 it earns in the naïve bidding case.  The astute reader will notice that B1 could have done even better by withholding its B quality license instead!  This is a result of our assumption that the auctioneer treats all cleared licenses equally, which is not true in the actual incentive auction.  Finally, notice that even though B2 owns three licenses in this example, strategic withholding could not have helped it more than B1’s strategic withholding did unless it colluded with B1 (this would require B2 to withhold its quality A licenses and B1 to withhold both quality C licenses).
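For readers who want to verify the arithmetic (or experiment with other withholding strategies), here is a toy Python simulation of the descending clock auction in this example.  It keeps the same simplification that the auctioneer treats all cleared licenses identically, and it is emphatically not a model of the FCC’s actual auction software.

    # Toy simulation of Example 1's descending clock auction. The auctioneer
    # treats all cleared licenses identically (same simplification as above);
    # this is NOT a model of the actual FCC reverse auction.

    RESERVATION = {"A": 10, "B": 6, "C": 2}  # reservation price by license quality

    def run_auction(offered, target=3, start=10, step=2):
        """offered: list of (owner, quality). Returns (closing price, licenses sold)."""
        price = start
        active = list(offered)
        while True:
            # Truthful bidders stay in as long as the clock price covers their reservation.
            active = [(owner, q) for (owner, q) in active if RESERVATION[q] <= price]
            if len(active) <= target:
                return price, active  # frozen: no further drop-outs can be tolerated
            price -= step

    def profits(sold, price):
        out = {}
        for owner, quality in sold:
            out[owner] = out.get(owner, 0) + price - RESERVATION[quality]
        return out

    naive = [("B1", "A"), ("B1", "B"), ("B1", "C"), ("B1", "C"),
             ("B2", "A"), ("B2", "A"), ("B2", "C")]
    strategic = list(naive)
    strategic.remove(("B1", "C"))  # B1 withholds one quality C license

    for label, offered in [("naive", naive), ("strategic", strategic)]:
        price, sold = run_auction(offered)
        print(f"{label:9s} closing price = ${price}, profits = {profits(sold, price)}")
    # naive     closing price = $4, profits = {'B1': 4, 'B2': 2}
    # strategic closing price = $8, profits = {'B1': 8, 'B2': 6}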


Evidence of Strategic Supply Reduction

Doraszelski et al. (2016) explain that certain types of geographic markets and broadcast licenses are more suitable for strategic supply reduction.  They write:

First, ideal markets from a supply reduction perspective are [those] in which the FCC intends to acquire a positive number of broadcast licenses and that have relatively steep supply curves around the expected demand level.  This maximizes the impact of withholding a license from the auction on the closing price . . .  Second, suitable groups of licenses consist of sets of relatively low value licenses, some with higher broadcast volume to sell into the auction and some with lower broadcast volume to withhold.

What is perhaps disconcerting is the fact that Doraszelski et al. (2016) have found evidence indicating that certain private equity firms spent millions acquiring TV licenses primarily from failing or insolvent stations in distress, often covering the same market and in most instances on the peripheries of major markets along the U.S. coasts.  Consistent with their model, the authors found that many of the stations acquired had high broadcast volume and low valuations.

Upon performing more in depth analysis that attempts to simulate the reverse auction using ownership data on the universe of broadcast TV stations together with FCC data files related to repacking—the rather interesting details of which we would encourage our audience to read— Doraszelski et al. (2016) conclude that strategic supply reduction is highly profitable.  In particular, using fairly conservative tractability assumptions, the authors found that simulated total payouts increased from $17 billion under naïve bidding to $20.7 billion with strategic supply reduction, with much of that gain occurring in markets in which private equity firms were active.


Example 2

Suppose that, in our example above, the quality C stations held by broadcaster B1 were initially under the control of two separate entities, call them B3 and B4.  Then, if B1, B2, B3, and B4 were all to participate in the auction, strategic withholding on the part of B1 would no longer benefit it.  However, B1 could make itself better off by purchasing one, or potentially both, of the individual C quality licenses held by B3 and B4.  Consider the scenario in which B1 offers to buy B3’s license.  B3 is willing to sell at $4 or more—the amount it would earn under naïve bidding in the auction—and Bertrand-style competition between B3 and B4 will keep B1 from offering more than that.  With a single C quality license in hand, B1 can proceed to withhold either its B or its C quality license, raise the closing price to $8, and benefit both itself and the other broadcasters who make a sale in the auction.


This result, whether or not the FCC anticipated it ex ante, is problematic for several reasons.  First, it raises the prospect that revenues raised in the forward auction will not be sufficient to meet payout requirements in the reverse auction.  Indeed, this has already occurred three times, with the FCC having lowered its clearance target from the initial 126 megahertz to 84 megahertz; we caution, however, that the FCC is currently not permitted to release data regarding the prices at which different broadcasters drop out of the auction, so we cannot verify whether final prices in earlier stages of the reverse auction were affected by strategic supply reduction.  Second, as is the case with standard oligopoly models, strategic supply reduction is beneficial for sellers, but not for buyers or consumers.

Third, strategic supply reduction by private equity firms raises questions about the proper role and regulation of such firms.  Their existence is generally justified by their role in providing liquidity to asset markets.  However, strategic supply reduction seems to contradict this role, particularly if withheld stations are not put to good use—something Doraszelski et al. (2016) do not deliberate on.  Moreover, strategic supply reduction relies on what antitrust agencies often term unilateral effects—that is, supply reduction is individually optimal and does not rely on explicit or tacit collusion.  However, whereas antitrust laws are intended to deal with cases of monopolization and collusion, it does not seem to us that they can easily mitigate strategic supply reduction.

Doraszelski et al. (2016) propose a partial remedy that does not rely on the antitrust laws: require multi-license owners to withdraw licenses in order of broadcast volume, from highest to lowest.  Their simulations show that this leads to a substantial reduction in payouts from strategic bidding (and a glance at Example 1 suggests that it would be effective in preventing strategic supply reduction there as well).  Although this suggestion has unfortunately come too late for the FCC’s Incentive Auction, we hope (as surely do the authors) that it will inform future auctions abroad that seek to learn from the U.S. experience.

This post was written in collaboration with Emily Schaal, a student at The College of William and Mary who is pursuing work in mathematics and economics.  Emily and I previously worked together at the Federal Communications Commission, where she provided invaluable assistance to a team of wireless economists.  




Trends in ISP Internet and Video Subscribership

Wednesday, August 24th, 2016

A number of colleagues and I recently completed work on a large grant proposal, and as is typical with grant proposals and research more broadly, a lot of worthwhile research that went in did not survive the final cut.  In this case, one of the core sources of data that motivated the identification strategy used in our proposal was Internet Service Provider (ISP) data on Internet and video subscribers.  Tables 1 and 2 below, which we did not ultimately submit, display these data for residential and non-enterprise business customers of major publicly traded local exchange carriers (LECs) and cable companies for Internet and video subscriptions, respectively.


Table 1: Internet Subscribers for Major Public ISPs

ISP 2010 2011 2012 2013 2014 2015
Cable ISPs
Cable One Unavailable Unavailable Unavailable 473 489 501
Cablevision 2,653 2,701 2,763 2,780 2,760 2,809
Charter 3,385 3,655 3,978 4,640 5,075 5,572
Comcast 16,985 18,144 19,367 20,685 21,962 23,329
Mediacom 379 383 410 431 449 480
TWC 9,803 10,344 11,395 11,606 12,253 13,313
Local Exchange Carrier ISPs
AT&T 16,309 16,427 16,390 16,425 16,028 15,778
CenturyLink 2,349 5,655 5,851 5,991 6,082 6,048
Cincinnati Bell 256 257 259 268 270 287
EarthLink 2,029 1,636 1,350 1,139 976 821
Frontier 1,719 1,764 1,754 1,867 2,360 2,462
Verizon 8,392 8,670 8,795 9,015 9,205 9,228
Windstream 1,567 1,676 1,645 1,469 1,399 1,333

Notes: All subscriber numbers in thousands.  Data obtained from 2010-2015 SEC Annual Reports (10-K) for each firm.


Table 2: Video Subscribers for Major Public ISPs

ISP 2010 2011 2012 2013 2014 2015
Cable ISPs
Cable One Unavailable Unavailable Unavailable 539 451 364
Cablevision 3,008 2,947 2,893 2,813 2,681 2,594
Charter 4,520 4,314 4,158 4,342 4,419 4,430
Comcast 22,790 22,331 22,844 22,577 22,383 22,347
Mediacom 530 473 442 417 390 375
TWC 12,422 12,061 12,218 11,393 10,992 11,035
Local Exchange Carrier ISPs
AT&T 2,987 3,791 4,536 5,460 5,943 5,614
CenturyLink Unavailable 65 106 175 242 285
Cincinnati Bell 28 40 55 74 91 114
EarthLink 0 0 0 0 0 0
Frontier 310 225 347 385 582 554
Verizon 3,472 4,173 4,726 5,262 5,649 5,827
Windstream 427 441 426 402 385 359

Notes: All subscriber numbers in thousands.  For LEC ISPs, subscriber numbers generally do not include subscriptions to affiliated satellite video programming.  Data obtained from 2010-2015 SEC Annual Reports (10-K) for each firm.

Casual observation of Table 1 shows that the number of Internet subscribers has continued to grow between 2010 and 2015 for most ISPs, whether cable or LEC.  In contrast, casual observation of Table 2 shows that the number of video subscribers has declined for most cable companies, but grown for most LECs over this time-frame, though LEC video subscribership remained substantially below that of the cable companies.

The general trend in Table 1 will not be surprising to Internet researchers or people who have not been living under a rock.  The Internet has been kind of a big deal the last few years.  For example, it has fostered business innovation (Brynjolfsson and Saunders 2010; Cusumano and Goeldi 2013; Evans and Schmalensee 2016; Parker, Van Alstyne, and Choudary 2016), economic growth (Czernich et al. 2011; Greenstein and McDevitt 2009, Kolko 2012), my ability to blog, and your ability to consume the items in the hyperlinks above.

The trends in Table 2 are less well known outside the world of Internet research and business practice and are at least in part attributable to historical developments involving the Internet. As described by Greg Rosston (2009), LECs initially got into the business of high-speed broadband to improve upon their previously offered dial-up Internet services—they were not initially in the multichannel video programming distribution (MVPD) market.  In contrast, the cable companies became ISPs after it became apparent that coaxial cables used to transmit cable television signals could also be used for high-speed broadband.

Thus, whereas cable companies could use their networks to offer subscribers video and Internet bundles, many LECs have had to partner with satellite video programming distributors or resell competitors’ services to be able to advertise a bundled service.  Eventually, some LECs acquired their own video customers, either by purchasing smaller cable competitors in certain areas or by relying on Internet Protocol television (IPTV)—either through construction of fiber networks that deliver service to the home, as was the case with Verizon, or by doing whatever it is that AT&T does.  These developments explain the growth in LEC video customers, whereas competition from LECs, video on demand, and mobile wireless service providers should at least partly explain the decline in cable video subscribership.

To put these trends into perspective, I have included one additional table (Table 3), which displays the ratios of video to Internet subscribers for the ISPs above.  As the table makes evident, the ratios declined for most cable companies and increased for most LECs between 2010 and 2015.  If I had to make an educated guess, I would say the cable company trend will continue in the coming years, but I am less certain that the trend on the LEC side is sustainable as video on demand and mobile wireless continue to eat into the traditional video market.


Table 3: Ratio of Video to Internet Subscribers for Major Public ISPs

ISP 2010 2011 2012 2013 2014 2015
Cable ISPs
Cable One Unavailable Unavailable Unavailable 1.14 0.92 0.73
Cablevision 1.13 1.09 1.05 1.01 0.97 0.92
Charter 1.34 1.18 1.05 0.94 0.87 0.80
Comcast 1.34 1.23 1.18 1.09 1.02 0.96
Mediacom 1.40 1.23 1.08 0.97 0.87 0.78
TWC 1.27 1.17 1.07 0.98 0.90 0.83
Local Exchange Carrier ISPs
AT&T 0.18 0.23 0.28 0.33 0.37 0.36
CenturyLink Unavailable 0.01 0.02 0.03 0.04 0.05
Cincinnati Bell 0.11 0.15 0.21 0.28 0.34 0.40
EarthLink 0.00 0.00 0.00 0.00 0.00 0.00
Frontier 0.18 0.13 0.20 0.21 0.25 0.23
Verizon 0.41 0.48 0.54 0.58 0.61 0.63
Windstream 0.27 0.26 0.26 0.27 0.28 0.27

Notes: Ratios represent the number of residential and non-enterprise business video subscribers relative to the number of high-speed Internet subscribers.  For LEC ISPs, ratios generally do not include subscriptions to affiliated satellite video programming.
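For readers who want to reproduce or extend Table 3, each ratio is simply the corresponding Table 2 entry divided by the Table 1 entry.  A minimal Python sketch, using two rows hand-copied from the tables above (not a data file or API), is:

  # Ratio of video to Internet subscribers (Table 3) computed from Tables 1 and 2.
  # Subscriber counts are in thousands, hand-copied from the tables above.
  internet = {
      "Comcast": [16985, 18144, 19367, 20685, 21962, 23329],
      "Verizon": [8392, 8670, 8795, 9015, 9205, 9228],
  }
  video = {
      "Comcast": [22790, 22331, 22844, 22577, 22383, 22347],
      "Verizon": [3472, 4173, 4726, 5262, 5649, 5827],
  }
  years = list(range(2010, 2016))

  for isp in internet:
      ratios = [round(v / i, 2) for v, i in zip(video[isp], internet[isp])]
      print(isp, dict(zip(years, ratios)))

Running this reproduces, for example, Comcast’s 1.34 in 2010 and Verizon’s 0.63 in 2015.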

If you want to reuse or make fancy graphs out of the data located in this post, please attribute the data to Aleksandr Yankelevich, Quello Center, Michigan State University.




An Abridged History of Open Internet Regulation and Its Policy Implications (w. Kendall Koning)

Friday, June 17th, 2016

On June 14, 2016, the United States Court of Appeals for the District of Columbia Circuit (D.C. Circuit) upheld the FCC’s 2015 network neutrality regulations, soundly denying myriad legal challenges brought by the telecommunications industry (U.S. Telecomm. Ass’n v. FCC 2016).  Thus, unless the Supreme Court says otherwise, Congress rewrites the rules, or INSERT TRENDING CELEBRITY NAME truly breaks the Internet, we can expect to receive our lawful content without concerns that it will be throttled or that the content provider paid a termination fee.  How did we get here?  As my colleague Kendall Koning, a telecommunications attorney and Ph.D. candidate in the Department of Media and Information at Michigan State, and I lay out in this blog post on the history of net neutrality regulation, it has been a long road.

The most recent D.C. Circuit case represented the third time that FCC network neutrality rules had been before that court, the first two having been struck down on largely procedural grounds.  The FCC’s 2015 Open Internet Order remedied these flaws by formally grounding the rules in Title II of the Communications Act (47 U.S.C. § 201 et seq. 2016) while simultaneously exercising a separate forbearance authority to exempt ISPs from some of the more restrictive rules left over from the public switched telephone network (PSTN) era.

The U.S. Telecommunications Association (USTelecom), a trade group representing the nation’s broadband service providers, along with various other petitioners, had challenged the FCC’s Order on a number of grounds.  USTelecom’s central challenge echoed earlier arguments that ISPs do not really offer telecommunications, i.e., the ability to communicate with third parties without the ISP altering form or content, but rather an integrated information service, in which ISP servers exercise control over the form and content of information transmitted over the network.  As explained below, this perspective was a historical artifact of the era of America Online and dial-up ISPs, but it had been used successfully at the start of the broadband era.  In a stinging rejection of ISP arguments, the D.C. Circuit not only found that the FCC’s reclassification of Internet access as telecommunications was reasonable and within the bounds of the FCC’s discretionary authority, but also offered a strong endorsement of this perspective (U.S. Telecomm. Ass’n v. FCC supra at 25-26):

That consumers focus on transmission to the exclusion of add-on applications is hardly controversial. Even the most limited examination of contemporary broadband usage reveals that consumers rely on the service primarily to access third-party content . . . Indeed, given the tremendous impact third-party internet content has had on our society, it would be hard to deny its dominance in the broadband experience. Over the past two decades, this content has transformed nearly every aspect of our lives, from profound actions like choosing a leader, building a career, and falling in love to more quotidian ones like hailing a cab and watching a movie. The same assuredly cannot be said for broadband providers’ own add-on applications.

The Rules, What are They Good For?

The FCC states that its current Open Internet rules “protect and maintain open, uninhibited access to legal online content without broadband Internet access providers being allowed to block, impair, or establish fast/slow lanes to lawful content.”  In particular, the rules make clear the following three conditions, each of which is subject to a reasonable network management stipulation (FCC 2015 ¶¶ 15-18):

  1. No Blocking: A person engaged in the provision of broadband Internet access service . . . shall not block lawful content, applications, services, or non-harmful devices . . . .
  2. No Throttling: A person engaged in the provision of broadband Internet access service . . . shall not impair or degrade lawful Internet traffic on the basis of Internet content . . . .
  3. No Paid Prioritization: A person engaged in the provision of broadband Internet access service . . . shall not engage in paid prioritization . . . [—the] management of a broadband provider’s network to directly or indirectly favor some traffic over other traffic . . . either (a) in exchange for consideration (monetary or otherwise) from a third party, or (b) to benefit an affiliated entity.

These rules are, to a degree, a modern version of common carrier non-discrimination rules adapted for the Internet.  47 U.S.C. § 201(b) requires that “all charges, practices, classifications, and regulations for . . . communication service shall be just and reasonable.”  Whereas in the United States these statutes date back to the Communications Act of 1934, common carrier rules more generally have quite a long history, with precursors going as far back as the Roman Empire (Noam 1994).  One of the purposes of these rules is to protect consumers from what is frequently deemed unreasonable price discrimination: if a product or service is critically important, only available from a very small number of firms, and not subject to arbitrage, suppliers may be able to charge each consumer a price closer to that consumer’s willingness to pay, rather than a single market price.

Consumers of Internet services are not only individuals but also content providers, like ESPN, Facebook, Google, Netflix, and others, who rely on the Internet to reach their customers.  As a general-purpose network platform, the Internet connects consumers and content providers via myriad competing broadband provider networks, none of which can reach every single consumer (FCC 2010 ¶ 24).  The D.C. Circuit succinctly laid it out, writing (U.S. Telecomm. Ass’n v. FCC, supra at 9):

When an end user wishes to check last night’s baseball scores on ESPN.com, his computer sends a signal to his broadband provider, which in turn transmits it across the backbone to ESPN’s broadband provider, which transmits the signal to ESPN’s computer.  Having received the signal, ESPN’s computer breaks the scores into packets of information which travel back across ESPN’s broadband provider network to the backbone and then across the end user’s broadband provider network to the end user, who will then know that the Nats won 5 to 3.

Thus, when individuals or entities at the “edge” of the Internet wish to connect to others outside their host ISP network, that ISP facilitates the connection by using its own peering and transit arrangements with other ISPs to move the content (data) from the point of origination to the point of termination.

One of the key issues in the network neutrality debate was whether ISPs where traffic terminates should be allowed to offer these content providers, for a fee, a way to prioritize their Internet traffic over the traffic of others when network capacity is insufficient to satisfy current demand.  Many worried that structuring Internet pricing in this way would enable price discrimination among content providers (Choi, Jeon, and Kim 2015) and might have several undesirable side effects.

First, welfare might be diminished if prioritization results in a diminished diversity of content (Economides and Hermalin 2012).  Second, because prioritization is only valuable when network demand is greater than its capacity, selling prioritization might create a perverse incentive to keep network capacity scarce (Choi and Kim 2010; Cheng, Bandyopadhyay, Guo 2011).  Third, ISPs who offer cable services or are otherwise vertically integrated into content might use both of these features to disadvantage their competitors in the content markets.  In light of the risk that ISPs pursue price discrimination to defend their vertically integrated content interests, network neutrality can be seen as an application of the essential facilities doctrine from antitrust law (Pitofsky, Patterson, and Hooks 2002) to the modern telecommunications industry.

In response, broadband ISPs have claimed that discriminatory treatment of certain traffic is necessary to mitigate congestion (FTC 2007; Lee and Wu 2009 broadly articulate this argument).[1]  ISPs also claim that regulation prohibiting discriminatory treatment of traffic would dissuade them from continued investment in reliable Internet service provision (e.g., FCC 2010 ¶ 40 and n. 128; FCC 2015 at ¶ 411 and n. 1198), and even the FCC noted that its 2015 net neutrality rules could reduce investment incentives (FCC 2015 at ¶ 410).  Nevertheless, the FCC partially justified the implementation of net neutrality by noting that it believed any potential investment-chilling effect of its regulation was likely to be short term and would dissipate over time as the marketplace internalized its decision.  Moreover, the FCC claimed that prior periods of robust ISP regulation coincided with upswings in broadband network investment (FCC 2015 at ¶ 414).

How the Rules Came About

The Commission’s Open Internet rules are far from the first time that the telecommunications industry has faced similar issues.  Half a century ago, AT&T refused to allow the use of third-party devices attached to its network until it was forced to do so by a federal court (Carter v. AT&T, 250 F.Supp 188, N.D. Tex. 1966). The federal courts also needed to intervene before MCI was allowed to purchase local telephone service from AT&T to complete the last leg of long-distance telephone calls (MCI v. AT&T, 496 F.2d 214, 3rd Cir. 1974).  AT&T’s refusal to provide local telephone service to its long-distance competitor was deemed an abuse of its monopoly in local telephone service to protect its monopoly in long-distance telephone service, and featured prominently in the breakup of AT&T in 1984 (U.S. v. AT&T, 522 F.Supp. 131, D.D.C. 1982).  Subsequent vigorous competition in the long-distance market helped drive down prices significantly.

The rules developed for computer networks throughout the FCC’s decades-long Computer Inquiries were also designed to ensure that third-party companies had non-discriminatory access to necessary network facilities and to facilitate competition in the emerging online services market (Cannon 2003).  For example, basic telecommunications services, like dedicated long-distance facilities, were required to be offered separately without being bundled with equipment or computer processing services.  These services were the building blocks upon which the commercial Internet was built.

The rules that came out of the Computer Inquiries were codified by Congress in the Telecommunications Act of 1996: the Computer Inquiries’ basic services became telecommunications services under the 1996 Act, the enhanced services became information services, and only the former were subjected to the non-discrimination requirements of Title II (FCC 2015 at ¶¶ 63, 311-313; Cannon 2003; Koning 2015).[2]  In particular, Title II stipulates that it is unlawful for telecommunications carriers “to make or give any undue or unreasonable preference or advantage to any particular person, class of persons, or locality, or to subject any particular person, class of persons, or locality to any undue or unreasonable prejudice or disadvantage” (47 U.S.C. § 202(a) 2016).

Internet access specifically was first considered in terms of this classification in 1998.  Alaska Sen. Ted Stevens and others wanted dial-up ISPs to pay fees into the Universal Service Fund, which subsidized services for poor and rural areas.  The FCC ruled that ISPs were information services because they “alter the format of information through computer processing applications such as protocol conversion” (FCC 1998 ¶ 33).  However, to understand this classification, it is important to keep in mind that ISP services at the time were provided using dial-up modems over the PSTN.  In other words, in 1998 the Internet was an “overlay” network—one that uses a different network for the underlying connections between network points (see, e.g., Clark et al. 2006).  If consumers’ connections to their ISPs were made using dial-up telephone connections, then USF fees for the underlying telecommunications network were already being paid through consumers’ telephone bills.

In this context, applying USF fees to both ISPs and the underlying network would have effectively been double taxation.  Additionally, the service dial-up ISPs provided could reasonably be described as converting an analog telecommunications signal (from a modem) on one network (the PSTN) to a digital packet switched one (the Internet), which is precisely the sort of protocol conversion that had been treated as an enhanced service under the Computer Inquiry rules.  The same reasoning does not apply to broadband Internet access service, because it provides access to a digital packet switched network directly rather than through a separate underlying network service (Koning 2015).  However, the FCC continued to apply this classification to broadband ISPs, effectively removing broadband services from regulation under Title II.

Modern policy concerns over these issues reappeared in the early 2000s when the competitive dial-up ISP market was being replaced with the broadband duopoly of Cable and DSL providers.[3]  The concern was that if ISPs had market power, they might deviate from the end-to-end openness and design principles that characterized the early Internet (Lemley and Lessig 2001).  Early efforts focused on preserving competition in the ISP market by fighting to keep last-mile infrastructure available to third-party ISPs as had been the case in the dial-up era.  However, difficult experiences with implementing the unbundling regime of the 1996 Act, differing regulatory regimes for DSL and Cable (local loops for DSL had been subjected to the unbundling provisions of the 1996 Act, but Cable networks were not; an analysis of the consequences of doing this can be found in Hazlett and Caliskan 2008), and the existence of at least duopoly competition between these two incumbents discouraged the FCC from taking that path (FCC 2002, 2005b).  Third-party ISPs tried to argue that Cable modem connections were themselves a telecommunications service and therefore should be subject to the common-carrier provisions of Title II.  The FCC disagreed, pointing to its classification of Internet access as an information service under the 1996 Act.  This classification was ultimately upheld by the Supreme Court in NCTA v. Brand X (545 U.S. 967, 2005).

Unable to rely on the structural protection of a robustly competitive ISP market, the FCC shifted its focus toward the possibility of enforcing an Internet non-discrimination regime through regulation.  During this period, the meaning and ramifications of “net neutrality,” a term coined in 2003 (Wu 2003), became the subject of vigorous academic debate.  Under the Computer Inquiries, non-discrimination rules had applied to the underlying network infrastructure, but it was also possible for non-discrimination rules to apply to Internet service itself, just as they had been applied to other packet-switched networks (X.25 and Frame Relay) in the past (Koning 2015).  However, there was extensive debate over the specific formulation and likely effects of any such rules, particularly among legal scholars (e.g., Cherry 2006, Sidak 2006, Sandvig 2007, Zittrain 2008, Lee and Wu 2009).  Although to that point there had been no rulemaking proceeding specifically addressing non-discrimination on the Internet, a number of major ISPs had agreed to forgo such discrimination in exchange for FCC merger approval (FCC 2015 ¶ 65), and there was still a general expectation that ISPs would not engage in egregious blocking behavior.  In one early case, the Commission fined an ISP for blocking a competitor’s VoIP telephone service (FCC 2005a).  In 2008, the FCC also ruled against Comcast’s blocking of peer-to-peer applications (FCC 2008).  However, the Comcast order was later reversed by the D.C. Circuit (Comcast v. FCC, 600 F.3d 642, D.C. Cir. 2010).

In response to this legal challenge, the FCC initiated formal rulemaking proceedings to codify its network neutrality rules.  In 2010, the FCC released its initial Open Internet Order, which applied the FCC’s Section 706 authority under the Communications Act to address net neutrality directly (FCC 2010 ¶¶ 117-123).  Among other things, the 2010 Open Internet Order adopted the following rule (FCC 2010 ¶ 68):

A person engaged in the provision of fixed broadband Internet service, insofar as such person is so engaged, shall not unreasonably discriminate in transmitting lawful network traffic over a consumer’s broadband Internet access service.  Reasonable network management shall not constitute unreasonable discrimination.

However, these rules were struck down by the D.C. Circuit in January 2014 (Verizon v. FCC, 740 F.3d 623, D.C. Cir. 2014). The root of the problem was that the Commission had continued to classify broadband Internet access as an “information service” under the 1996 Act, where its authority was severely limited.  As the court wrote: “[w]e think it obvious that the Commission would violate the Communications Act were it to regulate broadband providers as common carriers. Given the Commission’s still-binding decision to classify broadband providers not as providers of ‘telecommunications services’ but instead as providers of ‘information services,’ [] such treatment would run afoul of section [47 U.S.C §]153(51): ‘A telecommunications carrier shall be treated as a common carrier under this [Act] only to the extent that it is engaged in providing telecommunications services (Verizon v. FCC, supra at 650).’”

The FCC went back to the drawing board and issued its most recent Open Internet Order in 2015.  This time, the Commission grounded its rules in a reclassification of Internet access service as a Title II telecommunications service.  Moreover, unlike the 2010 Order, which subjected mobile broadband providers only to transparency and no-blocking requirements (FCC 2010 ¶¶ 97-103), the 2015 Order applies the same rules to providers of fixed and mobile broadband (FCC 2015 ¶ 14).

In contrast to information services, telecommunications services are subject to the Title II common carrier non-discrimination provisions of the Act (FCC 2005b at ¶ 108 and n. 336).  As discussed above, these statutes expressly address the non-discrimination issues central to the network neutrality debate.  The reclassification permitted the Commission to exercise its Section 706 authority to implement the non-discrimination rules codified in Title II (FCC 2015 ¶¶ 306-309, 363, 365, 434).  On June 14, 2016, the D.C. Circuit upheld the FCC’s Open Internet rules as based on this and other statutes from Title II, 47 U.S.C. § 201 et seq.

The Future of Net Neutrality

Although the Commission’s long-evolving Open Internet rules appear to have found a solid legal grounding, it is important to understand that they are not without limits.  Crucially, the rules stipulate what ISPs can and cannot do at termination; they do not restrict the terms of interconnection and peering agreements among ISP networks (FCC 2015, ¶ 30).  In contrast to what HBO’s John Oliver might conclude from the FCC’s recent court victory, the Order does not prevent ISPs such as Comcast from requiring payment for interconnection to their networks; it merely subjects interconnection to the general rule under Title II that the prices charged must be reasonable and non-discriminatory.  Rather than making any prospective regulations on interconnection itself, the FCC’s 2015 Order leaves those issues open for future consideration on a case-by-case basis (FCC 2015, ¶ 203).

Additionally, academics are far from a consensus regarding the welfare implications of net neutrality.  When handing down its judgment, the D.C. Circuit was careful to point out that its ruling was limited to a determination of whether the FCC had acted “within the limits of Congress’s delegation” (U.S. Telecomm. Ass’n v. FCC, supra note 1 at 23) of authority, and not on the economic merits (or lack thereof) of the FCC’s Internet regulations.[4]  In contrast to some of the aforementioned theoretical economics articles, a number of theoretical studies find that the type of quality-of-service tiering ruled out by the 2015 Order is likely to result in higher broadband investment and increased content diversity (Krämer and Wiewiorra 2012; Bourreau, Kourandi, Valletti 2015), or for that matter, that under certain circumstances it may not matter at all (Gans 2015; Gans and Katz 2016; Greenstein, Peitz, and Valletti 2016).  The empirical economic literature on net neutrality is at a very early stage and has thus far mostly focused on the consequences of other regulatory policies that might be likened to net neutrality regulation (Chang, Koski, and Majumdar 2003; Crandall, Ingraham, and Sidak 2004; Hausman and Sidak 2005; Hazlett and Caliskan 2008; Grajek and Röller 2012). To the extent that economists and other academicians reach some consensus on certain aspects of broadband regulation in the future, the FCC may be persuaded to update its rules.

Finally, the scope of the existing Open Internet rules remains under debate.  For instance, the public interest group Public Knowledge recently rekindled the debate over whether zero-rating policies (alternatively referred to as sponsored data plans), which exempt certain content from broadband data caps imposed by certain providers, violate Open Internet principles (see Public Knowledge 2016; Comcast 2016).  Although the Commission has not ruled such policies out, it left the door open in the 2015 Order to reassess them (FCC 2015, ¶¶ 151-153).

Signaling its concern about such policies, the FCC conditioned its recent approval of the merger between Charter Communications and Time Warner Cable on the parties’ consent not to impose data caps or usage-based pricing (FCC 2016 ¶ 457).  Academic research on this topic remains scarce.  Economides and Hermalin (2015) have suggested that in the presence of a sufficient number of content providers, ISPs able to set a binding cap will install more bandwidth than ones barred from doing so; to our knowledge, economists have not rigorously assessed zero rating, and the FCC continues its inquiry into these policies.


[1] It should be noted that notwithstanding these claims, congestion control is already built into the TCP/IP protocol.  Further, more advanced forms of congestion management have been developed for specific applications, such as buffering and adaptive quality for streaming video, that allow these applications to adapt to network congestion.  Whereas real-time network QoS guarantees could be useful for certain applications (e.g., live teleconferencing), these applications represent a small share of overall Internet traffic.

[2] The categorizations embodied by the Computer Inquiries decisions initially stemmed from an attempt to create a legal and regulatory distinction between “pure communications” and “pure data processing,” the former of which was initially provisioned by an incumbent regulated monopoly (primarily AT&T), and the latter of which was viewed as largely competitive and needing little regulation.  The culmination of these inquiries implicitly led to a layered model of regulation, dividing communication policy into (i) a physical network layer (to which common carrier regulation might apply), (ii) a logical network layer (to which open access issues might apply), (iii) an applications and services layer, and (iv) a content layer (Cannon 2003 pp. 194-5, Koning 2015 pp. 286-7).

[3] One 1999 study found a total of 6,006 ISPs in the U.S.  See, e.g., Greenstein and Downes (1999) at 195-212.

[4] In particular, the Court wrote, “Nor do we inquire whether ‘some or many economists would disapprove of the [agency’s] approach’ because ‘we do not sit as a panel of referees on a professional economics journal, but as a panel of generalist judges obliged to defer to a reasonable judgment by an agency acting pursuant to congressionally delegated authority.’”




Annie Waldherr Seminar on Food Safety in Online Issue Networks

Thursday, May 5th, 2016

Annie Waldherr presented a joint Media & Information and Quello Center seminar entitled “Discussing food safety in online issue networks: Empirical results and methodological prospects.”  Her talk highlighted that civil society actors concerned about food safety issues—GMOs, pesticide residues, and antibiotic-resistant superbugs—build coalitions that can eventually result in movement networks.  These connections can be empirically observed in online issue networks—sets of interlinked websites treating a common issue.

To assess the mobilization potential of actor coalitions, Annie and her colleagues study the extent to which actors link to each other and talk about the same topics.  They combine hyperlink network analysis with probabilistic topic modeling to gain empirical insight into both the structural and the content dimensions of the issue networks.  Preliminary results for the US indicate a densely connected issue network spanning from central challenger actors to the websites of mass media and political actors. Many issues, such as contaminated food and regulation, genetically modified food, organic farming, and sustainable agriculture, spread through major parts of the network. A smaller number of issues, such as the use of antibiotics or the pollution of drinking water, remain restricted to specific parts of the network.
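For readers unfamiliar with hyperlink network analysis, a minimal Python sketch of the structural half of such an approach might look like the following; the websites are hypothetical and this is not the authors’ data or pipeline.

  # Toy hyperlink issue network built from hypothetical sites (not the study's data).
  import networkx as nx

  # Directed edges: the first site links to the second.
  links = [
      ("ngo-foodsafety.example", "news-daily.example"),
      ("ngo-foodsafety.example", "agency-food.example"),
      ("blog-gmo.example", "ngo-foodsafety.example"),
      ("news-daily.example", "agency-food.example"),
  ]
  G = nx.DiGraph(links)

  # Simple structural measures of the kind used to characterize issue networks.
  print("density:", nx.density(G))
  print("in-degree centrality:", nx.in_degree_centrality(G))

In the actual study, structural measures like these are combined with probabilistic topic models of the sites’ text to see which parts of the network discuss which issues.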

Dr. Annie Waldherr is a Researcher in the Division of Communication Theory and Media Effects, Institute for Media and Communication Studies at Freie Universität Berlin.  Annie has used agent-based modeling (ABM) and network analysis to study computer-mediated communications processes. Her recent work using ABM was published in the Journal of Communication.





The Un#ballogetic World of Wireless Ads

Friday, April 1st, 2016

I belong to that rare breed of human that enjoys commercials.  As a social scientist with an interest in the impact of advertising on consumer behavior, I often find myself, possibly to the chagrin of my wife (though she has not complained), assessing commercials out loud.  Are they informative?  Are they persuasive, or do they simply attempt to draw attention to the good in the ad?  Might they unintentionally lead to brand confusion?  Most importantly, are they funny?

Thus, having also spent some time among wireless regulators, I cannot help but comment on the recent spate of wireless attack ads perpetrated by three of the U.S. nationwide mobile wireless providers.  The initial culprit this time around was Verizon Wireless, which determined that balls were a good way to represent relative mobile wireless performance among the nationwide competitors.  Shortly thereafter, Sprint aired a commercial using bigger balls, while T-Mobile brought in Steve Harvey to demand that Verizon #Ballagize.

There are myriad takeaways from these commercials.  First, at least on the face of it, the nationwide mobile wireless providers appear to be fiercely competitive with one another.  It would be interesting to compare advertising-to-sales ratios for this industry with those of other industries in the U.S., though at the time of writing I did not have access to such data (Ad Age appears to be a convenient source).  Moreover, the content of the commercials suggests that although price continues to be an important factor (Sprint did not veer away from its “half-off” theme in its ball commercial), quality competition that allows competitors to differentiate their product (and in doing so, justify higher prices) remains paramount.

Unfortunately, as a consumer, it is difficult for me to properly assess what these commercials say about wireless quality.  There are a number of points at play here.

  1. The relative comparisons are vague: When Sprint says that it delivers faster download speeds than the other nationwide providers, what does that mean?  When I zoom into the aforementioned Sprint commercial at the 10-second mark, the bottom of the screen shows, “Claim based on Sprint’s analysis of average LTE download speeds using Nielsen NMP data (Oct. thru Dec. 2015).  NMP data captures real consumer usage and performance for downloads of all file sizes greater than 150kb.  Actual speeds may vary by location and device capability.”  As a consumer who spends most of his time in East Lansing, MI, I am not particularly well informed by a nationwide average.  Nor do I know anything about the statistical validity of the data (though here I am willing to give Nielsen the benefit of the doubt).  I would also be interested to know, when Sprint states that it delivers faster download speeds, how much faster they are (in absolute terms) relative to the next fastest competitor.
  2. The small print is too small: Verizon took flak from its competitors for using outdated data in its commercial.  This is a valid claim.  Verizon’s small print (13 second mark in its commercial) states that RootMetrics data is based on the 1st half of 2015.  But unless I am actually analyzing these commercials as I am here, and viewing them side by side, it is difficult for me to make the comparison.
  3. The mobile wireless providers constantly question one another’s credibility, and this is likely to make me less willing to believe that they are indeed credible. Ricky Gervais explains this much better than I do: Ricky Gervais on speed, coverage, and network comparisons.

Alas, how is a consumer supposed to assess wireless providers?  An obvious source is Consumer Reports, but my sense, without paying for a subscription, is that its ratings largely depend on expert reviews rather than data analysis (someone correct me if I am wrong).  Another source, if one is not in the habit of paying for information about rival firms, is the FCC.  The FCC’s Wireless Telecommunications Bureau publishes an “Annual Report and Analysis of Competitive Market Conditions with Respect to Mobile Wireless.”  The most recent, Eighteenth Report, contains a lengthy section on industry metrics with a focus on coverage (see Section III) as well as a section on service quality (see Section VI.C).  The latter section focuses on nationwide average speeds according to FCC Speed Test data as well as data from the private sources Ookla, RootMetrics (yes, the one mentioned in those commercials), and CalSPEED (for California only).  If you are interested, be sure to check out the Appendix, which has a wealth of additional data.  For those who don’t want to read through a massive pdf file, there is also a set of Quick Facts containing some of the aforementioned data.

However, what I think is lacking is speed data at a granular level.  When analyzing transactions or assessing competition, the FCC does so at a level far more granular than the state, and rightly so, as consumers do not generally make purchasing decisions across an entire state, let alone the nation as a whole.  Service where consumers spend the majority of their time is a major concern when deciding on wireless quality.  In a previous blog post I mentioned that the FCC releases granular fixed broadband data, but unfortunately, as far as I am aware, this is still not the case for wireless, particularly with regard to individual carrier speed data.

The FCC Speed Test App provides the FCC with such data.  The Android version, which I have on my phone, provides nifty statistics about download and upload speed as well as latency and packet loss, with the option to parse the data by mobile or WiFi connection.  My mobile-only data for the past month showed a download speed above 30 Mbps.  Go Verizon!  My WiFi average was more than double that.  Go SpartanNet!  Yet my observations do not allow me to compare data across providers in East Lansing, and my current contract happens to expire in a couple of weeks.  The problem is that in a place like East Lansing, and particularly in more rural areas of the United States, not enough people have downloaded the FCC Speed Test App, and I doubt that the FCC would be willing to report firm-level data at a level deemed not to have statistical validity.

For all I know, the entire East Lansing sample consists of my roughly twice-daily automatic tests, which, aggregated over a quarter of a year, amount to fewer than 200 observations for Verizon Wireless.  Whether this constitutes a statistically adequate sample depends on the dispersion of the speed observations for a non-parametric measure such as the median speed, and on the assumed distribution for mean speeds.  I encourage people to try this app out.  The more people who download it, the more likely the FCC is to have enough data to be comfortable reporting it at a level that makes it reliable as a decision-making tool.  Perhaps then the FCC will also redesign the app to report competitor speeds for the relevant geographic area.
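As a rough illustration of the sample-size concern, here is a small Python sketch that bootstraps a confidence interval for the median download speed from a hypothetical sample of about 200 observations.  The speeds are simulated for illustration only; they are not FCC Speed Test data, and this is not how the FCC validates its reports.

  # Bootstrap a confidence interval for the median download speed from a small sample.
  # The speeds below are simulated for illustration; they are not FCC Speed Test data.
  import random
  import statistics

  random.seed(0)
  speeds = [random.gauss(30, 8) for _ in range(200)]  # hypothetical Mbps observations

  def bootstrap_median_ci(sample, reps=5000, alpha=0.05):
      """Percentile-bootstrap confidence interval for the sample median."""
      n = len(sample)
      medians = sorted(
          statistics.median(random.choices(sample, k=n)) for _ in range(reps)
      )
      lo = medians[int(reps * alpha / 2)]
      hi = medians[int(reps * (1 - alpha / 2)) - 1]
      return round(lo, 1), round(hi, 1)

  print("median:", round(statistics.median(speeds), 1))
  print("95% CI for median:", bootstrap_median_ci(speeds))

If the resulting interval is wide relative to the speed differences carriers advertise, a sample of this size would not be very informative for choosing among them.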

