I thought I’d write a short follow-up in response to the exchange of comments following my recent post on the impacts of the FCC’s Open Internet order on ISP investment.
I very much appreciate the responses to my post, especially from Hal Singer and Mark Jamison, whose work was the target of my sometimes insufficiently respectful criticism. It helped me understand the substantive issues better and also reminded me that respectful dialog on important and controversial issues may not always be easy, but is certainly worth the effort…and that I’m still somewhat haltingly learning that lesson.
I especially appreciated the content and tone of Mark’s comment, including:
I won’t make the claim that my approach revealed reality and that yours did not. We have too little information for that. And even if we had sufficient data for a proper study, there would still be errors. That said, I would be glad to work with you and/or your colleagues on a study once sufficient data are available.
In my view this pretty well describes the aim of the Quello Center’s investigation of this policy issue: to gather as much useful data as we can and to apply to it a mix of the most useful modes of analysis to understand what’s going on and to refine the models we use to understand and predict policy outcomes. I hope to be part of that process, contributing my best skills and strengths, being humble enough to acknowledge their limits, and learning from others who have different expertise and perspectives.
Mark’s comment reminds me of the story about the blind men trying to describe the elephant, all of them describing it differently based on which part of the massive creature they were feeling with their hands. While I wouldn’t describe all of us focused on this issue as blind, I think it’s fair to say that we (and, as Mark notes, our methods) all suffer from some form of perceptual limitation. Some of us are nearsighted, others farsighted and perhaps others see clearly only with one eye…and occasionally we all may feel compelled to close our eyes to avoid seeing something that makes us very uncomfortable.
Though when it comes to policy research we may never be able to see and agree on “the truth,” my hope is that the Quello Center’s research team can be part of an effort to carefully study this and other policy “elephants” from as many angles as we can, and work together to understand their key dynamics, while at the same time remembering the value of respectful dialog, even when a voice inside our head might be telling us “that guy describing the elephant’s tail must be a fool or a scoundrel.”
In the past week or so I’ve seen several articles that remind me how important the Quello Center’s empirically-grounded study of net neutrality impacts is for clarifying what these impacts will be—especially since net neutrality is one of those policy topics where arguments are often driven by ideology and/or competing financial interests.
As far as I can tell, this series of articles began with an August 25 piece written by economist Hal Singer and published by Forbes under the following headline: Does The Tumble In Broadband Investment Spell Doom For The FCC’s Open Internet Order? Per his Forbes bio, Singer is a principal at Economists Incorporated, a senior fellow at the Progressive Policy Institute, and an adjunct professor at Georgetown University’s McDonough School of Business.
Singer’s piece was followed roughly a week later by two op-ed pieces published on the American Enterprise Institute’s web site. The title of the first AEI piece, authored by Mark Jamison, was Title II’s real-world impact on broadband investment. This was followed a day later by Bronwyn Howell’s commentary Title II is hurting investment. How will – and should – the FCC respond?
What struck me about this series of op-ed pieces published by economists and organizations whose theoretical models and policy preferences appear to favor unregulated market structures was that their claims that “Title II is hurting investment” were all empirically anchored in Singer’s references to declines in ISP capital spending during the first half of 2015. As a member of the Quello Center’s research team studying the impacts of net neutrality, I was intrigued, and eager to dig into the CapEx data and understand its significance.
While my digging has only begun, what I found reminded me how much the communication policy community needs the kind of fact-based, impartial and in-depth empirical analysis the Quello Center has embarked upon, and how risky it is to rely on the kind of ideologically-driven analysis that too often dominates public policy debates, especially on contentious issues like net neutrality.
My point here is not to argue that there are clear signs that Title II will increase ISP investment, but rather that claims by Singer and others that there are already signs that it is hurting investment are not only premature, but also based on an incomplete reading of evidence that can be uncovered by careful and unbiased review of publicly available information.
I hope to have more to say on this topic in future posts, but will make a few points here.
The crux of Singer’s argument is his observation that capital spending declined fairly dramatically at a number of major ISPs during the first half of 2015, dragging down the entire sector’s spending for that period (though it’s not clear from the article, my sense is that Singer’s reference to “all” wireline ISPs covers only the industry’s larger players and says nothing about investment by smaller companies and the growing ranks of publicly and privately owned FTTH-based competitors). He then briefly reviews and dismisses potential alternative explanations for these declines, concluding that the only remaining logical cause is ISPs’ response to the FCC’s Open Internet Order (bolding is mine):
AT&T’s capital expenditure (capex) was down 29 percent in the first half of 2015 compared to the first half of 2014. Charter’s capex was down by the same percentage. Cablevision’s and Verizon’s capex were down ten and four percent, respectively. CenturyLink’s capex was down nine percent. (Update: The average decline across all wireline ISPs was 12 percent. Including wireless ISPs Sprint and T-Mobile in the sample reduces the average decline to eight percent.)…
This capital flight is remarkable considering there have been only two occasions in the history of the broadband industry when capex declined relative to the prior year: In 2001, after the dot.com meltdown, and in 2009, after the Great Recession. In every other year save 2015, broadband capex has climbed, as ISPs—like hamsters on a wheel—were forced to upgrade their networks to prevent customers from switching to rivals offering faster connections.
What changed in early 2015 besides the FCC’s Open Internet Order that can explain the ISP capex tumble? GDP grew in both the first and second quarters of 2015. Broadband capital intensity—defined as the ratio of ISP capex to revenues—decreased over the period, ruling out the possibility that falling revenues were to blame. Although cord cutting is on the rise, pay TV revenue is still growing, and the closest substitute to cable TV is broadband video. Absent compelling alternatives, the FCC’s Order is the best explanation for the capex meltdown.
I haven’t had a chance to carefully review the financial statements and related earnings material of all the companies cited by Singer, but did take a quick look at this material for AT&T and Charter since, as he notes, they experienced by far the largest percentage drop in spending. What I found doesn’t strike me as supporting his conclusion that the decline was network neutrality-driven. Instead, in both cases it seems to pretty clearly reflect the end of major investment projects by both companies and related industry trends that seem to have nothing to do with the FCC’s Open Internet order.
My perspective on this is based on statements made by company officials during their second quarter 2015 earnings calls, as well as capex-related data in their financial reporting.
During AT&T’s earnings call, a Wall Street analyst asked the following question: “[T]he $18 billion in CapEx this year implies a nice downtick in the U.S. spending, what’s driving that? Are you finding that you just don’t need to spend it or are you sort of pushing that out to next year?” In his response to the question, John Stephens, the company’s CFO, made no mention of network neutrality or FCC policy decisions. Instead he explained where the company was in terms of key wireless and wireline strategic network investment cycles (bolding is mine):
Well, I think a couple of things. And the simplest thing is to say [is that the] network team did a great job in getting the work done and we’ve got 300, nearly 310 million POPs with LTE right now. And we are putting our spectrum to use as opposed to building towers. And so that aspect of it is just a utilization of spectrum we own and capabilities we have that don’t require as much CapEx. Secondly, the 57 million IP broadband and what is now approximately 900,000 business customer locations passed with fiber. Once again, the network guys have done a great job in getting the Project VIP initiatives completed. And when they are done…the additional spend isn’t necessary, because the project has been concluded not for lack of anything, but for success.
Later on in the call, another analyst asked Stephens “[a]s you look out over the technology roadmap, like 5G coming down the pipeline, do you anticipate that we will see another period of elevated investment?”
While Stephens pointed to a potential future of moderated capital spending, he made no reference to network neutrality or FCC policy, focusing instead on the investment implications of the company’s (and the industry’s) evolution to software-defined networks.
I would tell you that’s kind of a longer term perspective. What we are seeing is our move to get this fiber deep into the network and getting LTE out deep into the wireless network and the solutions that we are finding in a software-defined network opportunity, we see a real opportunity to actually strive to bring investments, if you will, lower or more efficient from historical levels. Right now, I will tell you that this year’s investment is going to be in that $18 billion range, which is about 15%. We are certainly – we are not going to give any guidance with regard to next year or the year after. And we will give an update on this year’s guidance, if and when in our analyst conference if we get that opportunity. With that being said, I think there is a real opportunity with some of the activities are going on in software-defined networks on a longer term basis to actually bring that in capital intensity to a more modest level.
Charter’s large drop in capital spending appears to be driven by a similar “investment cycle” dynamic. During its 2Q15 earnings call, CFO Christopher Winfrey noted that Charter’s year-over-year decline in total CapEx “was driven by the completion of All-Digital during the fourth quarter of last year,” referring to the company’s migration of its channel lineup and other content to an all-digital format.
A review of the company’s earnings call and financial statements suggests that a large portion of the “All-Digital” capital spending was focused on deploying digital set-top boxes to Charter customers, resulting in a precipitous decline in the “customer premise equipment” (CPE) category of CapEx. According to Charter’s financial statements, first-half CPE-related CapEx fell by more than half, or $341 million, from $626 million to $285 million. Excluding this sharp falloff in CPE spending driven by the end of Charter’s All-Digital conversion, the remainder of the company’s capital spending was actually up 3% during the first half of 2015. And this included a 7% increase in spending on “line extensions,” which Charter defines as “network costs associated with entering new service areas.” It seems to me that, if Charter was concerned that the Commission’s Open Internet order would weaken its business model, it would be cutting rather than increasing its investment in expanding the geographic scope of its network.
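As a quick sanity check on the arithmetic, a few lines of Python reproduce the CPE percentages from the figures quoted above out of Charter’s financial statements (the dollar amounts are the ones cited; the variable names are just my own labels):

```python
# Back-of-envelope check of the Charter CapEx figures discussed above.
# Dollar amounts (in $ millions) are the first-half CPE capex numbers
# cited from Charter's financial statements.

cpe_1h2014 = 626  # first-half 2014 CPE capex, $M
cpe_1h2015 = 285  # first-half 2015 CPE capex, $M

drop = cpe_1h2014 - cpe_1h2015          # absolute decline
pct_drop = drop / cpe_1h2014 * 100      # percentage decline

print(f"CPE capex fell ${drop}M, a {pct_drop:.0f}% decline")
# -> CPE capex fell $341M, a 54% decline
```

That roughly 54% CPE decline, set against the 29% decline in total CapEx, is consistent with the point being made here: the non-CPE remainder of Charter’s capital spending held up or grew.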
To understand the significance of Charter’s spending decline, I think it’s important to note that its 29% decline in first half total CapEx was driven by a 54% decline in CPE spending, and that the company’s non-CPE investment—including line extensions—actually increased during that period. I found it odd that, even as he ignored this key dynamic for Charter, Singer seemed to dismiss the significance of Comcast’s CapEx increase during the same period by noting that it was “attributed to customer premises equipment to support [Comcast’s] X1 entertainment operating system and other cloud-based initiatives.”
I also couldn’t help noticing that, in his oddly brief reference to the nation’s largest ISP, Singer ignored the fact that every category of Comcast’s capital spending increased by double digits during the first half of 2015, including its investment in growth-focused network infrastructure, which expanded 24% from 2014 levels. Comcast’s total cable CapEx was up 18% for the first half of the year, while at Time Warner Cable, the nation’s second largest cable operator, it increased 16%.
While these increases may have nothing to do with FCC policy, they seem very difficult to reconcile with Singer’s strongly asserted argument, especially when coupled with the above discussion of company-specific reasons for large CapEx declines at AT&T and Charter. As that discussion suggests, the reality behind aggregated industry numbers (especially when viewed through a short-term window of time) is often more complex and situation-specific than our economic models and ideologies would like it to be. This may make our research harder and messier to do at times, but certainly not less valuable. It also speaks to the value of longitudinal data collection and analysis, to better understand both short-term trends and those that only become clear over a longer term. That longitudinal component is central to the approach being taken by the Quello Center’s study of net neutrality impacts.
One last general point before closing out this post. I didn’t see any reference in Singer’s piece or the AEI-published follow-ups to spending by non-incumbent competitive providers, including municipally and privately owned fiber networks that are offering attractive combinations of speed and price in a growing number of markets around the country. While this category of spending may be far more difficult to measure than investments by large publicly traded ISPs, it may be quite significant in relation to public policy, given its potential impact on available speeds, prices and competitive dynamics.
Expect to see more on this important topic and the Quello Center’s investigation of it in later posts, and please feel free to contribute to the discussion via comments on this and/or future posts.
Though neither is as essential as drinking water, I consider both electricity and Internet access to be core infrastructure: sectors with high fixed costs that provide general-purpose support for the requirements of modern life.
As the Internet expands to include networked “things” as well as people, and electric utilities pursue a future “smart grid,” the scope of these two sectors’ activity seems likely to overlap more than ever in the future. One example of this is the widespread deployment by utilities of so-called “smart meters” connected via the utility’s own (usually wireless) network. Though it’s unclear how this strategy will evolve (and, as I explain here, its most typical forms may be seriously flawed), it strikes me as a step toward creating an Internet of Things (IoT) in which utility-controlled devices provide key control functions.
This movement of electric utilities into the communication space raises some interesting questions about how all this will and should evolve. Two related issues come initially to mind.
One is the general approach we take as a society to ensure that companies operating in core infrastructure sectors serve the public interest. In the telecom and Internet space, the model that has evolved over the years is mainly one of encouraging facilities-based competition (as I discussed in an earlier post, the Lansing area [and my own household] is currently benefiting from such competition). And though the FCC recently added to this a Title II-based framework for enforcing key non-discrimination requirements, the Commission emphasized its intention to make heavy use of regulatory forbearance in achieving this policy goal. As this communication policy model has evolved at the federal level, the regulatory powers of state and local government have been steadily and considerably diminished in the telecom/Internet space.
In contrast, the locus of public interest policy enforcement in the electric power industry has remained state-level regulatory agencies. And while there has been an increase in choice and competition in the provision of centralized power generation, there is virtually no facilities-based competition in retail distribution, and considerable disagreement about how to handle distributed generation, most notably rooftop solar. Whereas the Internet has enabled its users to become producers as well as consumers of online-enabled content and services, the evolution to an “empowered prosumer” future is less clear and increasingly contentious in the electric power industry.
This ties into another issue, which relates to the control of customer premise devices and the customer-related information they collect and transmit to the utility. In a paper entitled “Getting Smarter About the Smart Grid,” Timothy Schoechle suggested the electric utility industry look to the telecom sector for guidance rather than move forward with their current “smart meter” plans.
The demarcation between monopoly utility space and customer market space was clarified over two decades ago in the case of wire-line telephone monopolies with the decisions and policy changes culminating in the divestiture of AT&T. One result was enormous…growth in new markets for premises equipment and services. The electricity grid today is facing the same demarcation inflection point as the telephone network experienced. The gateway belongs to the consumer, not to the electric utility. A demarcation and opening of the consumer premises space to market competition could unleash the creative energy of the consumer electronics industry, the home appliance industry, and others. Full two-way smart grid communication among premises-based systems, products, and services—facilitated by a consumer-controlled gateway device and already available data services (i.e., Internet and Web access via DSL, cable, fiber, etc.) —would free the smart grid from the stifling control of utilities and their proprietary meter-reading networks…
Data to be collected by the smart meters, including intimate personal details of citizens’ lives, is not necessary to the basic purpose of the smart grid—supply/demand balancing, demand response (DR), dynamic pricing, renewable integration, or local generation and storage—as promoters of the meters, and uninformed parties, routinely claim. Instead, the meter data is serving to create an extraneous market for consumer data mining and advertising (i.e., “big data” analytics).
A concern I have regarding the electric power industry’s evolution to a “smart grid” and a renewable-rich, low-carbon future is whether traditional state regulatory agencies have the multidisciplinary expertise, regulatory systems and broad vision needed to guide this evolution wisely. That concern is heightened by signs that global warming is reaching dangerous and potentially irreversible levels faster than expected, and by the fact that the smart grid represents a significant extension of the power industry into the communication sector. (Note: some cities, including Lansing and East Lansing, the Quello Center’s home base, are served by municipally owned utilities, which are typically not subject to the same level of state regulatory oversight as investor-owned utilities.)
Perhaps this is an appropriate topic for study by MSU’s Institute of Public Utilities, which includes faculty from multiple colleges, including some with close ties to the Quello Center and/or the Media and Information department in MSU’s College of Communication Arts and Sciences. With so much at stake for the future of our nation’s electrical grid, the IoT, and society as a whole, it certainly seems a topic worthy of the kind of multidisciplinary analysis that IPU seems well positioned to convene, perhaps in collaboration with the communication policy-focused Quello Center.
As I explained in an earlier blog post, I believe potential risks associated with our ever-more-intensive use of wireless devices, and the expanding body of research suggesting such risks do exist, are being unwisely ignored in our rush to enjoy the benefits of these technologies.
As that earlier post suggested, I see a need for:
1) A much-expanded program of research focused on understanding and mitigating EMF-related health risks, especially for vulnerable populations;
2) A fact-based and respectful discussion of research and public policy issues related to such risks.
Given my interest in this subject, I thought I might learn something useful from a recent NYT article by Carol Pogash entitled Cellphone Ordinance Puts Berkeley at Forefront of Radiation Debate. But, as I read the piece, I discovered that it used a mix of questionable journalistic practices to convey a different and dismayingly biased message, one worthy of a headline more like “Crazy Berkeley Radicals Once Again Deny Science by Legislating Onerous Anti-Business Regulations Based on Unfounded Sky-Is-Falling Cancer Claims.”
The first set of warning lights flashed when I read Pogash’s lead paragraph:
Leave it to Berkeley: This city, which has led the nation in passing all manner of laws favored by the left, has done it again. This time, the city passed a measure — not actually backed by science — requiring cellphone stores to warn customers that the products could be hazardous to their health, presumably by emitting dangerous levels of cancer-causing radiation.
While the first sentence may be true (I can’t tell without some independent research, since Pogash doesn’t cite any other “left-favored” laws passed by the city), it’s worth noting because it sets an effective perceptual frame for communicating the “Crazy Fact-Denying Berkeley Radicals Are At It Again” message. And it is especially potent in that regard when followed by two much more egregious statements in that paragraph. These claim that the Berkeley ordinance:
• is “not actually backed by science” and;
• warns customers that cellphones “could be hazardous to their health, presumably by emitting dangerous levels of cancer-causing radiation.”
It seems to me that, after reading the first paragraph, uninformed readers might reasonably assume that the ordinance’s disclosure requirement made unsubstantiated claims that cellphone use will expose users to “dangerous levels of cancer-causing radiation.”
While launching a piece on the Berkeley ordinance this way may have been fun to write (it is, after all, entertainingly written), I was surprised and disappointed to see that it survived the Times’ editorial process. I found the “presumably” phrase particularly egregious in that regard, since Pogash’s “presumption” had no relation to the actual content of the ordinance, though many readers would not have known it when they read the lead paragraph (or possibly even after reading the whole article).
In her second paragraph, Pogash retains her dismissive tone by referring to the new Berkeley law as the “so-called” Right to Know ordinance, whose provisions she cites partially and in pieces, rather than in whole.
After reading these first two paragraphs, it seems reasonable to me that uninformed readers would assume that the ordinance and its requirements made some extreme statements about health risks, including some direct reference to increased cancer risk. All the more so after Pogash begins her third paragraph by focusing readers’ attention back to her preferred angle on the story, the “there’s no definitive proof of cancer risk” straw man.
Even supporters of the ordinance acknowledge that there is no definitive scientific link between cellphones and cancer, although they argue that it may take years for cancers to develop. The American Cancer Society says that cases of people developing cancer after carrying cellphones may be coincidental or anecdotal.
In the second part of that paragraph Pogash somewhat grudgingly acknowledges the actual content and purpose of the ordinance by adding that:
But some supporters are undeterred, noting that there are similar warnings in the fine print of cellphone manuals, and that the Berkeley warning is carefully written to reflect that language, albeit with additional cautionary words.
But right after doing that she jumps back to a poorly documented and superficial “debunking” of claims regarding potential links between cellphone use and cancer, which takes up the bulk of the remaining column inches devoted to the piece.
What she does not make clear is what Larry Lessig, who supported the Berkeley city council with pro bono legal services, explained in a blog post published shortly after the ordinance was passed:
The Quello Center is anchored in the Department of Media and Information at MSU, which provides attractive opportunities for doctoral studies with a multidisciplinary faculty. The Media and Information Studies (MIS) PhD program at Michigan State University would like to hear from outstanding students who wish to join an innovative interdisciplinary program of study at the intersection of the social sciences and humanities, including the study of socio-technical systems, Internet studies, and communication policy and governance. Our diverse faculty develops and applies transformative knowledge about media and society and about evolving information and communication technologies. The program engages students to become active scholars, teachers, and leaders in the media and information fields.
Offered jointly by the Department of Advertising+Public Relations, the School of Journalism, and the Department of Media and Information, the MIS PhD program gives students access to fifty PhD faculty with research interests that span important current and emerging issues in media and information studies. Students get involved early on in projects, complementing theoretical coursework with hands-on research experiences.
Particularly strong research interests of our faculty include:
New this year is an option for undergraduates interested in pursuing advanced studies through an accelerated MA program leading to early admission to our PhD program.
Over 90 percent of our current students are supported by graduate teaching and research assistantships with generous stipends of $2,000 per month, tuition remission, and health benefits. University fellowships, dissertation completion fellowships, summer research fellowships, and stipends for travel to academic conferences round out the resources available to students.
Over three-fourths of our graduates are hired into faculty positions at four-year institutions at graduation. They are found in departments of mass media, journalism, advertising, public relations, and information studies across the United States and around the world. Others go on to careers in public service and business.
The National Communication Association (NCA), in their most recent doctoral program reputation study, ranked MSU’s Ph.D. programs as No. 1 in educating researchers in communication technology, and in the top four in mass communication. Michigan State University ranked third in frequency of faculty publication in communication in a study reported in The Electronic Journal of Communication in 2012. QS World University rankings place MSU 11th in the world and 7th in the U.S. in communication and media studies.
East Lansing and the greater Lansing area offer a vibrant cultural environment with easy access to a variety of outdoor activities and the scenic beauty of our state year-round. Blending urban and suburban living, it is one of the nation’s most affordable places to complete a doctoral program in media and information studies.