Media and Information Policy Issues


From discussions in courses and within the Quello Center Advisory Board, the Center has been developing a set of key issues tied to media, communication and information policy and practice. We'd welcome your thoughts on issues we've missed, or on issues noted here that do not merit more sustained research and debate. Your feedback on this list would be most welcome and can be posted as comments on this post.

Quello Advisory Board Meeting

I. Innovation-led Policy Issues

New Developments around Robotics and Artificial Intelligence: What are the implications for individual control, privacy, and security? Security is no longer a purely cyber issue, as cyber security increasingly shapes the physical world of autonomous vehicles, drones, and robots.

Internet of Things (IoT): With tens of billions of things moving online, how can individuals protect their privacy, safety, and well-being as their environments are monitored and controlled by their movement through space? There are likely to be implications for urban informatics, transportation and environmental systems, systems in the household, and wearable devices (see Wearables below). A possible focus within this set would be on developments in households.

Wearables: What appears to be an incremental step in the IoT space could have major implications across many sectors, from health to privacy and surveillance.

The Future of Content Delivery: Content delivery in the digital age, particularly the broadcasting of film and television: the technology, business models, and social impact of a rapidly developing ecosystem, such as its effects on localism, diversity, and quality.

Free (and Open Source) Software: The prominence and future of free and open source software continue to evolve. Are rules, licensing, and institutional support, such as that around the Free Software Foundation, meeting the needs of the free software community?

Big Data: How can individuals protect their privacy in the age of computational analytics and increasing capture of personal data and mass surveillance? What policies or practices can be developed to guide data collection, analysis, and public awareness?

Encryption: Advances in encryption technologies at a time of increasing threats to the privacy of individual communications, such as email, could lead to a massive uptake of tools to keep private communications private. How can this development be accelerated and spread across all sectors of the Internet community?

Internet2: Just as the development of the Internet within academia has shaped the future of communications, so might the next generation of the Internet – so-called Internet2 – have even greater implications, shaping the future of research and educational networking in the first instance, and public communications in the longer term. Who is tracking its development and potential implications?

Other Contending Issues: Drones, Cloud computing, …

II. Problem-led Initiatives

Transparency: Many new issues of the digital age, such as concerns over privacy and surveillance, are tied to a lack of transparency. What is being done with your data, by whom, and for what purposes? In commercial and governmental settings, many public concerns could be addressed to a degree through the provision of greater transparency, and the accountability that should follow.

Censorship and Internet Filtering: Internet filtering and censorship were limited to a few states at the turn of the century. But over the last decade, fueled by fear of radical extremist content, and associated fears of self-radicalization, censorship has spread to most nation states. Are we entering a new digital world in which Internet content filtering is the norm? What can be done to mitigate the impact on freedom of expression and freedom of connection?

Psychological Manipulation: Citizens and consumers are increasingly worried about the ways in which advertising, (fake) news, social media, and more can manipulate them into voting, buying, protesting, or otherwise acting as the purveyors of the new propaganda of the digital age would like. While many worried about propaganda in the era of mass media, should comparable attention be given to the hacking of psychological processes by the designers of digital media content? Is this a critical focus for consumer protection?

(In)Equities in Access: Inequalities in access to communication and information services might be growing locally and globally, despite the move to digital media and ICTs. The concept of a digital divide may no longer be adequate to capture these developments.

Privacy and Surveillance: The release of documents by Edward Snowden has joined with other events to draw increasing attention to the threats of mass unwarranted surveillance. It has been an enduring issue, but it is increasingly clear that capabilities heretofore perceived to be impossible are now feasible and being used to monitor individuals. What can be done?

ICT4D or Internet for Development: Policy and technology initiatives in communication to support developing nations and regions, both in emergency responses, such as in relation to infectious diseases, and around more explicit economic development issues.

Digital Preservation: Despite discussion over more than a decade, digital preservation merits more attention, and stronger links with policy developments such as the 'right to be forgotten'. 'Our cultural and historical records are at stake.'

III. Enduring Policy Issues Reshaped by Digital Media and Information Developments

Media Concentration and the Plurality of Voices: Trends in the diversity and plurality of ownership, and sources of content, particularly around news. Early work on media concentration needs new frameworks for addressing global trends on the Web, in new and print media, in automated text generation, and more.

Diversity of Content: In a global Internet context, how can we reasonably quantify or address issues of diversity in local and national media? Does diversity become more important in a digital age in which individuals will turn to online or satellite services if the mainstream media in their nation ignore content relevant to their backgrounds?

Privacy and Privacy Policy: Efforts to balance security, surveillance and privacy, post-Snowden, and in the wake of concerns over social media and big data. White House work in 2014 on big data and privacy should be considered. Policy and practice in industry v government could be a focus. Is there a unifying, sector-specific perspective?

Freedom of Expression: New and enduring challenges to expression in the digital age.

IV. Changing Media and Information Policy and Governance

Communication Policy: Rewrite of the 1934 Communications Act, last updated in 1996: This is unlikely to occur in the current political environment, but is nevertheless a critical focus.

Universal Access v Universal Service: With citizens and consumers dropping some traditional services, such as fixed line phones, how can universal service be best translated into the digital age of broadband services?

Network Neutrality: Should there be Internet fast lanes and more? Efforts to ensure the fair treatment of content from multiple providers through regulation have been among the more contentious issues in the USA. To some, the issue has been 'beaten to death', but it has been brought to life again through the regulatory initiatives of FCC Chairman Wheeler, and more recently with the new Trump Administration, under which the fate of net neutrality is uncertain. Can we research the implications of this policy?

Internet Governance and Policy: Normative and empirical perspectives on governance of the Internet at the global and national levels. This is a timely issue, critical to the future of the Internet and a global information age, given the rise of national Internet policy initiatives.

Acknowledgements: In addition to the Quello Advisory Board, special thanks to some of my students for their stimulating discussion that surfaced many of these issues. Thanks to Jingwei Cheng, Bingzhe Li, and Irem Yildirim, for their contributions to this list.



Work Begun on James H. Quello Archive


We have just begun work on a digital archive of James H. Quello's speeches, articles, and statements, dating from 21 January 1974 and his Senate confirmation hearing. My thanks to the MSU Library for helping the Quello Center with this project; from today we will start searching for funding to support the archive.

James H. Quello


The core material will be Commissioner Quello’s written speeches, articles and statements, but we will be adding biographical materials, photos, and video material. This should be a valuable source for anyone seriously interested in the history of regulation and policy in the communication sector in the USA.

Our thanks to the MSU Library and to Sarah Roberts with the MSU Archives & Historical Collections.



Response to the House of Lords Inquiry on Online Platforms and the EU Digital Single Market


Responses to Questions on the Inquiry into ‘Online Platforms and the EU Digital Single Market’ by the Internal Market Subcommittee of the Select Committee on the European Union, House of Lords [For background see the Digital Single Market Strategy for Europe]

Submitted by Professor William H. Dutton and Professor Thomas D. Jeitschko, Quello Center, Michigan State University, USA.

Electronic marketplaces and online platforms are relatively new formats for exchange, and require far more research on their use and impact before there is clarity on their effective regulation and governance. A central motivation for a single market for Europe is the potential for enhancing the competitive position of Europe and its nations in the global economy. If the EU were to move forward on the range of regulatory initiatives that drive this call for evidence, would Britain and the other nations of the EU be in a more competitive position in an increasingly worldwide digital economy? We are concerned that the regulation of online platforms is likely to undermine Internet innovations in Europe, by European entrepreneurs, businesses and industries, and thereby harm the competitive position of the European economy.

Our responses to the questions of the Inquiry explain the rationale behind this concern.

1. Do you agree with the Commission’s definition of online platforms?

The types of platforms identified by the Commission define much of what we identify as the Internet, broadly defined to include the means by which consumers access and interact with each other and with business and public services over the Internet. All devices used to access the Internet, its various online platforms, and the Web are critical elements of the Internet's infrastructure and services. In addition, the types of online platforms identified in the Inquiry are not separate but interwoven in most real-world cases. They are defined by their purposes, such as collaboration, communication, or the propagation, dissemination or exchange of information, but these various purposes are often supported by the same underlying software and systems.

A contemporary approach to regulating the Internet is analogous to the old Indian parable of the blind men and the elephant: focusing on any one aspect, no matter how accurate in detail, will fail to recognize the interconnectedness and interdependence of the whole. In this sense, the Inquiry puts its hand on different parts of the Internet and seeks to design regulation around each part. The sum total of the regulatory regimes it will yield is likely to create obstacles to innovation of each part and the whole. Moreover, in the process of moving in this direction, the establishment of such a piecemeal regulatory regime will create uncertainties and barriers that will diminish investment and innovation, and potentially advantage dominant businesses within the Internet industry, which have the scale to support the legal and administrative costs of negotiating this regulatory complexity.

2. How and to what extent do online platforms shape and control the online environment and the experience of those using them?

The platforms enumerated define a large proportion of what people do online, such as collaborating, communicating and furthering information exchange. However, the list provided is already dated, illustrating the degree to which any delineation of these platforms will quickly become outdated and will inevitably be incomplete. For example, it misses most of the developments that will increasingly define the Internet, such as those around device-to-device communications and the "Internet of Things", including, incidentally, the many specific uses and applications that have become instrumental in facilitating access to markets and securing value chains in developing countries today.

Nevertheless, it is clear that the online platforms enumerated by this Inquiry are of growing importance in terms of money transferred and time spent on them for accessing information, entertainment, and services. For example, an increasing proportion of time is spent on social media, and the mobile smartphone is rapidly becoming a nearly ubiquitous device for access to the Internet and most online platforms. Yet smartphones are only one of a growing variety of devices, from mobile devices to personal computers and televisions, that users will employ to access information and people online – all parts of a rapidly evolving Internet infrastructure.

3. What benefits have online platforms brought consumers and businesses that rely on platforms to sell their goods and services, as well as the wider economy?

It is difficult to imagine a successful business or industry that does not utilize the Internet for access and use of many of the platforms defined by the Inquiry. In a number of the most rapidly developing economies, such as in China, users are employing many of these platforms, such as social networking, far more than are users within the nations of Europe and North America. Regulation of these platforms in Europe will almost certainly magnify this differential in the coming years, potentially shifting the locus of user-driven innovation to Asia and the global South.

4. What problems, if any, do online platforms cause for you or others, and how can these be addressed? If you wish to describe a particular experience, please do so here.

The key issues are the privacy and security of personal information, and competition – with the advantages of scale fueling increasing concentration in such areas as search. These need to be addressed by adherence to principles that are common across the platforms, and not uniquely designed for each specific platform in ways that must be constantly updated. However, it is important to recognize that information, including personal information, is being made readily available by consumers. Increasingly, the questions surrounding the handling of personal information are less about whether information can be kept private, and more about how information is handled and managed in a manner that is consistent with consumers' expectations and self-identified interests. An analogy is the degree to which governments need to move from traditional information silos to information sharing across departments and locations. This is a major and long-term evolution of a culture and appropriate practices that support sharing in the ways intended, and that prevent the misuse of information by any one of the many parties involved in this process.

5. In addition to concerns for consumers and businesses, do online platforms raise wider social and political concerns?

Broadly defined, the Internet, including online platforms, has generated a variety of social and political concerns, such as around empowering individual users, for better or worse. However, a great number of these concerns have risen to levels that are not proportionate to the likely harms, approaching the contemporary 'moral panics' that often accompany new technologies, such as television in an earlier era. This can lead to inappropriate and disproportionate reactions, such as arresting an individual for a single bad 140-character Twitter message. The democratic empowerment of individuals is a concern in autocratic nations, but even there, this role of the Internet is countered by many levels of control by the state and other institutions in ways that temper the actual consequences of use. The most valuable approach to most of these concerns is learning and education about how to appropriately use the Internet, what some call schooling in 'digital citizenship'.

6. Is the European Commission right to be concerned about online platforms? Will other initiatives in the Digital Single Market Strategy have a positive or negative impact on online platforms?

Given the growing importance that the Internet, and platform markets in particular, play in modern societies, there is certainly a need to foster a better understanding of how these shape society. However, as noted in our introduction, it would be premature if not outright foolhardy to undertake some of the measures proposed in the Digital Single Market Strategy. These measures are sure not only to reduce innovation and the development of the Internet industry across Europe, but also to undermine all governments, businesses and industries that will increasingly be using platforms that become dated and frozen by regulation. Creative software developers will go elsewhere, and all businesses will be left behind in a worldwide digital economy. Inherently dynamic markets feed off innovation, which is at risk of being curtailed. Our fear is that these initiatives will harm Europe, and innovation in Europe, which already represents a diminishing portion of the global Internet population.

7. Is there evidence that some online platforms have excessive market power? Do they abuse this power? If so, how does this happen and how does it affect you or others?

There is evidence of concentration in such areas as search and social networking. However, in this context, it is important to emphasize that the understanding of competition in these (multi-sided) markets is at a very early stage of development, although it is quickly evolving. There is a sizable literature on competition in these markets in static settings, but it is also recognized that the markets in question are inherently very dynamic, and models of dynamic competition in platform markets are only beginning to take shape. Current research addressing competition over time (e.g., entry, exit, foreclosure, growth and decline) in these markets is of high quality, but is still in its infancy. It is likely that some general insights will soon emerge concerning how platform competition is most beneficial to consumers, but we are still a few years away from this knowledge crystallizing. The most important insights to be had concerning the potential for excessive market power and its possible abuse are tied to a better understanding of how potential and actual entry and displacement of incumbent platforms take place in these markets.

Moreover, as suggested above, these markets are evolving quickly as well, so that general insights gleaned today are likely to be outdated tomorrow. These dramatic changes are primarily driven by innovation, rather than shifting demand. Therefore, anything that serves to solidify or corral market dynamics, such as regulation, may well have adverse and unanticipated effects. These might be compounded in markets where emerging innovation is likely to take place but may not have come to fruition yet, such as in the UK, where there is a strong creative economy, evidenced by the UK being a net exporter of media.

Another implication of (premature) regulatory efforts is that innovation may be adversely affected in other areas that are not the target of the regulation but where regulation feeds through the market into those other areas. It is not uncommon to see innovation cross media platforms and markets, thus generating new market structures. For example, if one of the sides of a multi-sided market is subject to regulation, then this naturally feeds into the other sides and is likely to affect platform competition in unanticipated ways. This can have further knock-on effects by requiring regulatory responses to the innovation around regulation in other areas. As a result, there is a real potential for a regulatory patchwork to emerge from such initiatives that becomes incoherent and internally inconsistent. This will have adverse effects on innovation, without any offsetting benefits to consumers – even in the short run.

8. Online platforms often provide free services to consumers, operate in two- or multisided markets, and can operate in many different markets and across geographic borders. Is European competition law able adequately to address abuse by online platforms? What changes, if any, are required?

Many traditional regulatory approaches cannot be applied easily to the online environment. A central problem with contemporary efforts to regulate the Internet involves the transfer of models from older media to the Internet, such as treating Internet intermediaries, for instance Internet Service Providers, as if they were broadcasters. Likewise, competition laws and regulations need to be reconsidered for online platforms that are potentially global in their reach. For example, there is an inherent tension between large network effects and competition on price and quality. Most generally, there is an interconnectedness in multi-sided markets, in which the economic understandings and insights known from one-sided markets do not readily carry over. This is an area in need of research and the development of regulatory principles, not the simple but most often inappropriate transfer of conventional models to very new and different media.
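To make the point about one-sided intuitions concrete, the toy calculation below is a minimal sketch with entirely invented parameters (the demand intercept, cost, and cross-side weights are chosen for illustration only and do not describe any actual platform). It shows how a profit-maximizing two-sided platform can rationally price one side below cost, something a one-sided analysis would read as predatory pricing.

```python
# Illustrative sketch only: a monopoly platform serving two sides (A and B)
# with linear participation demand and cross-side network effects.
# Every parameter value here is invented for illustration.

import itertools

A_INTERCEPT = 1.0   # stand-alone demand intercept on each side
COST = 0.2          # marginal cost of serving a user on either side
GAMMA_A = 0.1       # how much side A values side B's participation
GAMMA_B = 1.2       # how much side B values side A's participation

def participation(p_a, p_b):
    """Solve the fixed point n_a = a - p_a + gA*n_b, n_b = a - p_b + gB*n_a."""
    n_a = (A_INTERCEPT * (1 + GAMMA_A) - p_a - GAMMA_A * p_b) / (1 - GAMMA_A * GAMMA_B)
    n_b = A_INTERCEPT - p_b + GAMMA_B * n_a
    return n_a, n_b

def profit(p_a, p_b):
    n_a, n_b = participation(p_a, p_b)
    return (p_a - COST) * n_a + (p_b - COST) * n_b

# Grid search for the profit-maximizing pair of prices.
grid = [i / 100 for i in range(-50, 151)]
best_pa, best_pb = max(itertools.product(grid, grid), key=lambda p: profit(*p))
print(f"profit-maximizing prices: side A = {best_pa:.2f}, side B = {best_pb:.2f}, cost = {COST}")
# With these invented parameters the price charged to side A lands below cost
# (about -0.03 versus a cost of 0.2): the platform subsidizes the side whose
# participation attracts the other side, a result a one-sided analysis would
# misread as predatory pricing.
```

Regulating the price on just one side of such a market, for example by capping or prohibiting the subsidy, would feed back into participation and pricing on the other side in ways that one-sided reasoning would not predict.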

9. What role do data play in the business model of online platforms? How are data gathered, stored and used by online platforms and what control and access do consumers have to data concerning them?

Data have become an important aspect of the business models of online platforms, but also central to many other aspects in a self-reinforcing manner. Data on search can improve search, for example. It is important here to note that in most cases, data is used to provide a better service. For example, data on an individual's location is obviously critical to providing that person directions to another location. For such reasons, it is often in the consumer or citizen's interest to provide personal information. The key issues are around information collected for one purpose by one provider being sold to third parties for other purposes. In many cases, this can also be advantageous for a consumer, such as by saving time in not providing the same personal information to every provider; this is the basis for so-called lead generators. However, there is a need to find suitable mechanisms to ensure that users can understand and agree to the provision of personal information. Present approaches to notice and consent agreements appear to be leaving too many consumers with little real choice, should they wish to use a service, and also ill-informed of potential risks.

10. Is consumer and government understanding and oversight of the collection and use of data by online platforms sufficient? If not, why not? Will the proposed General Data Protection Regulation adequately address these concerns? Are further changes required and what should they be?

General Data Protection Regulations are not adequate for traditional services, such as by creating uncertainty over what is personal information or sensitive personal information. Weaknesses of data protection need to be addressed and also brought into the digital age of social media, where most consumers and regulators do not understand how data is collected and used, creating inadequate responses and unnecessary fears. There is a need for innovations that can help consumers and other users know who uses what data for what purposes, such as the development of micro-payments to consumers when their data is utilized. This has been proposed many times over the decades since the 1960s, but is becoming increasingly feasible through developments in big data analytics.

11. Should online platforms have to explain the inferences of their data-driven algorithms, and should they be made accountable for them? If so, how?

Many proprietary businesses develop algorithms that are critical to their services and define their strategic advantage in their field. Search algorithms are a prime example. Ideally, competition would reduce concerns over the exact features of any algorithm. Consumers would choose the best search firm, for example, based on the criteria most important to them, whether ease of use or the quality of results. Open and transparent algorithms are available for use by developers, but the creation of a globally competitive industry in Europe will not be fostered by any one single model, open or proprietary. Different companies should be able to protect certain algorithms as intellectual property in competition with other service providers.

12. Can you describe the challenges that the collaborative economy brings? What possible solutions, regulatory or otherwise, do you propose?

The concept of a collaborative economy is novel, but it leaves out much of what is critical to online platforms. Unless collaboration is defined very broadly, many activities online are not directly collaborative with other individuals. That said, the success of distributed collaboration, such as in citizen science and conceptions of the wisdom of crowds, is dependent on reaching a scale of users that could not be achieved without truly global reach. For such reasons, one of the greatest threats to reaping the benefits of collaboration, within national markets and even more so internationally, is the premature stifling of innovation through regulations that curtail innovation in this area.

13. How are online platforms regulated at present? What are the main barriers to their growth in the UK and EU, compared to other countries?

Most of the advanced industrial nations have sought not to regulate, or to only minimally regulate, the development of the Internet and its services. This has been largely driven by a technology-led industrial strategy. Light regulation of the Internet has generally supported rapid innovation, moving the Internet from an interesting innovation in the aftermath of the dotcom crash to a critical infrastructure of contemporary digital economies and societies. Nevertheless, at the very time in which the Internet and its associated platforms are increasingly basic arenas of rapid innovation, regulation and governance of how people use the Internet are expanding. First, it should be clear that laws and policies in the offline context apply in the online context – the Internet and online platforms are not the Wild West. Basic laws, such as those around fraud or theft, apply online as well as offline, but require updated definitions of traditional conceptions of property, for example, and new perspectives on approaches to enforcement. In addition, many private platforms develop their own regulations to govern users. Indeed, many online communities appear to be finding more effective ways to self-regulate and enable social regulation – such as creating mechanisms for members of online communities to provide better feedback (not just a 'Like' button). This, too, is a dimension of competition and innovation between platforms and marketplaces.

14. Should online platforms be more transparent about how they work? If so, how?

Not necessarily. There should be opportunities for the development of proprietary systems, as well as open standards and open source software. No one model should be imposed on the Internet industry. A more open approach not only allows for further exploration, development and innovation of new models, but it is also reflective of the highly heterogeneous uses and needs of consumers that access and make use of the Internet. This is clearly an area where the one-size-fits-all approach will restrict what to many is most valuable about the Internet. Indeed, the rapid growth of the Internet is tied precisely to its ability to allow for many different and often unanticipated purposes to be fulfilled by diverse populations of users through bottom-up innovations.

15. What regulatory changes, if any, do you suggest in relation to online platforms? Why are they required and how would they work in practice? What would be the risks and benefits of these changes? Would the changes apply equally to all online platforms, regardless of type or size?

There is definite value to be gained from more learning and education about the use of online platforms. This should be aimed at all age groups and levels. For example, children may find it easy to play with a tablet computer, but they need to be taught how to communicate with people online and to respect the rights and dignity of other users.

That said, we emphasize our concerns and our resulting recommendation that the EU move slowly and cautiously towards any further regulation of the Internet through online platforms.

16. Are these issues best dealt with at EU or member state level?

There are likely to be local, regional, national and global issues. The future of the Internet and online platforms will be advantaged if most issues can be moved to a global level, where multistakeholder governance can inform approaches. However, a likely but worrisome development is evident around moves toward the so-called ‘Balkanization’ of the Internet as nations increasingly assert national regulatory authority over global technologies, such as in moving towards data localization in order to control the regulatory regime governing personal data. More issues will move to more local levels, and this will most likely undermine the vitality of Internet innovation in those nations that insist on exerting more national sovereignty.

William H. Dutton is the Quello Professor of Media and Information Policy in the College of Communication Arts and Sciences at MSU, where he is Director of the Quello Center. Bill was the first Professor of Internet Studies at the University of Oxford where he was founding director of the Oxford Internet Institute (OII). While at the OII, he served as chair of Ofcom’s Advisory Committee for England.

Thomas D. Jeitschko is a Professor in the Department of Economics and the Associate Dean for Graduate Studies in the College of Social Science at Michigan State University, and a Research Associate of the Quello Center. Professor Jeitschko previously worked as a research economist in the Antitrust Division of the U.S. Department of Justice, where he analyzed mergers and potentially anticompetitive behaviors.



A Reminder Why the Quello Center Net Neutrality Impact Study is Important


In the past week or so I’ve seen several articles that remind me how important the Quello Center’s empirically-grounded study of net neutrality impacts is for clarifying what these impacts will be—especially since net neutrality is one of those policy topics where arguments are often driven by ideology and/or competing financial interests.

As far as I can tell, this series of articles began with an August 25 piece written by economist Hal Singer and published by Forbes under the following headline: Does The Tumble In Broadband Investment Spell Doom For The FCC’s Open Internet Order? Per his Forbes bio, Singer is a principal at Economists Incorporated, a senior fellow at the Progressive Policy Institute, and an adjunct professor at Georgetown University’s McDonough School of Business.

Singer’s piece was followed roughly a week later by two op-ed pieces published on the American Enterprise Institute’s web site. The title of the first AEI piece, authored by Mark Jamison, was Title II’s real-world impact on broadband investment. This was followed a day later by Bronwyn Howell’s commentary Title II is hurting investment. How will – and should – the FCC respond?

What struck me about this series of op-ed pieces published by economists and organizations whose theoretical models and policy preferences appear to favor unregulated market structures was that their claims that “Title II is hurting investment” were all empirically anchored in Singer’s references to declines in ISP capital spending during the first half of 2015. As a member of the Quello Center’s research team studying the impacts of net neutrality, I was intrigued, and eager to dig into the CapEx data and understand its significance.

While my digging has only begun, what I found reminded me how much the communication policy community needs the kind of fact-based, impartial and in-depth empirical analysis the Quello Center has embarked upon, and how risky it is to rely on the kind of ideologically-driven analysis that too often dominates public policy debates, especially on contentious issues like net neutrality.

My point here is not to argue that there are clear signs that Title II will increase ISP investment, but rather that claims by Singer and others that there are already signs that it is hurting investment are not only premature, but also based on an incomplete reading of evidence that can be uncovered by careful and unbiased review of publicly available information.

I hope to have more to say on this topic in future posts, but will make a few points here.

The crux of Singer’s argument is based on his observation that capital spending had declined fairly dramatically for a number of major ISPs during the first half of 2015, dragging down the entire sector’s spending for that period (though it’s not clear from the article, my sense is that Singer’s reference to “all” wireline ISPs refers to the industry’s larger players and says nothing about investment by smaller companies and the growing ranks of publicly and privately owned FTTH-based competitors). He then briefly reviews and dismisses potential alternative explanations for these declines, concluding that their only other logical cause is ISPs’ response to the FCC’s Open Internet Order (bolding is mine):

AT&T’s capital expenditure (capex) was down 29 percent in the first half of 2015 compared to the first half of 2014. Charter’s capex was down by the same percentage. Cablevision’s and Verizon’s capex were down ten and four percent, respectively. CenturyLink’s capex was down nine percent. (Update: The average decline across all wireline ISPs was 12 percent. Including wireless ISPs Sprint and T-Mobile in the sample reduces the average decline to eight percent.)..

This capital flight is remarkable considering there have been only two occasions in the history of the broadband industry when capex declined relative to the prior year: In 2001, after the dot.com meltdown, and in 2009, after the Great Recession. In every other year save 2015, broadband capex has climbed, as ISPs—like hamsters on a wheel—were forced to upgrade their networks to prevent customers from switching to rivals offering faster connections.

What changed in early 2015 besides the FCC’s Open Internet Order that can explain the ISP capex tumble? GDP grew in both the first and second quarters of 2015. Broadband capital intensity—defined as the ratio of ISP capex to revenues—decreased over the period, ruling out the possibility that falling revenues were to blame. Although cord cutting is on the rise, pay TV revenue is still growing, and the closest substitute to cable TV is broadband video. Absent compelling alternatives, the FCC’s Order is the best explanation for the capex meltdown.

I haven’t had a chance to carefully review the financial statements and related earnings material of all the companies cited by Singer, but did take a quick look at this material for AT&T and Charter since, as he notes, they experienced by far the largest percentage drops in spending. What I found doesn’t strike me as supporting his conclusion that the declines were driven by network neutrality. Instead, in both cases they seem to pretty clearly reflect the end of major investment projects and related industry trends that have nothing to do with the FCC’s Open Internet Order.

My perspective on this is based on statements made by company officials during their second quarter 2015 earnings calls, as well as capex-related data in their financial reporting.

During AT&T’s earnings call, a Wall Street analyst asked the following question: “[T]he $18 billion in CapEx this year implies a nice downtick in the U.S. spending, what’s driving that? Are you finding that you just don’t need to spend it or are you sort of pushing that out to next year?” In his response to the question, John Stephens, the company’s CFO, made no mention of network neutrality or FCC policy decisions. Instead he explained where the company was in terms of key wireless and wireline strategic network investment cycles (bolding is mine):

Well, I think a couple of things. And the simplest thing is to say [is that the] network team did a great job in getting the work done and we’ve got 300, nearly 310 million POPs with LTE right now. And we are putting our spectrum to use as opposed to building towers. And so that aspect of it is just a utilization of spectrum we own and capabilities we have that don’t require as much CapEx. Secondly, the 57 million IP broadband and what is now approximately 900,000 business customer locations passed with fiber. Once again, the network guys have done a great job in getting the Project VIP initiatives completed. And when they are done…the additional spend isn’t necessary, because the project has been concluded not for lack of anything, but for success.

Later on in the call, another analyst asked Stephens “[a]s you look out over the technology roadmap, like 5G coming down the pipeline, do you anticipate that we will see another period of elevated investment?”

While Stephens pointed to a potential future of moderated capital spending, he made no reference to network neutrality or FCC policy, focusing instead on the investment implications of the company’s (and the industry’s) evolution to software-defined networks.

I would tell you that’s kind of a longer term perspective. What we are seeing is our move to get this fiber deep into the network and getting LTE out deep into the wireless network and the solutions that we are finding in a software-defined network opportunity, we see a real opportunity to actually strive to bring investments, if you will, lower or more efficient from historical levels. Right now, I will tell you that this year’s investment is going to be in that $18 billion range, which is about 15%. We are certainly – we are not going to give any guidance with regard to next year or the year after. And we will give an update on this year’s guidance, if and when in our analyst conference if we get that opportunity. With that being said, I think there is a real opportunity with some of the activities are going on in software-defined networks on a longer term basis to actually bring that in capital intensity to a more modest level.

Charter’s large drop in capital spending appears to be driven by a similar “investment cycle” dynamic. During its 2Q15 earnings call, CFO Christopher Winfrey noted that Charter’s year-over-year decline in total CapEx “was driven by the completion of All-Digital during the fourth quarter of last year,” referring to the company’s migration of its channel lineup and other content to an all-digital format.

A review of the company’s earnings call and financial statements suggests that a large portion of the “All-Digital” capital spending was focused on deploying digital set-top boxes to Charter customers, resulting in a precipitous decline in the “customer premises equipment” (CPE) category of CapEx. According to Charter’s financial statements, first-half CPE-related CapEx fell by more than half, or $341 million, from $626 million to $285 million. Excluding this sharp falloff in CPE spending driven by the end of Charter’s All-Digital conversion, the remainder of the company’s capital spending was actually up 3% during the first half of 2015. And this included a 7% increase in spending on “line extensions,” which Charter defines as “network costs associated with entering new service areas.” It seems to me that, if Charter were concerned that the Commission’s Open Internet Order would weaken its business model, it would be cutting rather than increasing its investment in expanding the geographic scope of its network.

To understand the significance of Charter’s spending decline, I think it’s important to note that its 29% decline in first-half total CapEx was driven by a 54% decline in CPE spending, and that the company’s non-CPE investment, including line extensions, actually increased during that period. I found it odd that, even as he ignored this key dynamic for Charter, Singer seemed to dismiss the significance of Comcast’s CapEx increase during the same period by noting that it was “attributed to customer premises equipment to support [Comcast’s] X1 entertainment operating system and other cloud-based initiatives.”
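As a back-of-the-envelope check on that decomposition, the short sketch below uses the CPE figures quoted above ($626 million falling to $285 million) together with an assumed non-CPE baseline of roughly $500 million; that baseline is not taken from Charter’s filings, it is simply an illustrative figure chosen to be consistent with the reported 29% total decline and 3% non-CPE growth.

```python
# Back-of-the-envelope decomposition of Charter's first-half CapEx change.
# CPE figures are those quoted in the post; the non-CPE baseline is an assumed,
# illustrative number (NOT from Charter's filings), chosen only to be consistent
# with the reported ~29% total decline and ~3% non-CPE growth.

cpe_1h2014, cpe_1h2015 = 626.0, 285.0      # $ millions, quoted above
non_cpe_1h2014 = 500.0                     # assumed baseline, $ millions
non_cpe_1h2015 = non_cpe_1h2014 * 1.03     # reported ~3% increase

total_1h2014 = cpe_1h2014 + non_cpe_1h2014
total_1h2015 = cpe_1h2015 + non_cpe_1h2015

def pct_change(new, old):
    return 100.0 * (new - old) / old

print(f"CPE change:     {pct_change(cpe_1h2015, cpe_1h2014):6.1f}%")          # about -54%
print(f"non-CPE change: {pct_change(non_cpe_1h2015, non_cpe_1h2014):6.1f}%")  # about +3%
print(f"total change:   {pct_change(total_1h2015, total_1h2014):6.1f}%")      # about -29%

# How much each category contributes to the total change, in percentage points:
print(f"CPE contribution:     {100.0 * (cpe_1h2015 - cpe_1h2014) / total_1h2014:6.1f} pp")
print(f"non-CPE contribution: {100.0 * (non_cpe_1h2015 - non_cpe_1h2014) / total_1h2014:6.1f} pp")
```

The point is simply that a single aggregate percentage can be driven almost entirely by one category, here set-top boxes at the end of the All-Digital conversion, while the categories most relevant to network expansion move the other way.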

I also couldn’t help but notice that, in his oddly brief reference to the nation’s largest ISP, Singer ignored the fact that every category of Comcast’s capital spending increased by double digits during the first half of 2015, including its investment in growth-focused network infrastructure, which expanded 24% from 2014 levels. Comcast’s total cable CapEx was up 18% for the first half of the year, while at Time Warner Cable, the nation’s second largest cable operator, it increased 16%.

While these increases may have nothing to do with FCC policy, they seem very difficult to reconcile with Singer’s strongly asserted argument, especially when coupled with the above discussion of company-specific reasons for the large CapEx declines at AT&T and Charter. As that discussion suggests, the reality behind aggregated industry numbers (especially when viewed through a short-term window of time) is often more complex and situation-specific than our economic models and ideologies would like it to be. This may make our research harder and messier to do at times, but certainly not less valuable. It also speaks to the value of longitudinal data collection and analysis, to better understand both short-term trends and those that only become clear over a longer term. That longitudinal component is central to the approach being taken by the Quello Center’s study of net neutrality impacts.

One last general point before closing out this post. I didn’t see any reference in Singer’s piece or the AEI-published follow-ups to spending by non-incumbent competitive providers, including municipally and privately owned fiber networks that are offering attractive combinations of speed and price in a growing number of markets around the country. While this category of spending may be far more difficult to measure than investments by large publicly-owned ISPs, it may be quite significant in relation to public policy, given its potential impact on available speeds, prices and competitive dynamics.

Expect to see more on this important topic and the Quello Center’s investigation of it in later posts, and please feel free to contribute to the discussion via comments on this and/or future posts.



Society Meets Social Media: Canaries at the Coal Face of the Internet


Bill Dutton gave a keynote presentation for Social Media & Society 2015, an international conference held 27-29 July 2015 at the Ted Rogers School of Management at Ryerson University, Toronto, Canada. An abstract, and links to the slides and video of the talk, are posted here.

Abstract

Legal and regulatory initiatives shaped by moral panics over social media are a microcosm of many general threats to the vitality of a free, open and global Internet. The belief is widespread that social media and related Internet developments are unstoppable and beyond the control of governments and regulators across the world. However, initiatives afoot to address increasingly vocal public support for ‘doing something’ about concerns ranging from cyber-bullying to privacy are pushing politicians and regulators to bring traditional approaches to media regulation to bear on social media and the Internet. These initiatives are unlikely to accomplish their intended aims but could well undermine the vitality of social media and the larger ecology of the Internet. Several types of response are critical. First, academics and practitioners need to come forward with a regulatory model that is purpose-built for social media and related applications of the Internet. Secondly, educational efforts need to be prioritized to help children and others learn how to use social media in more ethical, safe and effective ways. Thirdly, social media need to be designed in ways that enable users to hold other users more socially accountable for their actions.

Slides for the Talk are on Slideshare at: http://www.slideshare.net/WHDutton/society-meets-social-media-at-reyerson2015

Video of the Talk: https://ryecast.ryerson.ca/12/watch/9167.aspx



Shareholder value, the public interest & the Comcast/TWC deal


Over the past several days I’ve seen a number of post-mortems on the decision by Comcast to drop its bid to acquire Time Warner Cable after it became clear regulators weren’t going to approve the deal. Two items in particular caught my attention over the weekend: a piece  by Eric Lipton in the New York Times discussing Comcast’s not-so-successful lobbying effort in Congress, and an interview with Comcast Chairman and CEO Brian Roberts on Squawk Box, a program carried on CNBC, a cable network owned by Comcast since it acquired NBCUniversal roughly two years ago.

One of the things that struck me about the CNBC interview is that it clearly illustrates one perspective on the deal and Comcast’s impressive growth, and on the net value of regulation. I’d call this the “investor” perspective.  From this perspective, the key metrics for evaluating Comcast, its actions and external factors impacting the company (e.g., regulation) are tied directly to the company’s ability to “maximize shareholder value,” something Brian Roberts and his team have been very good at over the years.

In contrast, the focus of the Times piece was concerns about the merger’s likely impact on the public interest rather than on shareholder value.

Market power skews shareholder value away from public interest

While some (perhaps some libertarian-leaning economists and CNBC commentators) might equate these two values, I suspect most people (experts and non-experts alike) would agree they are not the same, nor always positively correlated.

In fact, I’d argue that shareholder value and the public interest are likely to be inversely correlated when the company in question wields extensive market (and political) power and has a history of using it aggressively to gain competitive advantage and additional market power.  All the more so when First Amendment issues are part of the equation, as is very much the case with regard to Comcast.

In a market with healthy competition and low barriers to entry, companies can only succeed if they satisfy their customers. In such markets I wouldn’t be surprised to find a meaningful correlation between shareholder value and the provision of high-quality service.

But, as FCC data (see graph on pg. 12) makes clear, many customers seeking high-speed Internet connections lack an attractive (or any) competitive option to cable-delivered broadband service.  And the cost of market entry into this very capital-intensive sector remains very high.

And, as someone who has been both a Comcast and AT&T Internet customer, and has visited many an online user forum, my view is that, even when there is a choice between these two industry giants (or their peers), switching from one to the other is akin to jumping from the frying pan into the fire. And even if you’re eager to make that jump, the transition may involve a series of frustrating interactions with the CSRs, IVRs, techs, wait-times, billing mistakes and equipment returns/pickups of not one, but two companies.  For an extreme—and hopefully rare—example of this type of experience, spend a few minutes listening to this recording of a Comcast customer attempting to drop his service.

This lack of attractive options and reluctance to jump through the hoops needed to switch between them may help explain why providers of Internet and bundled services often offer big rebates and steep short-term discounts to get customers to switch. Perhaps they’re hoping that, this time, a customer will stick around after the discount expires, since they’ll know that their only option at that point would be to jump back into the same frying pan they left a short while ago. Switching back and forth may be a game some consumers are willing to continue playing, but I suspect it’s too time-consuming and frustrating for most (at least it would be for me).  Most, I suspect, simply want fast and reliable speeds, and responsive customer service and tech support.  Unfortunately, providing that may cost a bit more than offering switching rebates and discounts (or so it seems based on companies’ actions).

An admirable focus on the public interest

To their credit, the FCC and Justice Department took seriously their responsibilities related to determining the competitive and public interest impacts of the proposed deal. And though Congress had no direct say in these decisions, it seems that many of its members also remained unconvinced that “what’s good for Comcast is good for the country,” even after months of heavy lobbying. As Lipton reports in the Times:

Despite the distribution of $5.9 million in campaign contributions by the two companies during the 2014 election cycle, and the expenditure of an extraordinary $25 million on lobbying last year, no more than a handful of lawmakers signed letters endorsing the deal…Congress has no direct power to approve or disapprove any merger, but endorsements, particularly if they come from black and Hispanic leaders, can send a subtle but important message to regulators that the deal is in the public interest and should be cleared…

Lawmakers cited a variety of reasons as to why Comcast’s elaborate pitch failed to gain traction this time: The miserable customer service ratings the company earns, for instance, made politicians leery of helping it out. In addition, there were much more substantial antitrust concerns associated with this deal, and some members of Congress said they thought Comcast had failed to live up to its promises in the NBCUniversal deal, and so could not be trusted this time.

Other lawmakers and staff members on Capitol Hill, in interviews Friday, cited Comcast’s swagger in trying to promote this deal. They said they felt that Comcast was so convinced in the early stages that the deal would be approved that it was dismissing concerns about the transaction, or simply taking the conversation in a different direction when asked about them…

“They talked a lot about the benefits, and how much they were going to invest in Time Warner Cable and improve the service it provided,” said one senior Senate staff aide…“But every time you talked about industry consolidation and the incentive they would have to leverage their market power to hurt competition, they gave us unsatisfactory answers.”

Together, the CNBC interview and NYT article highlight the difference between a thoughtful and holistic perspective on communication policy and public policy in general, and what I’d call the CNBC/libertarian/Wall Street perspective (for an extreme example of the latter, see Rick Santelli’s infamous trading floor rant attacking “losers” seeking mortgage modifications while ignoring trillion-dollar bank bailouts and Wall Street criminality).

Having observed Brian Roberts’ career since its early days, my sense is that he is an extremely capable strategist, manager and dealmaker, and also a person of integrity.  And he has plenty of reason to be proud of the company his father and he have built.  It’s been impressive to watch.

But I also believe that he sees his primary role as maximizing shareholder value, and his primary constituency as being Wall Street analysts and investors, not Comcast’s customers. This perspective might not trigger regulatory problems if his company didn’t enjoy high levels of market power in key bottleneck sectors of the communications industry. But, as the FCC and DOJ rightly concluded, Comcast does wield such market power and was seeking to augment it significantly with the TWC deal.

Customer satisfaction as a key indicator

As one longstanding piece of evidence to support my view of Comcast’s priorities, I’d point to its history of being consistently among the lowest-ranked companies in its industry (and among all U.S. companies) in terms of customer satisfaction.

Though I can understand Squawk Box hosts choosing not to confront their “boss” with tough questions, I would have liked to see one of them ask him why this prolonged history of poor customer service has not yet been remedied, and how much Comcast planned to spend to address this issue in the future. Instead, we see the discussion about what’s next for the company leading to Roberts’ comment that:

The deal was going to slightly increase our leverage. That is now not happening. So that opens up room for further stock buybacks. And I think that’s an area that certainly we’re open to thinking about and talking about with the board.

I would have liked to see Roberts instead (or at least also) say that investing heavily to improve customer service was something he was going to discuss with the board, and that he was seriously committed to turning his company into a leader rather than a laggard in satisfying its customers, as measured by independent surveys.

But, just as the hiker only had to outrun his fellow hiker, not the bear, Comcast, to augment its shareholder value, need only leverage the fact that its local access pipe is much faster than those of most of its telco competitors, and invest just enough to ensure that the poor quality of its customer service doesn’t outweigh its speed advantage for too many customers.

And even if Roberts actually did announce a seriously funded customer service initiative (or a large-scale commitment to all-fiber networks), Wall Street analysts would most likely respond with downgrades of Comcast’s stock, and pressure to direct cash flow to buybacks and dividends rather than to improved customer service and investments that could yield positive externalities with great social value but uncertain prospects for monetization by the company.

This speaks to the difference between what Marjorie Kelly calls “generative” and “extractive” business and ownership models, which I wrote about in relation to Internet access here and here (and may write about on the Quello Center blog in the future).



Steve Wildman on the Future of Media Content Delivery

by

The Quello Center Advisory Board identified the future of content delivery as one of the Center’s most critical issues for research. Late in 2014, Professor Steve Wildman provided an overview of the prospects for new forms of content delivery to a group of visiting executives. Prior to his lecture, Bill Dutton interviewed him about the key points he planned to cover. You’ll find this video interview to be a succinct summary of major issues facing the future of broadcasting and the media more generally. We’d welcome your comments – whether you agree or disagree with the future painted by Professor Wildman.

Delivering Media Content in a New Technological Environment: An Exploration of Policy Implications from Quello Center on Vimeo.

His video is at: https://vimeo.com/110827928



Please Help Me Understand What’s Wrong with Title II


In a blog entry here yesterday I described FCC Chairman Wheeler’s Title II proposal as “replanting the roots” of communication policy in the digital age. Shortly after I posted it, the Commission released a four-page summary of the proposed “New Rules for Protecting the Open Internet.” Not surprisingly, the document triggered a barrage of public responses from a range of interested parties on both sides of the issue.

After reviewing the FCC’s Fact Sheet and some of these responses, I found myself puzzled about claims regarding risks and problems associated with Title II classification. So I thought I’d invite comments to help clarify what those risks and problems really are.

In yesterday’s post I focused on:

Today I want to focus more on questions of near-term strategy, tactics and risks related to the Commission’s proposed Title II action, and invite comments that clarify how and why the proposed Title II classification is problematic. It’s a claim I’ve heard often, but have difficulty understanding.

Here’s how I see it:

(more…)



Replanting the Roots of Communication Policy


[Update: shortly after this was written, the FCC released details about Chairman Wheeler’s “Protecting the Open Internet” proposal, which will be discussed here in later posts]

With the FCC expected to classify broadband access as a Title II common carrier service, while also preempting state restrictions on municipally-owned access networks, the Commission’s February 26 meeting is poised to launch a new era in U.S. communication policy.

To appreciate the significance of the Commission’s impending Title II decision, it’s useful to step back from the drama and details of today’s regulatory and market battles, and consider the agency’s upcoming vote from a historical perspective, starting with the Communications Act of 1934. I’d suggest that, viewed from that perspective, the FCC’s decision to treat broadband access under Title II is an attempt to replant the roots of communication law in the fertile ground of today’s First Amendment-friendly technology.

The Act’s stated purpose was:

“to make available, so far as possible, to all the people of the United States a rapid, efficient, nationwide, and worldwide wire and radio communication service with adequate facilities at reasonable charges.”

Given the relatively primitive technology of that era, the 1934 Act adopted different regulatory schemes for wireless broadcasting and wireline telephony, each designed to accommodate the technical constraints of the industry it was to regulate. Wireless broadcasting, constrained by technical interference among a cacophony of competing “voices,” was addressed by a system of exclusive licensing. This gave a relative handful of licensees First Amendment megaphones of unprecedented reach and power, in exchange for a vague and difficult-to-enforce set of “public interest” obligations.

Unwieldy at best, enforcement of broadcasting’s public interest regulations was largely abandoned in the 1980s under the Reagan Administration, which viewed deregulation as a much needed and broadly applicable solution to the nation’s economic problems. From that perspective, the best way to serve the public interest was, in most cases, to rely on the “magic of the market.” To the Administration’s first FCC Chair, Mark Fowler, the powerful broadcast and cable media were just more markets needing a healthy dose of deregulation.  As he famously put it, television was “a toaster with pictures.”

(more…)



Quello Policy Research Paper on Mobile Today and Tomorrow


The Quello Center has just released a report on ‘Mobile Today and Tomorrow’ by a team of researchers from the Quello Center, Oxford Consulting, and Huawei Technologies. It explores trends in mobile and speculates on future developments. It is anchored in a review of the literature and a set of interviews with leading experts in many aspects of mobile technology, use and policy. It differs from many other overviews in being based on a social science perspective on mobile and in striving to be global in scope. We invite comments and recommendations on this report and on directions for further research. The paper is freely available online as:

Dutton, William H., Law, Ginette, Groselj, Darja, Hangler, Frank, Vidan, Gili, Cheng, Lin, Lu, Xiaobin, Zhi, Hui, Zhao, Qiyong, and Wang, Bin, Mobile Communication Today and Tomorrow (December 4, 2014). A Quello Policy Research Paper, Quello Center, Michigan State University. Available at SSRN: http://ssrn.com/abstract=2534236
