A Whole New Class of Faculty at the Quello Center

One major outcome of the new faculty joining the Department of Media and Information this academic year has been the coming together of a critical mass of very strong faculty key to social scientific research on the digital age. The Quello Center can now enjoy a dramatic rise in the strength of faculty who can inform research, policy, and practice central to the Center’s focus on policy for the digital age.

To ensure that these faculty are visible and recognized from afar, the Center has created a new category of faculty, the Quello Research Fellows. The first four Fellows include three new faculty, Keith Hampton, Natascha Just, and David Ewoldsen, and one long-term member of the Quello faculty, Johannes Bauer. They bring major strengths in Internet studies, sociology, economics, social psychology, and policy to the Quello Center’s multidisciplinary team.

Together with our research team, associate faculty across the university, and graduate student researchers, these new Quello Research Fellows boost the capacity of the Quello Center to tackle an ever-wider range of research of importance to policy and practice for the digital age.

I fully expect this new class of faculty to help inform and lead debate over policy and practice that responds to the societal implications of the Internet and related digital media, communication, and information technologies.

Natascha Just

Esther Thorson and David Ewoldsen speaking at Fake News Roundtable

Keith Hampton and Rachel Mourao at the Fake News Roundtable

Johannes Bauer



Media and Information Policy Issues

From discussions in courses and within the Quello Center Advisory Board, the Center has been developing a set of key issues tied to media, communication, and information policy and practice. We’d welcome your thoughts on issues we’ve missed, as well as on any issues noted here that do not merit more sustained research and debate. Your feedback will be posted as comments on this post.

Quello Advisory Board Meeting

I. Innovation-led Policy Issues

New Developments around Robotics and Artificial Intelligence: What are the implications for individual control, privacy, and security? Security is no longer so clearly a cyber issue, as cyber security increasingly shapes the physical world of autonomous vehicles, drones, and robots.

Internet of Things (IoT): With tens of billions of things moving online, how can individuals protect their privacy, safety, and well-being as their environments are monitored and controlled by their movement through space? There are likely to be implications for urban informatics, transportation and environmental systems, systems in the household, and worn devices (see Wearables below). A possible focus within this set would be on developments in households.

Wearables: What appears to be an incremental step in the IoT space could have major implications across many sectors, from health to privacy and surveillance.

The Future of Content Delivery: Content delivery in the digital age, particularly the broadcasting of film and television: the technology, business models, and social impact of a rapidly developing ecosystem, including effects on localism, diversity, and quality.

Free (and Open Source) Software: The prominence and future of free as well as open source software continue to evolve. Are rules, licensing, and institutional support, such as around the Free Software Foundation, meeting the needs of this free software community?

Big Data: How can individuals protect their privacy in the age of computational analytics and increasing capture of personal data and mass surveillance? What policies or practices can be developed to guide data collection, analysis, and public awareness?

Encryption: Advances in encryption technologies at a time of increasing threats to the privacy of individual communications, such as email, could lead to a massive uptake of tools to keep private communications private. How can this development be accelerated and spread across all sectors of the Internet community?

Internet2: Just as the development of the Internet within academia has shaped the future of communications, so might the next generation of the Internet – so-called Internet2 – have even greater implications in shaping the future of research and educational networking in the first instance, but public communications in the longer-term. Who is tracking its development and potential implications?

Other Contending Issues: Drones, Cloud computing, …

II. Problem-led Initiatives

Transparency: Many new issues of the digital age, such as concerns over privacy and surveillance, are tied to a lack of transparency. What is being done with your data, by whom, and for what purposes? In commercial and governmental settings, many public concerns could be addressed to a degree through the provision of greater transparency, and the accountability that should follow.

Censorship and Internet Filtering: Internet filtering and censorship were limited to a few states at the turn of the century. But over the last decade, fueled by fear of radical extremist content and associated fears of self-radicalization, censorship has spread to most nation states. Are we entering a new digital world in which Internet content filtering is the norm? What can be done to mitigate the impact on freedom of expression and freedom of connection?

Psychological Manipulation: Citizens and consumers are increasingly worried about the ways in which they can be manipulated by advertising, (fake) news, social media, and more that lead them to vote, buy, protest, or otherwise act in ways that the purveyors of the new propaganda of the digital age would like. While many worried about propaganda around the mass media, should there be comparable attention given to the hacking of psychological processes by the designers of digital media content? Is this a critical focus for consumer protection?

(In)Equities in Access: Inequalities in access to communication and information services might be growing locally and globally, despite the move to digital media and ICTs. The concept of a digital divide may no longer be adequate to capture these developments.

Privacy and Surveillance: The release of documents by Edward Snowden has joined with other events to draw increasing attention to the threats of mass unwarranted surveillance. It has been an enduring issue, but it is increasingly clear that developments heretofore perceived to be impossible are now feasible and are being used to monitor individuals. What can be done?

ICT4D or Internet for Development: Policy and technology initiatives in communication to support developing nations and regions, whether in emergency responses, such as to infectious diseases, or around more explicit economic development issues.

Digital Preservation: Despite discussion over more than a decade, digital preservation merits more attention and stronger links with policy developments, such as the ‘right to be forgotten’. ‘Our cultural and historical records are at stake.’

III. Enduring Policy Issues Reshaped by Digital Media and Information Developments

Media Concentration and the Plurality of Voices: Trends in the diversity and plurality of ownership, and sources of content, particularly around news. Early work on media concentration needs new frameworks for addressing global trends on the Web, with new media, in print media, automated text generation, and more.

Diversity of Content: In a global Internet context, how can we reasonably quantify or address issues of diversity in local and national media? Does diversity become more important in a digital age in which individuals will turn to online or satellite services if the mainstream media in a nation ignore content relevant to their backgrounds?

Privacy and Privacy Policy: Efforts to balance security, surveillance, and privacy, post-Snowden and in the wake of concerns over social media and big data. White House work in 2014 on big data and privacy should be considered. Policy and practice in industry vs. government could be a focus. Is there a unifying, sector-specific perspective?

Freedom of Expression: New and enduring challenges to expression in the digital age.

IV. Changing Media and Information Policy and Governance

Communication Policy: Rewrite of the 1934 Communications Act, last updated in 1996: This is unlikely to occur in the current political environment, but is nevertheless a critical focus.

Universal Access vs. Universal Service: With citizens and consumers dropping some traditional services, such as fixed-line phones, how can universal service best be translated into the digital age of broadband services?

Network Neutrality: Should there be Internet fast lanes and more? Efforts to ensure the fair treatment of content from multiple providers through regulation have been among the more contentious issues in the USA. To some, the issue has been ‘beaten to death’, but it has been brought to life again through the regulatory initiatives of FCC Chairman Wheeler, and more recently with the new Trump Administration, under which the fate of net neutrality is uncertain. Can we research the implications of this policy?

Internet Governance and Policy: Normative and empirical perspectives on governance of the Internet at the global and national levels. A timely issue critical to the future of the Internet and a global information age, particularly given the rise of national Internet policy initiatives.

Acknowledgements: In addition to the Quello Advisory Board, special thanks to some of my students for their stimulating discussion that surfaced many of these issues. Thanks to Jingwei Cheng, Bingzhe Li, and Irem Yildirim, for their contributions to this list.



Exploring the Value of Public Investment in “Generative” Fiber Infrastructure

In an earlier post I described the BTOP Comprehensive Community Infrastructure (CCI) program as a “very good investment of public funds.” My reasons were twofold, the first one being that it expanded the availability of high-speed connectivity in underserved areas, including more than 42,000 miles of new and 24,000 miles of upgraded fiber infrastructure. The second was that research by ASR Analytics suggests that the CCI program accomplished this expansion in a way that addresses both forms of economic harm claimed by advocates on both sides of the special access regulation debate. As a result, I suggested “that the federal government consider expanding its CCI investment in geographic areas that the FCC’s special access data collection project indicates still face a lack of competitive options and an abundance of excess-profit-extracting prices in the special access market.”

In five related posts, I considered a number of issues and perspectives that inform this policy suggestion, including the following:

In this post I’m going to:

I’ll start with an excerpt from an earlier post:

According to Table 7 on pg. 15 of ASR’s final report, the total amount (including both federal grants and matching funds) budgeted for 109 CCI projects was $3.9 billion. The table also indicates that, at the time the study was done, these projects had connected 21,240 CAIs, at a budgeted cost of $184,141 per CAI. Assuming federal grants paid for 80% of this total cost, the average federal grant amount per CAI would be in the neighborhood of $147,300.

Table 13 on pg. 34 of the report shows the changes in subscription speeds and pricing experienced by the 86 CAI locations providing this information to ASR. The table shows very large increases in speed and, depending on the category of CAI, dramatic 94-96% average reductions in per-Mbps pricing. Table 14 on pg. 36 uses these reported changes in speed and price to extrapolate CAI cost savings from switching to CCI-provided fiber connections. Averaged across all CAI categories, the per-CAI annual savings amounted to $236,151.

This means that, in just one year, the average CAI saved 28% more in operating costs ($236,151) than the total capital cost ($184,141) required to connect it to a CCI fiber network, and 60% more than the federal government’s share of that investment ($147,300). Based only on these direct social costs and benefits, I’d consider this a good investment of public funds.
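For readers who want to check the arithmetic, here is a minimal sketch of the calculation using only the figures quoted above from ASR’s report; the 80% federal grant share is the assumption stated earlier, not a figure from the report’s tables.

```python
# Back-of-envelope check of the per-CAI figures quoted above from ASR's tables.
total_budget = 3_900_000_000       # Table 7: federal grants plus matching funds, 109 CCI projects
cais_connected = 21_240            # Table 7: CAIs connected at the time of the study
cost_per_cai = 184_141             # Table 7: budgeted cost per CAI
annual_savings_per_cai = 236_151   # Table 14: average annual per-CAI savings, all categories
federal_share = 0.80               # assumption: grants covered 80% of total cost

implied_cost_per_cai = total_budget / cais_connected   # ~ $183,600, close to the reported figure
federal_cost_per_cai = federal_share * cost_per_cai    # ~ $147,300

print(f"Savings vs. total cost per CAI:    {annual_savings_per_cai / cost_per_cai - 1:.0%}")         # ~ 28%
print(f"Savings vs. federal share per CAI: {annual_savings_per_cai / federal_cost_per_cai - 1:.0%}")  # ~ 60%
```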

But these direct cost savings to CAIs were not the only impacts of the federally supported BTOP fiber deployment program that were considered by ASR. It also estimated economic benefits driven by increased broadband availability in areas newly reached by the BTOP fiber networks. Using matched pair county-level analysis, ASR found that CCI-impacted counties achieved broadband availability gains two percentage points higher than control counties. Based on this, ASR derived estimates of economic benefits of the $3.9 billion in CCI network investments using a number of widely accepted economic impact models.  These impacts included:

These findings of the ASR Analytics study suggest that:

Building on the ASR Analytics evaluation study

As noted above, ASR’s BTOP evaluation study used matched pair analysis of CCI-impacted counties to compare their growth in broadband availability to that of counties that were comparable on key control variables. ASR used NTIA availability data for multiple time periods to measure and compare these changes in availability (for more details see Appendix D of the ASR final BTOP evaluation report).

As discussed above, ASR found that, on average, the increase in broadband availability for CCI-impacted counties was two percentage points higher than in control counties, using the then-current broadband speed threshold of 3Mbps downstream service. ASR then used this differential to estimate and extrapolate economic impact variables (e.g., GDP, job growth, income) using the broadband impact models referenced above.
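To make the matched-pair comparison concrete, here is a minimal sketch, not ASR’s actual code, of how that availability differential might be computed from county-level data and re-run at a higher speed threshold; the data frame and column names are hypothetical stand-ins for the NTIA/FCC availability datasets.

```python
import pandas as pd

# Hypothetical data: each CCI-impacted county paired with a comparable control county.
# 'avail_change_pp' is the change in broadband availability (percentage points) between two
# measurement periods, computed at a chosen speed threshold (e.g., 3 Mbps or 25/3 Mbps).
pairs = pd.DataFrame({
    "pair_id":         [1, 1, 2, 2, 3, 3],
    "group":           ["cci", "control"] * 3,
    "avail_change_pp": [9.0, 6.5, 7.0, 5.5, 8.0, 6.0],
})

# Average within-pair differential (CCI minus control), analogous to ASR's
# two-percentage-point finding at the 3 Mbps threshold.
wide = pairs.pivot(index="pair_id", columns="group", values="avail_change_pp")
differential = (wide["cci"] - wide["control"]).mean()
print(f"Mean availability differential: {differential:.1f} percentage points")
```

Re-running this same comparison with updated availability data and higher speed thresholds is essentially what items 1 and 2 of the proposal below describe.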

In light of ASR’s well-documented research and its promising though preliminary findings, an effort to update and expand on the strong foundation it and NTIA have built strikes me as timely, especially with special access policy questions getting focused attention from the FCC. More specifically, what I’d propose is to:

1. Use updated FCC availability data to explore how the matched-county broadband availability differential has evolved over a longer period of time.

2. Examine this broadband availability differential using speed thresholds higher than the 3 Mbps downstream level used by ASR, including the FCC’s current threshold of 25 Mbps downstream and 3 Mbps upstream.

3. For counties for which data is available, add to the matched pair comparison an analysis of broadband adoption data derived from the Census Bureau’s American Community Survey (ACS). Beginning with 2013 data, this data is being released annually for geographies with populations greater than 65,000, and should be available for virtually all counties on a blended five-year basis starting in 2017.

4. Examine and compare actual county-level economic indicators (e.g., County Business Patterns and other datasets available from the Census Bureau and other sources) for the matched pair counties.  The goal here would be to explore the extent to which the economic impacts predicted by the models used by ASR actually occurred and/or whether there were other impacts suggested by these economic indicators.

5. Where notably large variations are found among the matched pair differentials in broadband availability and/or penetration, and/or in actual economic impact variables, explore potential reasons for these differentials based on qualitative and/or quantitative analysis of CCI projects and CCI-impacted counties exhibiting these large variations. The goal here would be to extract additional insights and lessons learned regarding how CCI networks can best deliver social value, as well as the contextual factors impacting how effective different approaches are in achieving that value.

Factors to be considered in #5 might include the ownership and management models employed by CCI grant recipients, the specific approach they take to providing “open access” to their fiber networks, as well as other policies and strategies they employ in relation to wholesale and last mile providers, CAIs, local community development programs, and local economic, demographic and institutional factors.
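One way to explore those factors systematically would be to join the matched-pair outcome differentials with project attributes drawn from grant records and ASR’s case studies, and then look for patterns. A minimal, purely illustrative sketch follows; the projects, attribute categories, and numbers are hypothetical.

```python
import pandas as pd

# Hypothetical per-project summary: matched-pair outcome differentials joined with
# attributes of each CCI project (ownership model, open access approach, etc.).
projects = pd.DataFrame({
    "project":           ["A", "B", "C", "D"],
    "ownership_model":   ["REN", "municipal", "REN", "private"],
    "open_access_model": ["IRU leases", "wholesale only", "colocation", "wholesale only"],
    "avail_diff_pp":     [3.1, 1.2, 2.4, 0.8],   # matched-pair availability differential
    "adopt_diff_pp":     [2.0, 0.5, 1.7, 0.3],   # matched-pair adoption differential (e.g., from ACS)
})

# Group the differentials by project attributes to surface large variations worth
# closer qualitative study (exploratory description, not causal inference).
print(projects.groupby("ownership_model")[["avail_diff_pp", "adopt_diff_pp"]].mean())
```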

For example, the specifics of how CCI grantees approached the BTOP program’s open access requirement have not been uniform, as explained on pages 28-29 of ASR’s final evaluation report:

To help expand service within unserved and underserved areas…[e]ach of the grantees in the evaluation study sample implemented at least one strategy, and in many cases a combination of strategies, to ensure open access to the BTOP-funded network by third-party service providers. For example, the research and education network and the healthcare network in Arkansas established a partnership to deploy new and upgraded fiber and colocation facilities. Merit Network in Michigan offered indefeasible right-of-use agreements to private third- party service providers. MassTech fostered competition by helping CAIs compare services and prices offered by third-party providers that use the BTOP-funded network.

Similarly, different CCI grantees adopted different usage and pricing policies to support positive impacts of their network investments.  For example, as described on pages 3-4 of its case study of Merit Network, a Michigan-based CCI grantee owned by member institutions of higher learning, ASR explained that:

The Merit network connects institutions of higher learning and facilitates collaboration by allowing them to freely connect to other institutions on the network, or access on-net services at speeds up to 1 Gbps. This allows institutions to collaborate on research, and to cut costs by sharing services, including hosting. Merit provides some content over this network as well, including Internet2. These services give faculty, staff, and students fast and reliable access to educational and research opportunities…The free on-net services provide incentive for CAIs to create wide area networks (WAN) using Merit fiber…[and] cost-savings [and greater efficiency] for any CAI organization with multiple locations.

Merit is an example of the Research and Education Network (REN) category of CCI grantee.  Owned by member universities, it exhibits a range of generative characteristics, including support for training, collaboration, and feedback among its user community, such as:

[Regular] opportunities for Members to learn from each other and share best practices in the networking arena. Forums include the Michigan Information Technology Executive (MITE) Forum, Merit Joint Technical Staff (MJTS), Networking Summit, Bring Your Own Device (BYOD) Summit, and the Merit Member Conference (MMC).

The Merit Advisory Council (MAC) has a direct voice to our Board of Directors and leadership through which feedback and recommendations are provided.

The Merit Services Innovation Group enables Members to provide suggestions and feedback regarding current and future services.

Merit facilitates collaboration between Members and regularly contributes staff and resources to educational and research activities.

Professional Learning events are tailored to the needs of our Members and are offered at reduced cost.

One starting point for considering the impacts of differential policies, structures, strategies and programs among CCI grantees would be a careful review of the dozen CCI case studies conducted by ASR. Another would be using the expanded matched pair county analysis described above to identify differences in the availability, penetration and economic impact variables across CCI projects.

In my view, a research project along these general lines would: 1) help maximize the ongoing value provided by existing CCI projects; 2) provide valuable guidance for consideration of future programs designed to build on the success of and lessons learned from the BTOP program; and 3) shed light on policy debates and options related to special access and perhaps other communication and infrastructure policy issues.



Community Broadband As Generative Infrastructure

April 16th, 2016

In an earlier post I discussed Marjorie Kelly’s framework for distinguishing “generative” vs. “extractive” ownership models.  In this post, I’ll try to further clarify this distinction by considering some key characteristics of community-owned local access networks in relation to Kelly’s framework (in my next post I’ll shift my focus to middle mile and special access fiber).

To get started, I’ll reiterate one of my key working premises, that when an extractive ownership model is combined with a lack of competitive pressure or corrective regulation (as is currently the situation in much of the nation’s local access and special access markets), service providers can achieve high levels of financial extraction.  This can lead to broad and substantial economic harm, especially in the case of core infrastructure like the Internet and telecommunications in general.  That’s because these infrastructure resources tend to be “spillover rich” when managed in a non-discriminatory way, but less so when constrained by dominant ISPs’ internal monetization priorities, as I discussed in the latter section of an earlier post, and which Brett Frischmann addressed in far more depth in his book, Infrastructure: The Social Value of Shared Resources.

As discussed in my last post, I view the longstanding focus on shareholder returns at the expense of network upgrades by AT&T, the nation’s largest ILEC, as one example of this dynamic.  Another relates to the nation’s leading cable operators.  On one hand these companies have been able—thanks to their much higher speeds relative to DSL—to capture nearly all net broadband customer growth in recent years (see the table here for 2015 data).  But, at the same time, these same companies have consistently been ranked at or near the bottom among all U.S. industries in customer satisfaction surveys.  Simple economic logic tells us that, if there was minimally healthy competition in the market for these higher-speed broadband connections, this combination of strongly positive market share gains with strongly negative customer satisfaction would be a very unlikely outcome.  As the title of Susan Crawford’s 2013 book points out, these growing ranks of customers signing up for and retaining cable modem service are, in a very practical sense, a Captive Audience.

Later on in this post I’ll suggest some research questions I believe are worth pursuing related to the operation and impacts of community-owned networks and the relevance of Kelly’s ownership framework to the broadband access market. But before I do, I want to consider how some of Kelly’s “generative” characteristics apply in theory and practice to community broadband.

In applying Kelly’s framework to local access ownership models, it makes sense to start with Purpose, which Kelly describes as the most fundamental design element.

Having studied a number of community-owned broadband networks, I’d say that all or most were undertaken with a purpose along the following lines: to provide local households, businesses, and public service organizations (e.g., schools, healthcare providers, public safety, etc.) with affordable, reliable, symmetrical, high-capacity broadband connectivity and related services; to support their ability to thrive in an increasingly competitive and knowledge-based global economy; and to provide decent-paying jobs to local citizens who, in turn, provide high-quality customer service to the network’s customers.

While privately owned networks, including those owned by publicly-traded corporations, might claim to have the same or a very similar purpose, years of listening to earnings calls of these publicly-traded companies tells me that such goals are, at best, secondary priorities to the overriding goal of maximizing shareholder value, and ones that will be jettisoned if they conflict with the latter goal. And if shareholder value-maximizing decisions are not made with sufficient speed, vigor and clarity of (extractive) purpose, a publicly-traded firm’s management is likely to face intense pressure from investors, particularly those focused on a relatively short time horizon for measuring shareholder value.

It’s also clear to me that, compared with most publicly-traded cablecos and telcos, community-owned networks tend to have locally Rooted Membership (vs. the Absentee Ownership characterized by publicly-traded stocks); Mission-Controlled Governance focused on achieving benefits for the local community (vs. Governance by Markets, stock price, and related measures of financial profitability); and tend to function as part of local and national Ethical Networks focused on supporting sustainable community development (vs. Commodity Networks geared toward maximizing financial extraction).  And, as discussed further below, while the capital-intensive nature of last mile access networks makes reliance on Stakeholder Finance challenging, there are models developing on this front as well.

A mix of generative characteristics and results

The general family of community-owned networks includes both municipal ownership (often through an existing municipal power utility) and end-user cooperatives (often through an existing rural electric co-op). While there are legal, organizational and financial differences between these ownership models, my preliminary research suggests these aren’t large enough to significantly alter the fundamental ways in which they differ from ownership by publicly-traded cable and telephone companies.

That being said, different combinations of generative characteristics may yield somewhat different sets of strengths and weaknesses, with these sometimes impacted by other factors, including relevant state laws and local regulations, existing institutional relationships and expertise, local market dynamics, and the mix of stakeholders supporting the project.

The mix of generative characteristics and situational factors can also impact how a community network evolves over time, and whether its particular model is sustainable within the environment in which it has taken root.

For example, in the case of municipally-owned networks, the departure of strong founding project leadership has sometimes led to a migration of decision-making to political leaders lacking appropriate industry expertise, particularly if the network was deployed in a community that lacked an existing public utility that was well-managed and enjoyed the loyalty and respect of its local customer base. In my view this type of management transition led to problems for the community-owned network in Burlington, VT, and highlights a potential point of design weakness for municipally-owned networks lacking a well-established and sufficiently independent, professional, non-political utility management unit.

In other cases, management of a municipally-owned network has experienced disruptive discontinuities when local political leadership has changed. This seems to be most likely if such change occurs when the network and/or city is facing financial or other challenges, and/or when candidates or newly elected officials view the shortcomings of their predecessors’ network project as a useful rallying point for mobilizing political support.  This also supports the notion that, once launched, the ongoing management of a community network needs to be sufficiently sheltered from day-to-day and election-to-election political pressures.  My research suggests that this is most readily achieved in communities that already have some form of municipal utility infrastructure (or set up a strong-enough one when a network project is launched).

Similarly, the interaction of political dynamics among multiple jurisdictions has sometimes complicated and/or delayed decision-making, adding to the challenges of addressing unexpected financial or operational problems. My sense is that this has been a factor for the multi-city UTOPIA fiber network in Utah, one that was made even more difficult by state restrictions on the network’s ability to provide retail services.

This latter point highlights the significance of state regulation as a factor that can influence which generative characteristics are likely to be most effective (or even legal) in achieving a project’s internal and external goals.  In the case of UTOPIA, the retail-service prohibition presented  serious and not-well-understood challenges and risks related to marketing, finances, technology and overall management of a wholesale-only local access enterprise.  As those risks became more clear with time, the task of addressing them was made even more challenging by the project’s multi-city management and financial structure.  As a result, this pioneering and arguably over-ambitious project, launched a dozen years ago, has become the perennial poster boy used by critics to make their case against community broadband.

In my view, the risks related to local political dynamics suggest that communities without existing public utilities should take these risks very seriously.  While establishing a strong and sufficiently independent municipal utility structure is one option, another would be to adopt a form of cooperative structure (e.g., similar to the rural telephone and electric cooperatives common in very rural areas).  This would help ensure that the network is responsive to end-users rather than to local politicians with many and often conflicting priorities who, in some cases, are focused too much on the next election and lack appreciation for the management requirements of the network. That being said, cooperative managers are not immune from losing touch with their members’ needs, nor can these members always be relied upon to wisely exercise the rights and responsibilities of their membership.

These challenges highlight the value of initial and sustainable stakeholder “buy-in” for a community network to succeed, a factor that relates to Kelly’s concepts of Rooted Membership and Mission-Controlled Governance.

The value of building and maintaining stakeholder buy-in seems especially important during a network’s early years, when the bulk of construction is underway. This is because costs are especially high during this startup phase, while revenues are just beginning to ramp up. Case study research suggests that it’s during this startup period that incumbents have the most leverage to mobilize their considerable resources to weaken both the economic viability and credibility of community networks.

Given its role as core communication infrastructure,  a community-owned broadband network is a resource likely to impact virtually every organization in a community, including local government, public safety, education, healthcare, non-profits, and businesses both large and small.  This suggests that important elements of generative structure (e.g., Rooted Membership and Mission-Controlled Governance) will be closely tied to how these various stakeholders (as well as residential users) are involved in setting goals and priorities and the decision-making processes related to network management, build-out plans, resource allocation, service development, pricing, etc. This governance issue also relates to the Ethical Network element of generative ownership, as reflected in the relationships among community leaders, their local constituencies and their counterparts in other communities that have invested in a community-owned network or are considering such an investment.

Stakeholder finance: challenging but potentially fortifying

Deploying a community broadband network that relies on Stakeholder Finance strikes me as more challenging than Kelly’s other generative design elements, in large part because the high upfront cost and capital-intensive nature of communication networks has typically required access to public debt markets.

One of the clearest and apparently successful examples of local stakeholder financing is Vermont’s ECFiber, now officially known as the East Central Vermont Telecommunications District.

As the following excerpt from ECFiber’s website indicates, its decision to initially rely on local stakeholder financing was made out of necessity:

On Town Meeting Day 2008, 24 towns voted to join ECFiber…In August, 2008 23 towns signed the Inter-local Contract and by early September, the initiative’s underwriter, Oppenheimer & Co., had pledges of $70 million. One week later the international financial markets collapsed taking ECFiber’s initial funding effort with it.

ECFiber then submitted several funding proposals under the American Recovery and Reinvestment ACT (stimulus program), but with no operating history at that point, we were edged out by competing proposals from other local companies.

But undaunted, ECFiber returned to the Vermont roots of self-reliance and initiated our current program of grass-roots funding. With the advice of local counsel, ECFiber developed a program of issuing promissory notes in a private placement offering. The notes are offered in $2500 units. The first round of financing, in January 2011, raised $912,000, which enabled us to build our first 20+ mile loop… Additional rounds of financing have brought total investment to nearly $5 million…It is ECFiber’s intention, at some suitable point, to return to the capital markets to seek sufficient funding to build out the entire network in all member towns.

According to a March 11, 2016 press release, ECFiber has reached the point where it is ready to augment its initial base of local stakeholder financing via institutional capital markets:

ECFiber…plans to activate 110 miles of network in 2016 and build an additional 250 miles in 2017. “Working with bond underwriters, we believe ECFiber has reached the point in its financial development that allows us to access institutional capital markets for the first time in 2016,” says Irv Thomae, District Chairman.

While the financing model pioneered by ECFiber may lead to slower network buildout, it may strengthen the Stakeholder Financing element of its generative ownership design, since most of its initial funding came from community members with a three-pronged interest in its success—as customers, as local community members, and as direct financial investors.

This underscores a broader and important point: community broadband planners are likely to increase their chances for success if, from the beginning, they keep in mind all elements of Kelly’s generative ownership structure, including how they interact with each other to support the project’s Living Purpose.

To a large extent, all of Kelly’s ownership characteristics relate to the effective and sustainable harnessing of stakeholder support and participation.  As both successful and unsuccessful community broadband projects have demonstrated, strong and sustained support from community stakeholders provides a solid—and perhaps the most essential— foundation upon which to build a community network.

As noted above, this foundation has proven to be especially important during a project’s startup phase, when learning curves and financial pressures abound, and when community networks are likely to be most vulnerable to well-financed political, legal and predatory pricing attacks by incumbent service providers. The more fully and firmly that stakeholder support is embedded in a community network’s design, the more likely it is to weather these startup storms and any squalls that might follow in later years. And the more likely it will be to remain focused on prioritizing the social benefits and community development goals that give it the Living Purpose that distinguishes it from the financial priorities of extractive ownership models.

More research can help

The existing body of research focused on community broadband networks tends to be heavily polarized and somewhat anecdotal, with proponents focusing on success stories and opponents on the sector’s most notable failures, even if, as with UTOPIA, they were launched many years ago and were subject to a unique mix of situations and constraints that virtually guaranteed they would illustrate painful lessons for others to learn from. To a large extent the tone and content of existing research reflects the often intense political battles at the state, local and national levels regarding restrictions on community broadband projects.  As with most politically charged policy-related research, the result is a strong tendency toward cherry-picking of projects to study and data to analyze.

My own view is that case study-oriented research by myself and others provides clear evidence that community-owned broadband networks can and often do succeed in terms of both their internal economics and in bringing to their communities lower prices, faster speeds, better customer service and more robust support for the potentially large but difficult-to-internally-monetize social goods discussed in Frischmann’s book.

That being said, I also believe that state and local policymakers, local decision-makers and communication scholars could benefit from additional and less agenda-driven research in this area, perhaps conducted by a team of researchers representing a range of perspectives and expertise, and well sheltered from bias based on the source of their funding.  Among the questions I view as worthy of such research are the following:

Given the intensity of debate surrounding state laws restricting the ability of communities to finance and control their local broadband networks; the FCC’s efforts to preempt such state restrictions and; the expansion of both publicly-owned and privately-owned (e.g., Google Fiber) competitive network models, I believe research focused on these questions can help local leaders, state and federal policymakers, and private sector players make better-informed decisions about how best to leverage the power of high-speed Internet access to benefit our nation’s citizens, businesses and public institutions.

While the focus of this post has been the relevance of Kelly’s ownership framework to the local broadband access market, I believe it is also relevant to policy issues and research questions related to the special access market, which is the focus of the final blog post in this series.

 



Extractive vs. Generative Ownership of Telecom Infrastructure

April 16th, 2016

In her 2012 book Owning Our Future: The Emerging Ownership Revolution, Marjorie Kelly, Executive Vice President and a Senior Fellow with The Democracy Collaborative, provides a framework for understanding and distinguishing what she describes as “generative” vs. “extractive” ownership designs.  In key respects, the book builds on Kelly’s first book, The Divine Right of Capital: Dethroning the Corporate Aristocracy, published more than a decade ago (you can read the latter’s introduction here).

Drawing an apt and powerful parallel to the divine right of kings, Kelly’s first book does a masterful job of opening readers’ minds to the arbitrary and distorting nature of the ownership and control model embodied in today’s publicly-traded corporations. In Owning our Future, she does an equally impressive job helping readers understand and appreciate the significance of the range of alternative ownership structures emerging across the economy.  A clear, succinct and enjoyable read, Owning Our Future clarifies:

In a May 17, 2012 talk entitled From the Fringe to the Leading Edge: Generative Design Goes to Scale, at the annual conference of the Business Alliance for Local Living Economies (BALLE), Kelly highlighted the fundamental importance of ownership in our economy and our world, and the problems caused by today’s dominant form of ownership:

Every economy is built on the foundation of ownership… Questions about who owns the wealth-producing infrastructure of an economy, whose interests it serves, these are among the largest issues any society can face…The crises we face today, ecologically and financially, are tangled at their root with the particular form of ownership that dominates our world – the publicly traded corporation, where ownership shares trade in public stock markets. The revenues of the 1,000 largest of these corporations represents roughly 80% of global GDP.

Kelly then briefly reviewed what her years of research have led her to understand about “generative” alternatives to the dominant “extractive” form of ownership. “The first and most important difference” she says is a “Living Purpose.”

…the many ownership alternatives – from community land trusts and cooperatives to social enterprises and community ownership of the commons – these alternatives represent a single, coherent school of design. It’s a family of generative ownership designs. Together, they form the foundation of a generative economy.

 Generative means the carrying on of life, and generative design is about the institutional framework for doing so. In their basic purpose, and in their living impact, these designs have an aim of generating the conditions where all life can thrive. They are built around a Living Purpose.

This is in contrast to the dominant ownership designs of today, which we might call extractive. Their aim is maximum extraction of financial wealth. They are built around a single-minded Financial Purpose.

But, according to Kelly, “purpose alone isn’t enough.”  Also needed, she says, is “the presence of at least one other structural element that holds that purpose in place.”  These additional elements of generative design are:

Membership. Who’s part of the enterprise? Who has a right to a say in profits, and who takes the risk of ownership? Corporations today have Absentee Ownership. Generative ownership has Rooted Membership, with ownership held in human hands.

Governance. Extractive ownership involves Governance by Markets, where control is linked to share price. Generative ownership involves Mission-Controlled Governance, with control held in mission-oriented hands.

Finance. Instead of the Casino Finance of traditional stock market ownership, generative approaches involve Stakeholder Finance, where capital becomes a long-term friend.

Networks…If traditional approaches use Commodity Networks, where goods trade based solely on price, generative economies use Ethical Networks, which offer collective support for social and ecological norms.

Kelly then notes that, while “[n]ot every ownership model has every one of these design elements…the more elements that are used, the more effective the design.”

How does this apply to the telecom sector?

Having listened to many an earnings call and followed the telecom industry for nearly three decades, it seems pretty clear to me that the dominant publicly-traded cable and telephone companies have an overriding Financial Purpose, as expressed by management’s intense focus on cash flow, stock price, profits, market share, average revenue per unit, pricing power, and other financial metrics.

Related to these metrics is an intense focus (in statements made to Wall Street analysts as well as actual financial decision-making) on “return of capital to shareholders,” largely in the form of dividends and stock buybacks.

While this is perfectly legal and very understandable from the perspective of corporate management (whose compensation is often based on stock price), the fact is that these returns to shareholders are allocations of cash flow that might otherwise be used to deliver more value to customers. For example, this cash flow could be invested in network upgrades and/or improved customer service. The latter is particularly notable, since both the cable and telephone industries have longstanding and well-earned reputations for poor customer service, as reflected in virtually all national surveys (and plenty of anecdotes shared among friends and posted on the web). This, I believe, is largely because, as monopolists or duopolists with substantial market power and extractive ownership designs, these companies tend to be more focused on satisfying the desires of shareholders and Wall Street analysts than those of largely-captive customers with limited options for taking their business elsewhere.

These same industry dynamics are also clear evidence that, in addition to Financial Purpose, the nation’s large publicly-traded cable and telecom giants are also characterized by what Kelly refers to as Absentee Ownership, Governance by Markets, Casino Finance and Commodity Networks, and that their managements are heavily influenced by pressure from Wall Street analysts and traders, whose work takes place even more deeply at the core of our economy’s financial extraction machinery.

AT&T as an example of extractive ownership

In my view, AT&T, the nation’s largest ILEC, is a good example of the kind of financially extractive ownership model that currently dominates the top tier of telecom companies, virtually all of which have their historic roots in monopoly or near-monopoly market environments. Below is a brief review of key elements of the company’s history over the past decade or so, to illustrate what I mean.

In 2004 AT&T (then SBC Communications) announced that its next-generation network upgrade strategy would rely mainly on fiber-to-the-node (FTTN) technology, which uses a form of DSL technology to deliver both Internet and TV services over the final stretch of copper wires that connect customer locations. The initial budget allocated $6 billion to deploy a FTTN network (dubbed U-verse) that passed 18 million premises.

By late 2012, AT&T’s U-verse footprint had expanded to 24.5 million premises, which suggests a total U-verse investment up to that point of about $8 billion. At that time, AT&T announced Project Velocity IP (VIP), which was to invest another $6 billion over three years to expand U-verse availability to 33 million premises (roughly 43% of its total footprint), while deploying next-generation DSL technology to boost Internet speeds for another 24 million premises, suggesting a total next-generation wireline investment of roughly $14 billion over the course of a decade.

It’s important to note that, based on AT&T’s announced plan, when this second major upgrade program was to be completed, roughly 19 million premises (one of every four passed by its networks) would not have access to ANY wireline broadband service from AT&T.

To put AT&T’s network upgrade strategy in a financial and corporate strategy context, consider that, between 2006 and 2015, AT&T returned an average of nearly $14 billion per year to its shareholders in the form of dividends and stock buybacks. This means the company has returned roughly as much money to shareholders in an average year as it has allocated over a decade to its next-generation wireline network upgrade. In total, between 2000 and 2015 the company returned nearly $172 billion to shareholders in the form of dividends and stock buybacks. That’s equal to 73% of its total capital spending during this period, with two years, 2012 and 2013, seeing shareholder returns exceed CapEx (117% of CapEx in 2012 and 107% in 2013).  By my estimation, if AT&T had reallocated just half of these shareholder returns to full “fiber-to-the-premise” network upgrades, it could have extended state-of-the-art all-fiber networks to nearly 90% of the roughly 76 million premises passed by its network, assuming construction costs comparable to those budgeted by Verizon, the nation’s second largest telco, nearly a decade ago.  Boost that CapEx reallocation percentage a bit more, and virtually all of AT&T’s customers and the communities in which they live and work would by now be enjoying the direct and spillover benefits enabled by fiber’s symmetrical gigabit-level speeds and superior reliability.
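A rough sketch of that back-of-envelope estimate follows, using only the figures cited in this post. The per-premises fiber construction cost is not stated above, so the sketch simply backs out the cost implied by the “nearly 90%” claim and then shows the coverage arithmetic with that cost treated as an assumption.

```python
# Rough check of the fiber reallocation scenario described above.
shareholder_returns = 172e9   # dividends plus buybacks, 2000-2015 (from the text)
premises_passed = 76e6        # approximate premises passed by AT&T's wireline network (from the text)
reallocated = 0.5 * shareholder_returns

# Per-premises cost implied by the claim that half of those returns could fund
# fiber to nearly 90% of premises passed:
implied_cost = reallocated / (0.90 * premises_passed)
print(f"Implied FTTP cost per premises passed: ${implied_cost:,.0f}")   # ~ $1,260

# Coverage achievable at an assumed per-premises cost (an assumption, not AT&T or Verizon data):
assumed_cost = 1_250.0
coverage = reallocated / assumed_cost / premises_passed
print(f"Premises reachable at ${assumed_cost:,.0f} per premises: {coverage:.0%}")  # ~ 91%
```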

A related indicator of AT&T’s priorities is its 2015 acquisition of DirecTV for $67 billion, about $48.5 billion of which was financed by equity and the remainder by debt. While this may prove to be a smart move for AT&T from a competitive and financial perspective, given the competitive weakness of its underfunded but once much-touted U-verse upgrade strategy, it’s worth noting that this massive M&A investment: 1) generated virtually no new infrastructure or competitive entry; 2) involved an additional debt burden greater than the total amount AT&T had invested in the initial U-verse and follow-on Project VIP phases of its wireline network upgrade, and a total acquisition-related investment nearly five times the financial magnitude of these wireline upgrade investments.

One need not be a technology or financial expert to get a sense from the above that the strategic priorities driving AT&T’s investment decisions focus more on share price and profits than on maximizing the social value of the Internet’s “spillover rich” infrastructure within the service territory originally granted to Ma Bell as a protected monopoly, and later inherited (and largely reconsolidated) by the corporate entity we now know as AT&T. While this strategy may make very good sense from the perspective of AT&T’s management and shareholders, my point here is that, given the company’s (and its peer group’s) dominant role in the communication sector and our national political economy, “what’s good for AT&T,” may not be so good for the nation as a whole.  And, if that’s the case, perhaps there’s more that we, as a society, can and should do to shift the focus of public policy toward an approach to the Internet that does, in fact, focus on maximizing its positive (yet difficult to quantify and even more difficult to internally monetize) direct and indirect spillover effects.

In the next post in this series I consider Kelly’s framework in the context of the local broadband access market.



The Internet as “Spillover-Rich” Infrastructure

April 16th, 2016

A key source that informs my perspective on special access policy—and telecom policy in general—is Brett Frischmann’s 2012 book, Infrastructure: The Social Value of Shared Resources.  Selected chapters from the book can be found here, and Frischmann provides a good overview of the book, including its Internet-related sections, in this talk at Harvard’s Berkman Center.  Frischmann is a professor and co-Director of the Intellectual Property and Information Law program at Cardozo Law School in New York City.

On page five of the book, Frischmann notes that “most economists recognize that infrastructure resources are important to society precisely because infrastructure resources give rise to large social gains.”  But he also notes that infrastructure’s “externalities are sufficiently difficult to observe or measure quantitatively, much less capture in economic transactions, and the benefits may be diffuse and sufficiently small in magnitude to escape the attention of individual beneficiaries.”  On page 6 he cites the “comedy of the commons” concept developed by Carol Rose which, he explains, “arises where open access to a resource leads to scale returns—greater social value with greater use of the resource.”

The book includes a chapter focused specifically on the Internet, which Frischmann describes on page 345 as “a spillover-rich environment because of the basic user capabilities it provides and the incredibly wide variety of user activities that generate and share public and social goods.”

But, as with other infrastructure, it is not easy to quantify the net value of Internet-supported social goods, or even identify what all of those social goods are.  As Frischmann explains on pg. 347, Internet connectivity involves “a very high degree of social value uncertainty.”

 It is impossible to predict with any degree of confidence who or what will be the sources of social value in the future.  Accordingly, there is no reason to defer to private firms in this context. First, there is no reason to believe that firms are better informed, capable of maximizing social value, or likely to resist the pressure to discriminate, prioritize, or optimize the infrastructure based on foreseeable and appropriable private returns. Second, there is no reason to trust that markets will correct misallocations…[F]irms may be strongly biased in their estimation of the future market value to favor services that they currently offer or expect to offer, sponsor, or otherwise control, and to disfavor those that they do not.

One recent example of the tendency of private access network owners with market power to “discriminate, prioritize, or optimize the infrastructure based on…appropriable private returns” is Comcast’s recently launched Stream TV service.  The element of discrimination is clear in that Stream TV, unlike competing services like Netflix, does not count against the data caps that Comcast customers are or will be subject to.

On page 346 Frischmann points to two demand-driven problems that need to be considered by Internet-related policies:

 The Internet infrastructure is a mixed infrastructure, and as such, it faces the two types of demand-driven problems discussed throughout this book: First, it faces concerns about undersupply and underuse of infrastructure to produce infrastructure-dependent public and social goods, which leads to underproduction of those goods. Second, it faces concerns that infrastructure development may be skewed in socially undesirable directions. For example, if private infrastructure owners prematurely optimize infrastructure for uses that they expect will maximize their private returns, and in doing so choose a path that forecloses production of various public or social goods that would yield greater net social returns, the social option value of the Internet is reduced. This latter concern may involve dynamic shifts in the nature of the Internet infrastructure, such as optimizing networks in a manner that shifts from mixed infrastructure toward commercial infrastructure.

In key respects, these two problems correspond to the social harms of primary concern to Singer (undersupply by companies subject to regulation) and Cooper (actions by private infrastructure owners to “maximize their financial returns” in ways “that foreclose production of various public or social goods that would yield greater net social returns”).

Frischmann’s analysis of infrastructure and Internet dynamics leads him to conclude that the abundant but difficult to predict and quantify social benefits of the Internet are best supported via commons management.  On page 7 he describes commons management as “the situation in which a resource is accessible to all members of a community on nondiscriminatory terms, meaning terms that do not depend on the users’ identity or intended use.”  This, he says, “can be implemented through a variety of public and private institutions, including open access, common property, and other resource management or governance regimes.”

Frischmann acknowledges that “grouping ‘open access’ and ‘commons’ under the ‘commons management’ umbrella will be troublesome to some property scholars:”

Open access typically implies no ownership or property rights. No entity possesses the right to exclude others from the resource; all who want access can get access, typically for free. Commons typically involves some form of communal ownership (community property rights, public property rights, joint ownership rights), such that members of the relevant community obtain access “under rules that may range from ‘anything goes’ to quite crisply articulated formal rules that are effectively enforced” and nonmembers can be excluded.

 There are at least three dimensions of distinction between open access and commons as traditionally understood: first, ownership (none vs. communal/group); second, the definition of community (public at large vs. a more narrowly defined and circumscribed group with some boundary between members and nonmembers); and third, the degree of exclusion (none vs. exclusion of nonmembers). These distinctions are important, especially for understanding different institutions and how social arrangements operate at different scales.

In two other posts (see here and here) I consider key issues related to infrastructure ownership in more detail, from the perspective of the “generative” vs. “extractive” ownership framework developed by Marjorie Kelly in her book, Owning our Future.

 



The Federal Government CAN Afford to Invest in Infrastructure

April 16th, 2016

One argument against federal funding to support special access and community broadband networks—or potentially any infrastructure project—is that the federal government “can’t afford it,” especially given the widely held belief that it should prioritize balancing the federal budget and paying down the federal debt.[1]

My suggestion to those holding this view (or to those confused or intimidated by it in public policy debates) is to begin examining the extensive literature on Modern Monetary Theory (MMT), perhaps starting with the selection of material linked at the end of this post (some of it scholarly in nature, some geared more toward the layperson).[2]

I certainly don’t expect this single blog post to convince skeptics of the validity of MMT, but I will discuss it a bit more before moving on to other perspectives that inform the policy approaches I’m attempting to develop here.

One of the most central and policy-significant concepts of MMT is that what we consider to be the federal government’s “deficit” and “debt” are not the equivalent of the debts carried by private households and businesses (or, for that matter, individual states).  The key difference—and one with major policy implications—is that the federal government is the “issuer” of our nation’s currency (and thus cannot “run out of dollars”), whereas the rest of us are “users” of that currency (and definitely can run out of dollars). This doesn’t mean that the federal deficit and federal spending levels don’t matter at all; it just means that how they matter isn’t the same as how household and business debts matter.  As MMT economist Bill Mitchell put it in a long blog post that I excerpted in a much shorter one (bolding is mine):

[A] nation will have maximum fiscal space:

 1) If it operates with a sovereign currency; that is, a currency that is issued by the sovereign government and that is not pegged to foreign currencies; and

 2) If it avoids incurring debt in foreign currencies, and avoids guaranteeing the foreign currency debt of domestic entities (firms, households, or state, province, or city debts).

 Under these conditions, the national government can always afford to purchase anything that is available for sale in its own currency. This means that if there are unemployed resources, the government can always mobilize them – putting them to productive use – through the use of fiscal policy. Such a government is not revenue-constrained, which means it does not face the financing constraints that a private household or firm faces in framing their expenditure decision.

 To put it as simply as possible – this means that if there are unemployed workers who are willing to work, a sovereign government can afford to hire them to perform useful work in the public interest. From a macroeconomic efficiency argument, a primary aim of public policy is to fully utilize available resources.

Back in 2012 I discussed MMT in a number of posts on my personal blog.  Another post that strikes me as especially relevant to this discussion is entitled Understanding and Embracing the Sovereign Currency Opportunity.  It discusses a post by Dan Kervick on the New Economic Perspectives blog, which I thought did a good job of describing the nature of what I refer to as the “sovereign currency opportunity,” and its relevance to broadband and other infrastructure-related policies.

As Kervick explains:

MMT argues that [what we refer to as a federal budget deficit] should be recognized as the normal operating condition of an intelligent national government pursuing public purposes in an effective way, at least when that government is a sovereign currency issuer that lets its currency float freely on foreign exchange markets. If the government is running a deficit in its currency, then the non-governmental sectors of the economy are running a surplus in that currency and their net stock of financial assets in that currency is growing. If the government is running a surplus, on the other hand, then the net stock of financial assets in the non-governmental sectors is decreasing.  We expect a growing economy to be increasing its financial asset stocks, and so we should expect government deficits as a matter of course.
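Kervick’s point rests on a basic national-accounting identity. As a minimal illustration (my own restatement, not something quoted from his post), the sectoral-balances identity follows from the two standard expressions for GDP:

```latex
% Sectoral-balances identity (standard national income accounting):
%   GDP = C + I + G + (X - M)   (expenditure approach)
%   GDP = C + S + T             (uses of income)
% Equating the two and rearranging gives
\[
  (S - I) \;+\; (T - G) \;+\; (M - X) \;=\; 0
  \qquad\Longleftrightarrow\qquad
  (S - I) \;=\; (G - T) \;+\; (X - M)
\]
% where S = private saving, I = private investment, T = taxes, G = government
% spending, X = exports, and M = imports. A government deficit (G - T > 0)
% therefore corresponds, all else equal, to a non-government surplus.
```

In other words, when the government sector runs a deficit in its own currency, the non-government sectors (domestic private plus foreign), taken together, are accumulating net financial assets denominated in that currency.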

A related critique of public investment in infrastructure is that it will crowd out private investment. But, as is often the case with special access and local broadband networks, if the private sector entities best positioned to make that investment (mainly because they operated for decades as competitively and financially protected monopolies) require financial returns that lead to the economic harms suggested by both CFA’s and ASR’s analyses, then I’d argue that so-called crowding out of that investment is likely to be a good thing for the economy and society as a whole (I discuss factors related to the interaction between “shareholder value” and “social value” here and here).

[1]  I’ll briefly note here that several of Bernie Sanders’ key economic advisers (including Stephanie Kelton, who recently served as the Democrats’ Chief Economist on the Senate Budget Committee, on which Sanders is the ranking member) appreciate the relevance of MMT to today’s policy debates, including the expanded fiscal space it opens up to federal governments that issue sovereign currencies (which, by the way, is sadly no longer the case for nations that use the Euro as their currency).  So, even though Sanders typically balances his ambitious infrastructure investment and other proposals with offsetting tax revenue, an understanding of MMT makes it clear that this is not necessary in the way that most politicians and voters (and still too many economists) appear to believe it is.

[2] For those interested in more information about MMT, I’d recommend the following, in rough descending order of sophistication and time required to digest them: 1) a recently published textbook entitled Modern Monetary Theory and Practice – an Introductory Text, by economists Bill Mitchell and Randall Wray; 2) a Levy Institute working paper entitled Modern Money Theory 101: A Reply to Critics, authored by Wray and Eric Tymoigne; 3) Wray’s MMT Primer, including a link to the published version and the original blog-based discussions on which it was based; 4) my own first exposure to MMT, Seven Deadly Innocent Frauds of Economic Policy, by Warren Mosler; 5) a layperson-friendly, graphics-rich e-book entitled Diagrams & Dollars: Modern Money Illustrated, by J.D. Alt (available as a Kindle e-book or a somewhat abridged two-part blog post); and 6) for those with only a few minutes to spare, a very brief excerpt from early MMT textbook draft material that I cited in a 2012 blog post because I thought it succinctly summarized several key points.

Tags: , , , , ,


A “Public Infrastructure” Perspective on Special Access

by

As I discussed in an earlier post, the Consumer Federation of America (CFA) recently released a paper by its Director of Research, Mark Cooper, which made the case that the FCC’s decision to deregulate special access in 1999 was premature and has resulted in large-scale economic harm, including an estimated $150 billion over the past five years. Cooper’s analysis focused on two elements of harm: 1) the direct costs associated with non-competitive, excess-profit-extracting pricing; and 2) the indirect economic costs associated with this pricing regime.

As it turns out, a few days after Cooper presented an overview of his analysis at a New America Foundation event, a paper was published by Economists Incorporated (EI). Written by EI principal Hal Singer and, according to its cover page, funded at least in part by USTelecom, the nation’s ILEC trade association, the paper approached the issue from a different perspective, as explained in its executive summary:

This paper seeks to model the likely impact of the FCC’s recent effort to preserve and extend its special access rules on broadband deployment, as telcos transition from TDM-based copper networks to IP-based fiber networks to serve business broadband customers. The deployment impact of expanded special access rules can be measured as the difference between (1) how many buildings would have been lit with fiber by telcos in the absence of the rules and (2) how many buildings will be lit with fiber by telcos in the presence of the rules. With an estimate of the cost per building, the deployment impact can be converted into an investment impact. And with estimates of broadband-specific multipliers, the fiber-to-the-building network investment impact can be converted into job and output effects.

The executive summary also highlights the study’s key findings:

In the absence of any new regulation (the “Baseline Case”), an ILEC is predicted to increase business-fiber penetration… from 10 to 20 percent over the coming years…Next, we model a scenario where special-access price regulation extends to the ILECs’ fiber networks. Assuming this scenario reduces an ILEC’s expected Ethernet revenue by 30 percent—the typical price effect associated with prior episodes of price-cap regulation and unbundling—the model predicts that ILEC will increase business-fiber penetration from 10 to 14 percent (compared to 20 percent in the Baseline Case)…Thus, the special access obligations under this scenario result in a 55 percent reduction in an ILEC’s CapEx relative to the Baseline Case….Thus, expansion of special access price regulation to Ethernet services is predicted to reduce ILEC fiber-based penetration by 67,300 buildings nationwide—a result that is hard to reconcile with the FCC’s mandate to encourage broadband deployment.

Singer then considers the spillover effects of this reduced ILEC investment in fiber infrastructure. Using “a jobs multiplier of approximately 20 jobs per million dollars of broadband investment” and “a fiber-construction output multiplier of 3.12,” Singer estimates that FCC special access rules would result in an annual loss of 43,560 jobs and $3.4 billion in economic output, sustained over a five-year period.
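To make the structure of the EI model concrete, here is a minimal sketch of the calculation chain its executive summary describes: foregone buildings times a cost per building gives an investment impact, which the multipliers then convert into job and output effects. The cost-per-building figure below is a placeholder of my own (the paper’s cost and timing assumptions are not given in this post), so the printed totals are illustrative only and will not reproduce the paper’s reported annual figures.

```python
# Minimal sketch of the deployment-impact -> investment -> jobs/output chain
# described in the EI paper's executive summary. COST_PER_BUILDING is a
# hypothetical placeholder, not a value taken from the paper, so the printed
# totals are illustrative only.

BUILDINGS_FOREGONE = 67_300   # buildings not lit with fiber, per the paper
COST_PER_BUILDING = 30_000    # hypothetical average cost to light a building ($)
JOBS_PER_MILLION = 20         # jobs multiplier cited in the paper (jobs per $1M invested)
OUTPUT_MULTIPLIER = 3.12      # fiber-construction output multiplier cited in the paper

investment_impact = BUILDINGS_FOREGONE * COST_PER_BUILDING          # foregone capex ($)
jobs_impact = (investment_impact / 1_000_000) * JOBS_PER_MILLION    # jobs supported by that capex
output_impact = investment_impact * OUTPUT_MULTIPLIER               # broader output effect ($)

print(f"Foregone investment: ${investment_impact / 1e9:.2f} billion")
print(f"Jobs impact:         {jobs_impact:,.0f}")
print(f"Output impact:       ${output_impact / 1e9:.2f} billion")
```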

It’s worth noting that Singer’s estimate of $17 billion in economic losses over a five-year period due to imposition of special access rules is considerably lower than Cooper’s estimate of $150 billion in economic harm from the unregulated status quo in today’s special access market. While Singer and others will likely take issue with Cooper’s assumptions and estimates, the latter’s paper seems, at the very least, to make a strong case that the economic benefits and harms associated with different special access regulatory regimes don’t flow only in the direction analyzed by Singer, and that policymakers would be wise to carefully consider the full array of harms and benefits associated with alternative regulatory approaches.

An opportunity to explore new policy, funding, ownership models

My sense is that both of these studies raise valid points about the types of economic harm associated with different approaches to (de)regulating special access (and other telecommunications) markets.

I also believe that valuable perspective on this issue can be gained from a review of ASR Analytics’ estimates of the economic benefits resulting from BTOP investments in fiber infrastructure (some of which I discussed in a recent post).  Not only does the ASR study do a good job of applying prior knowledge and accepted methods in analyzing broadband-related economic impacts, it also suggests to me that, rather than getting caught up in the details of the Cooper/Singer and related debates, a more useful approach is to step back from the quantitative details of these dueling studies and consider broadband public policy from a “public infrastructure” perspective.

In a follow-up post I outline a research project designed to build on the knowledge base developed by ASR’s study of the Comprehensive Community Infrastructure (a.k.a., “middle mile fiber”) component of the BTOP program.

In addition, I’ve prepared several other posts that try to explain some of the threads of scholarship that inform my own view of how—especially in cases lacking sufficient competition—special access and last mile access networks can deliver the most social value if treated as public infrastructure.

An annotated list of links to these posts is provided below.  I’d encourage anyone involved and/or interested in policy debates related to issues such as special access, community broadband, network neutrality and universal service to review these posts and perhaps also explore the sources they refer to:

a)  the relevance of Modern Monetary Theory (a.k.a. Functional Finance) to policymaking related to federal financial support for investments in telecommunications and other infrastructure;

b) the demand-side analysis of infrastructure resources  laid out by Brett Frischmann in his 2012 book, Infrastructure: The Social Value of Shared Resources, and the Internet- and telecom-related policies it suggests;

c) the analytical framework developed by author Marjorie Kelly in her book Owning Our Future, which highlights key differences between what Kelly refers to as “generative” vs. “extractive” ownership models. One post reviews Kelly’s key concepts and considers AT&T as an example of extractive ownership of telecommunications infrastructure.  A second post considers how Kelly’s framework applies to the role of community-owned broadband networks in the Internet access sector, and suggests research questions related to this that I believe are worthy of further investigation.

Tags: , , , ,


Elizabeth Kirley’s Talk on Online Reputation and the Law

by

Elizabeth A. Kirley presented a talk for the Quello Center that addressed alternative approaches to protecting reputations online. Professor Adam Candeub served as a respondent. So much is said about this topic that it is brilliant to have a thoughtful and well-informed discussion of international agreements on human rights, national legal doctrines, and online reputation.

In the talk, entitled ‘Trashed: A Comparative Exploration of Law’s Relevance to Online Reputation’, Dr. Elizabeth Kirley draws on case studies to explore the cultural and historical influences that have resulted in very distinct legal regimes and political agendas. Her central thesis is that digital speech is sufficiently different in kind from offline speech that it calls for a distinctly 21st-century response to the harms it can inflict on our reputational privacy.

Elizabeth Kirley – Trashed: A Comparative Exploration of Law’s Relevance to Online Reputation from Quello Center on Vimeo.

Dr Elizabeth Kirley is a 2015–16 Postdoctoral Fellow at the Nathanson Centre for Transnational Human Rights, Crime and Security at Osgoode Hall Law School, York University in Toronto, and a frequent lecturer on issues raised by digital speech, technology crimes and robotic journalism. Recent research and presentation activities include the European University Institute, Florence; the Oxford Internet Institute, Oxford UK; the American Graduate School of Paris; École des hautes études commerciales de Paris; Sciences Po in Paris; Osnabrück University in Germany; and the Limerick School of Law, Ireland. She is a barrister and solicitor, called to the Ontario bar.

Professor Adam Candeub is on the Law Faculty at Michigan State University, and a Research Associate with the Quello Center. He was an attorney-advisor for the Federal Communications Commission (FCC) in the Media Bureau and previously in the Common Carrier Bureau, Competitive Pricing Division. From 1998 to 2000, Professor Candeub was a litigation associate for the Washington D.C. firm of Jones, Day, Reavis & Pogue, in the issues and appeals practice.

Tags: , , , , , , ,


A Reminder Why the Quello Center Net Neutrality Impact Study is Important

by

In the past week or so I’ve seen several articles that remind me how important the Quello Center’s empirically-grounded study of net neutrality impacts is for clarifying what these impacts will be—especially since net neutrality is one of those policy topics where arguments are often driven by ideology and/or competing financial interests.

As far as I can tell, this series of articles began with an August 25 piece written by economist Hal Singer and published by Forbes under the following headline: Does The Tumble In Broadband Investment Spell Doom For The FCC’s Open Internet Order? Per his Forbes bio, Singer is a principal at Economists Incorporated, a senior fellow at the Progressive Policy Institute, and an adjunct professor at Georgetown University’s McDonough School of Business.

Singer’s piece was followed roughly a week later by two op-ed pieces published on the American Enterprise Institute’s web site. The title of the first AEI piece, authored by Mark Jamison, was Title II’s real-world impact on broadband investment. This was followed a day later by Bronwyn Howell’s commentary Title II is hurting investment. How will – and should – the FCC respond?

What struck me about this series of op-ed pieces, published by economists and organizations whose theoretical models and policy preferences appear to favor unregulated market structures, was that their claims that “Title II is hurting investment” were all empirically anchored in Singer’s references to declines in ISP capital spending during the first half of 2015. As a member of the Quello Center’s research team studying the impacts of net neutrality, I was intrigued, and eager to dig into the CapEx data and understand its significance.

While my digging has only begun, what I found reminded me how much the communication policy community needs the kind of fact-based, impartial and in-depth empirical analysis the Quello Center has embarked upon, and how risky it is to rely on the kind of ideologically-driven analysis that too often dominates public policy debates, especially on contentious issues like net neutrality.

My point here is not to argue that there are clear signs that Title II will increase ISP investment, but rather that claims by Singer and others that there are already signs that it is hurting investment are not only premature, but also based on an incomplete reading of evidence that can be uncovered by careful and unbiased review of publicly available information.

I hope to have more to say on this topic in future posts, but will make a few points here.

The crux of Singer’s argument is his observation that capital spending declined fairly dramatically for a number of major ISPs during the first half of 2015, dragging down the entire sector’s spending for that period (though it’s not clear from the article, my sense is that Singer’s reference to “all” wireline ISPs refers to the industry’s larger players and says nothing about investment by smaller companies and the growing ranks of publicly and privately owned FTTH-based competitors). He then briefly reviews and dismisses potential alternative explanations for these declines, concluding that the only remaining logical cause is ISPs’ response to the FCC’s Open Internet Order (bolding is mine):

AT&T’s capital expenditure (capex) was down 29 percent in the first half of 2015 compared to the first half of 2014. Charter’s capex was down by the same percentage. Cablevision’s and Verizon’s capex were down ten and four percent, respectively. CenturyLink’s capex was down nine percent. (Update: The average decline across all wireline ISPs was 12 percent. Including wireless ISPs Sprint and T-Mobile in the sample reduces the average decline to eight percent.)..

This capital flight is remarkable considering there have been only two occasions in the history of the broadband industry when capex declined relative to the prior year: In 2001, after the dot.com meltdown, and in 2009, after the Great Recession. In every other year save 2015, broadband capex has climbed, as ISPs—like hamsters on a wheel—were forced to upgrade their networks to prevent customers from switching to rivals offering faster connections.

What changed in early 2015 besides the FCC’s Open Internet Order that can explain the ISP capex tumble? GDP grew in both the first and second quarters of 2015. Broadband capital intensity—defined as the ratio of ISP capex to revenues—decreased over the period, ruling out the possibility that falling revenues were to blame. Although cord cutting is on the rise, pay TV revenue is still growing, and the closest substitute to cable TV is broadband video. Absent compelling alternatives, the FCC’s Order is the best explanation for the capex meltdown.
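Singer’s argument thus turns on two simple metrics: the year-over-year change in capex, and “broadband capital intensity,” the ratio of capex to revenue. A minimal sketch of those calculations, using hypothetical figures rather than data from any ISP’s filings, looks like this:

```python
# Minimal sketch of the two metrics Singer's argument relies on: year-over-year
# change in capex, and "broadband capital intensity" (capex divided by revenue).
# All dollar figures below are hypothetical, chosen only to illustrate the
# calculations; they are not taken from any ISP's financial statements.

def yoy_change(current: float, prior: float) -> float:
    """Year-over-year percentage change."""
    return (current - prior) / prior * 100

def capital_intensity(capex: float, revenue: float) -> float:
    """Capex as a percentage of revenue."""
    return capex / revenue * 100

# Hypothetical first-half figures for a single ISP, in $ millions
capex_1h2014, capex_1h2015 = 10_000, 8_800
revenue_1h2014, revenue_1h2015 = 64_000, 66_000

print(f"Capex change:      {yoy_change(capex_1h2015, capex_1h2014):+.1f}%")
print(f"Capital intensity: {capital_intensity(capex_1h2014, revenue_1h2014):.1f}% -> "
      f"{capital_intensity(capex_1h2015, revenue_1h2015):.1f}%")
```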

I haven’t had a chance to carefully review the financial statements and related earnings material of all the companies cited by Singer, but I did take a quick look at this material for AT&T and Charter since, as he notes, they experienced by far the largest percentage drops in spending. What I found doesn’t strike me as supporting his conclusion that the decline was driven by network neutrality. Instead, in both cases it seems to pretty clearly reflect the end of major investment projects, along with related industry trends that appear to have nothing to do with the FCC’s Open Internet order.

My perspective on this is based on statements made by company officials during their second quarter 2015 earnings calls, as well as capex-related data in their financial reporting.

During AT&T’s earnings call, a Wall Street analyst asked the following question: “[T]he $18 billion in CapEx this year implies a nice downtick in the U.S. spending, what’s driving that? Are you finding that you just don’t need to spend it or are you sort of pushing that out to next year?” In his response to the question, John Stephens, the company’s CFO, made no mention of network neutrality or FCC policy decisions. Instead he explained where the company was in terms of key wireless and wireline strategic network investment cycles (bolding is mine):

Well, I think a couple of things. And the simplest thing is to say [is that the] network team did a great job in getting the work done and we’ve got 300, nearly 310 million POPs with LTE right now. And we are putting our spectrum to use as opposed to building towers. And so that aspect of it is just a utilization of spectrum we own and capabilities we have that don’t require as much CapEx. Secondly, the 57 million IP broadband and what is now approximately 900,000 business customer locations passed with fiber. Once again, the network guys have done a great job in getting the Project VIP initiatives completed. And when they are done…the additional spend isn’t necessary, because the project has been concluded not for lack of anything, but for success.

Later on in the call, another analyst asked Stephens “[a]s you look out over the technology roadmap, like 5G coming down the pipeline, do you anticipate that we will see another period of elevated investment?”

While Stephens pointed to a potential future of moderated capital spending, he made no reference to network neutrality or FCC policy, focusing instead on the investment implications of the company’s (and the industry’s) evolution to software-defined networks.

I would tell you that’s kind of a longer term perspective. What we are seeing is our move to get this fiber deep into the network and getting LTE out deep into the wireless network and the solutions that we are finding in a software-defined network opportunity, we see a real opportunity to actually strive to bring investments, if you will, lower or more efficient from historical levels. Right now, I will tell you that this year’s investment is going to be in that $18 billion range, which is about 15%. We are certainly – we are not going to give any guidance with regard to next year or the year after. And we will give an update on this year’s guidance, if and when in our analyst conference if we get that opportunity. With that being said, I think there is a real opportunity with some of the activities are going on in software-defined networks on a longer term basis to actually bring that in capital intensity to a more modest level.

Charter’s large drop in capital spending appears to be driven by a similar “investment cycle” dynamic. During its 2Q15 earnings call, CFO Christopher Winfrey noted that Charter’s year-over-year decline in total CapEx “was driven by the completion of All-Digital during the fourth quarter of last year,” referring to the company’s migration of its channel lineup and other content to an all-digital format.

A review of the company’s earnings call and financial statements suggests that a large portion of the “All-Digital” capital spending was focused on deploying digital set-top boxes to Charter customers, resulting in a precipitous decline in the “customer premise equipment” (CPE) category of CapEx. According to Charter’s financial statements, first-half CPE-related CapEx fell by more than half, or $341 million, from $626 million to $285 million. Excluding this sharp falloff in CPE spending driven by the end of Charter’s All-Digital conversion, the remainder of the company’s capital spending was actually up 3% during the first half of 2015. And this included a 7% increase in spending on “line extensions,” which Charter defines as “network costs associated with entering new service areas.” It seems to me that, if Charter were concerned that the Commission’s Open Internet order would weaken its business model, it would be cutting rather than increasing its investment in expanding the geographic scope of its network.
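To illustrate the arithmetic at work here, the sketch below decomposes first-half capex into CPE and non-CPE components. The CPE figures are the ones cited above from Charter’s financial statements; the non-CPE figures are hypothetical placeholders chosen only to reproduce the roughly 29% total decline and 3% non-CPE increase described in this post, not actual Charter data.

```python
# Minimal sketch of the CPE vs. non-CPE decomposition of Charter's first-half
# capex discussed above. CPE figures ($626M falling to $285M) are cited above
# from Charter's financial statements; non-CPE figures are hypothetical
# placeholders chosen only to reproduce the approximate relationships
# described in this post.

def pct_change(new: float, old: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

cpe_2014, cpe_2015 = 626, 285           # CPE capex, $ millions (cited above)
non_cpe_2014, non_cpe_2015 = 498, 513   # hypothetical non-CPE capex, $ millions

total_2014 = cpe_2014 + non_cpe_2014
total_2015 = cpe_2015 + non_cpe_2015

print(f"CPE capex:     {pct_change(cpe_2015, cpe_2014):+.0f}%")          # about -54%
print(f"Non-CPE capex: {pct_change(non_cpe_2015, non_cpe_2014):+.0f}%")  # about +3%
print(f"Total capex:   {pct_change(total_2015, total_2014):+.0f}%")      # about -29%
```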

To understand the significance of Charter’s spending decline, I think it’s important to note that its 29% decline in first half total CapEx was driven by a 54% decline in CPE spending, and that the company’s non-CPE investment—including line extensions—actually increased during that period.  I found it odd that, even as he ignored this key dynamic for Charter, Singer seemed to dismiss the significance of Comcast’s CapEx increase during the same period by noting that it was “attributed to customer premises equipment to support [Comcast’s] X1 entertainment operating system and other cloud-based initiatives.”

I also couldn’t help noticing that, in his oddly brief reference to the nation’s largest ISP, Singer ignored the fact that every category of Comcast’s capital spending increased by double digits during the first half of 2015, including its investment in growth-focused network infrastructure, which expanded 24% from 2014 levels.  Comcast’s total cable CapEx was up 18% for the first half of the year, while at Time Warner Cable, the nation’s second largest cable operator, it increased 16%.

While these increases may have nothing to do with FCC policy, they seem very difficult to reconcile with Singer’s strongly asserted argument, especially when coupled with the above discussion of company-specific reasons for the large CapEx declines at AT&T and Charter. As that discussion suggests, the reality behind aggregated industry numbers (especially when viewed through a short-term window of time) is often more complex and situation-specific than our economic models and ideologies would like it to be. This may make our research harder and messier to do at times, but certainly not less valuable. It also speaks to the value of longitudinal data collection and analysis, to better understand both short-term trends and those that only become clear over a longer term. That longitudinal component is central to the approach being taken by the Quello Center’s study of net neutrality impacts.

One last general point before closing out this post. I didn’t see any reference in Singer’s piece or the AEI-published follow-ups to spending by non-incumbent competitive providers, including municipally and privately owned fiber networks that are offering attractive combinations of speed and price in a growing number of markets around the country. While this category of spending may be far more difficult to measure than investments by large publicly traded ISPs, it may be quite significant in relation to public policy, given its potential impact on available speeds, prices and competitive dynamics.

Expect to see more on this important topic and the Quello Center’s investigation of it in later posts, and please feel free to contribute to the discussion via comments on this and/or future posts.

Tags: , , , , ,