From 24 to 28 May, thousands of communication scholars from all over the world gathered for the 68th International Communication Association Conference in Prague, Czech Republic. The College of Communication Arts & Sciences had a particularly strong presence at the conference, with more than 80 faculty and students presenting their research. The Quello Center’s Assistant Director Bibi Reisdorf and Research Fellow Laleah Fernandez were among the presenters, sharing some of the results from the Quello Search Project.
As part of the large program, the team working on the Quello Search Project, Grant Blank (Oxford Internet Institute, University of Oxford), Elizabeth Dubois (Department of Communication, University of Ottawa), Bill Dutton, Laleah Fernandez, and Bibi Reisdorf, put together a panel on “Personalization, Politics, and Policy: Cross-National Perspectives”. Despite the early morning start (8am) on the day following all the big ICA receptions, a good crowd turned up to hear about our results pertaining to how people make use of a diverse range of media to find information on political matters. The papers presented in this panel ranged from the personalization of search and a critical discussion of algorithmic literacy, to an exploration of “the vulnerable” (i.e. those who have low search skills and little interest in politics) and the policy implications of citizens’ complex media habits. The panel presentations were followed by a critical discussion of the presented results by Cornelius Puschmann, Hans Bredow Institute for Media Research.
Immediately after this early morning panel, Bibi Reisdorf also took part in a panel on “Filter Bubbles: From Academic Debate to Robust Empirical Analysis”, which she co-organized together with Anja Bechmann, Aarhus University, and Oscar Westlund, University of Gothenburg & Volda University College. This panel paid specific attention to empirical evidence of the extent (or lack thereof) of filter bubbles around the globe. Despite different foci and datasets, all four panelists, Anja Bechmann, Aarhus University, Axel Bruns, Queensland University of Technology, Neil Thurman, LM University Munich, and Quello’s Bibi Reisdorf, presented findings that supported results from our Quello Search Project, which showed that although filter bubbles and echo chambers do exist, the magnitude is largely overstated and the resulting panics are unnecessary and unhelpful. The results were discussed and responded to by MSU’s very own newest ICA Fellow, Prof. Esther Thorson, who pointed out that this type of research needs to be more closely investigated and critically evaluated in light of existing communication theories, such as Uses and Gratifications or Confirmation Bias, to name just a few.
Overall, the conference was a great success for the Quello team, who also participated in a pre-conference workshop on survey design and survey questions on internet use organized by Prof. Eszter Hargittai, University of Zurich. In addition, we took a few hours each to enjoy beautiful Prague and the amazing culinary treats, including, of course, the fantastic beer and wine that can be found in this beautiful region of Europe.
Now, back in East Lansing, the team is busy finishing up a few book chapters and journal articles that revolve around the issues that were discussed at the ICA conference. Our next big conference will be TPRC in Washington, DC in September, where Laleah Fernandez will present some of our exciting results from the Detroit Study.
November 21st, 2017
August 7th, 2017
July 24th, 2017
One major outcome of the new faculty joining the Department of Media and Information this academic year has been the coming together of a critical mass of very strong faculty key to social scientific research on the digital age. Suddenly, the Quello Center can enjoy a dramatic rise in the strength of faculty who can inform research, policy, and practice central to the Center’s focus on policy for the digital age.
To ensure that these faculty are visible and recognized from afar, the Center has begun a new category of faculty, entitled Quello Research Fellows. The first four Fellows include three new faculty, Keith Hampton, Natascha Just, and David Ewoldsen, and one long-term member of the Quello faculty, Johannes Bauer. They bring major strengths in Internet studies, sociology, economics, social psychology, and policy into the Quello Center’s multidisciplinary team.
Together with our research team, associate faculty across the university, and graduate student researchers, these new Quello Research Fellows boost the capacity of the Quello Center to tackle an ever-wider range of research of importance to policy and practice for the digital age.
I fully expect this new class of faculty to help inform and lead debate over policy and practice that responds to the societal implications of the Internet and related digital media, communication, and information technologies.
From discussions in courses and within the Quello Center Advisory Board, the Center has been developing a set of key issues tied to media, communication, and information policy and practice. We’d welcome your thoughts on issues we’ve missed, or on issues noted here that do not merit more sustained research and debate. Your feedback on this list would be most welcome, and will be posted as comments on this post.
I. Innovation-led Policy Issues
New Developments around Robotics and Artificial Intelligence: What are the implications for individual control, privacy, and security? Security is no longer so clearly a cyber issue as cyber security increasingly shapes the physical world of autonomous vehicles, drones, and robots.
Internet of Things (IoT): With tens of billions of things moving online, how can individuals protect their privacy, safety, and well-being as their environments are monitored and controlled by their movement through space? There are likely to be implications for urban informatics, transportation and environmental systems, systems in the household, and wearables (see below). A possible focus within this set would be on developments in households.
Wearables: What appears to be an incremental step in the IoT space could have major implications across many sectors, from health to privacy and surveillance.
The Future of Content Delivery: Content delivery, particularly around broadcasting of film and television, in the digital age: technology, business models, and social impact of the rapidly developing ecosystem, such as on localism, diversity, and quality.
Free (and Open Source) Software: The prominence and future of free as well as open source software continues to evolve. Are rules, licensing, and institutional support, such as around the Free Software Foundation, meeting the needs of this free software community?
Big Data: How can individuals protect their privacy in the age of computational analytics and increasing capture of personal data and mass surveillance? What policies or practices can be developed to guide data collection, analysis, and public awareness?
Encryption: Advances in encryption technologies at a time of increasing threats to the privacy of individual communications, such as email, could lead to a massive uptake of tools to keep private communications private. How can this development be accelerated and spread across all sectors of the Internet community?
Internet2: Just as the development of the Internet within academia has shaped the future of communications, so might the next generation of the Internet – so-called Internet2 – have even greater implications in shaping the future of research and educational networking in the first instance, but public communications in the longer-term. Who is tracking its development and potential implications?
Other Contending Issues: Drones, Cloud computing, …
II. Problem-led Initiatives
Transparency: Many new issues of the digital age, such as concerns over privacy and surveillance, are tied to a lack of transparency. What is being done with your data, by whom, and for what purposes? In commercial and governmental settings, many public concerns could be addressed to a degree through the provision of greater transparency, and the accountability that should follow.
Censorship and Internet Filtering: Internet filtering and censorship was limited to a few states at the turn of the century. But over the last decade, fueled by fear of radical extremist content, and associated fears of self-radicalization, censorship has spread to most nation states. Are we entering a new digital world in which Internet content filtering is the norm? What can be done to mitigate the impact on freedom of expression and freedom of connection?
Psychological Manipulation: Citizens and consumers are increasingly worried about the ways in which advertising, (fake) news, social media, and more can manipulate them into voting, buying, protesting, or otherwise acting in ways that the purveyors of the new propaganda of the digital age would like. While many worried about propaganda around the mass media, should comparable attention be given to the hacking of psychological processes by the designers of digital media content? Is this a critical focus for consumer protection?
(In)Equities in Access: Inequalities in access to communication and information services might be growing locally and globally, despite the move to digital media and ICTs. The concept of a digital divide may no longer be adequate to capture these developments.
Privacy and Surveillance: The release of documents by Edward Snowden has joined with other events to draw increasing attention to the threats of mass unwarranted surveillance. It has been an enduring issue, but it is increasingly clear that developments heretofore perceived to be impossible are increasingly feasible and being used to monitor individuals. What can be done?
ICT4D or Internet for Development: Policy and technology initiatives in communication to support developing nations and regions, both in emergency responses, such as in relation to infectious diseases, or around more explicit economic development issues.
Digital Preservation: Despite discussion over more than a decade, this issue merits more attention, and stronger links with policy developments, such as the ‘right to be forgotten’. ‘Our cultural and historical records are at stake.’
III. Enduring Policy Issues Reshaped by Digital Media and Information Developments
Media Concentration and the Plurality of Voices: Trends in the diversity and plurality of ownership, and sources of content, particularly around news. Early work on media concentration needs new frameworks for addressing global trends on the Web, with new media, in print media, automated text generation, and more.
Diversity of Content: In a global Internet context, how can we reasonably quantify or address issues of diversity in local and national media? Does diversity become more important in a digital age in which individuals will go online or turn to satellite services if the mainstream media in a nation ignore content of interest to them?
Freedom of Expression: New and enduring challenges to expression in the digital age.
IV. Changing Media and Information Policy and Governance
Communication Policy: A rewrite of the 1934 Communications Act, last updated in 1996. This is unlikely to occur in the current political environment, but is nevertheless a critical focus.
Universal Access v Universal Service: With citizens and consumers dropping some traditional services, such as fixed line phones, how can universal service be best translated into the digital age of broadband services?
Network Neutrality: Should there be Internet fast lanes and more? Efforts to ensure the fair treatment of content from multiple providers through regulation have been among the more contentious issues in the USA. To some, the issue has been ‘beaten to death’, but it was brought to life again through the regulatory initiatives of FCC Chairman Wheeler, and more recently by the new Trump Administration, under which the fate of net neutrality is uncertain. Can we research the implications of this policy?
Internet Governance and Policy: Normative and empirical perspectives on governance of the Internet at the global and national levels. This is a timely issue, critical to the future of the Internet and a global information age, given the rise of national Internet policy initiatives.
Acknowledgements: In addition to the Quello Advisory Board, special thanks to some of my students for their stimulating discussion that surfaced many of these issues. Thanks to Jingwei Cheng, Bingzhe Li, and Irem Yildirim, for their contributions to this list.
In an earlier post I described the BTOP Comprehensive Community Infrastructure (CCI) program as a “very good investment of public funds.” My reasons were twofold, the first one being that it expanded the availability of high-speed connectivity in underserved areas, including more than 42,000 miles of new and 24,000 miles of upgraded fiber infrastructure. The second was that research by ASR Analytics suggests that the CCI program accomplished this expansion in a way that addresses both forms of economic harm claimed by advocates on both sides of the special access regulation debate. As a result, I suggested “that the federal government consider expanding its CCI investment in geographic areas that the FCC’s special access data collection project indicates still face a lack of competitive options and an abundance of excess-profit-extracting prices in the special access market.”
In five related posts, I considered a number of issues and perspectives that inform this policy suggestion, including the following:
In this post I’m going to:
I’ll start with an excerpt from an earlier post:
According to Table 7 on pg. 15 of ASR’s final report, the total amount (including both federal grants and matching funds) budgeted for 109 CCI projects was $3.9 billion. The table also indicates that, at the time the study was done, these projects had connected 21,240 CAIs, at a budgeted cost of $184,141 per CAI. Assuming federal grants paid for 80% of this total cost, the average federal grant amount per CAI would be in the neighborhood of $147,300.
Table 13 on pg. 34 of the report shows the changes in subscription speeds and pricing experienced by the 86 CAI locations providing this information to ASR. The table shows very large increases in speed and, depending on the category of CAI, dramatic 94-96% average reductions in per-Mbps pricing. Table 14 on pg. 36 uses these reported changes in speed and price to extrapolate CAI cost savings from switching to CCI-provided fiber connections. Averaged across all CAI categories, the per-CAI annual savings amounted to $236,151.
This means that, in just one year, the average CAI saved 28% more in operating costs ($236,151) than the total capital cost ($184,141) required to connect it to a CCI fiber network, and 60% more than the federal government’s share of that investment ($147,300). Based only on these direct social costs and benefits, I’d consider this a good investment of public funds.
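A quick back-of-the-envelope check, using only the dollar figures cited above from ASR’s Tables 7 and 14 (the 80% federal share is the same assumption stated earlier), confirms these ratios:

```python
# Sanity check of the per-CAI figures from ASR's Tables 7 and 14
# (dollar amounts taken from the report as cited above).
per_cai_cost = 184_141            # budgeted capital cost per connected CAI (Table 7)
annual_savings = 236_151          # average per-CAI annual savings (Table 14)

# Assumed 80% federal grant share, rounded to the nearest $100
federal_share = round(per_cai_cost * 0.80, -2)

print(f"Federal share per CAI: ~${federal_share:,.0f}")
print(f"Savings vs. total capital cost: {annual_savings / per_cai_cost - 1:.0%}")
print(f"Savings vs. federal share: {annual_savings / federal_share - 1:.0%}")
```

This reproduces the ~$147,300 federal share and the 28% and 60% figures quoted above.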
But these direct cost savings to CAIs were not the only impacts of the federally supported BTOP fiber deployment program that were considered by ASR. It also estimated economic benefits driven by increased broadband availability in areas newly reached by the BTOP fiber networks. Using matched pair county-level analysis, ASR found that CCI-impacted counties achieved broadband penetration two percentage points higher than control counties. Based on this, ASR derived estimates of economic benefits of the $3.9 billion in CCI network investments using a number of widely accepted economic impact models. These impacts included:
These findings of the ASR Analytics study suggest that:
Building on the ASR Analytics evaluation study
As noted above, ASR’s BTOP evaluation study used matched pair analysis of CCI-impacted counties to compare their growth in broadband availability to that of counties that were comparable on key control variables. ASR used NTIA availability data for multiple time periods to measure and compare these changes in availability (for more details see Appendix D of the ASR final BTOP evaluation report).
As discussed above, ASR found that, on average, the increase in broadband availability for CCI-impacted counties was two percentage points higher than in control counties, using the then-current broadband speed threshold of 3Mbps downstream service. ASR then used this differential to estimate and extrapolate economic impact variables (e.g., GDP, job growth, income) using the broadband impact models referenced above.
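The matched-pair logic behind that two-percentage-point differential can be sketched as a simple difference-in-differences calculation. The county figures below are entirely hypothetical, invented for illustration; they are not ASR’s data:

```python
# Hypothetical sketch of a matched-pair availability comparison.
# Each tuple holds broadband availability percentages for one pair of counties:
# (impacted_before, impacted_after, control_before, control_after).
# All numbers are invented for illustration only.
pairs = [
    (55.0, 72.0, 56.0, 70.0),
    (40.0, 61.0, 41.0, 58.0),
    (62.0, 80.0, 60.0, 77.0),
]

# Difference-in-differences: availability growth in the CCI-impacted county
# minus availability growth in its matched control county
diffs = [(ia - ib) - (ca - cb) for ib, ia, cb, ca in pairs]
avg_differential = sum(diffs) / len(diffs)
print(f"Average availability differential: {avg_differential:.1f} percentage points")
```

Averaging this differential across all matched pairs yields the kind of summary statistic that ASR then fed into the broadband economic impact models.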
In light of ASR’s well-documented research and its promising though preliminary findings, an effort to update and expand on the strong foundation it and NTIA have built strikes me as timely, especially with special access policy questions getting focused attention from the FCC. More specifically, what I’d propose is to:
1. Use updated FCC availability data to explore how the matched-county broadband availability differential has evolved over a longer period of time.
2. Examine this broadband availability differential using speed thresholds higher than the 3 Mbps downstream level used by ASR, including the FCC’s current threshold of 25 Mbps downstream and 3 Mbps upstream.
3. For counties for which data is available, add to the matched pair comparison an analysis of broadband adoption data derived from the Census Bureau’s American Community Survey (ACS). Beginning with 2013 data, this data is being released annually for geographies with populations greater than 65,000, and should be available for virtually all counties on a blended five-year basis starting in 2017.
4. Examine and compare actual county-level economic indicators (e.g., County Business Patterns and other datasets available from the Census Bureau and other sources) for the matched pair counties. The goal here would be to explore the extent to which the economic impacts predicted by the models used by ASR actually occurred and/or whether there were other impacts suggested by these economic indicators.
5. Where notably large variations are found among the matched pair differentials in broadband availability and/or penetration, and/or in actual economic impact variables, explore potential reasons for these differentials based on qualitative and/or quantitative analysis of CCI projects and CCI-impacted counties exhibiting these large variations. The goal here would be to extract additional insights and lessons learned regarding how CCI networks can best deliver social value, as well as the contextual factors impacting how effective different approaches are in achieving that value.
Factors to be considered in #5 might include the ownership and management models employed by CCI grant recipients, the specific approach they take to providing “open access” to their fiber networks, as well as other policies and strategies they employ in relation to wholesale and last mile providers, CAIs, local community development programs, and local economic, demographic and institutional factors.
To help expand service within unserved and underserved areas…[e]ach of the grantees in the evaluation study sample implemented at least one strategy, and in many cases a combination of strategies, to ensure open access to the BTOP-funded network by third-party service providers. For example, the research and education network and the healthcare network in Arkansas established a partnership to deploy new and upgraded fiber and colocation facilities. Merit Network in Michigan offered indefeasible right-of-use agreements to private third-party service providers. MassTech fostered competition by helping CAIs compare services and prices offered by third-party providers that use the BTOP-funded network.
Similarly, different CCI grantees adopted different usage and pricing policies to support positive impacts of their network investments. For example, as described on pages 3-4 of its case study of Merit Network, a Michigan-based CCI grantee owned by member institutions of higher learning, ASR explained that:
The Merit network connects institutions of higher learning and facilitates collaboration by allowing them to freely connect to other institutions on the network, or access on-net services at speeds up to 1 Gbps. This allows institutions to collaborate on research, and to cut costs by sharing services, including hosting. Merit provides some content over this network as well, including Internet2. These services give faculty, staff, and students fast and reliable access to educational and research opportunities…The free on-net services provide incentive for CAIs to create wide area networks (WAN) using Merit fiber…[and] cost-savings [and greater efficiency] for any CAI organization with multiple locations.
Merit is an example of the Research and Education Network (REN) category of CCI grantee. Owned by member universities, it exhibits a range of generative characteristics, including support for training, collaboration, and feedback among its user community:
[Regular] opportunities for Members to learn from each other and share best practices in the networking arena. Forums include the Michigan Information Technology Executive (MITE) Forum, Merit Joint Technical Staff (MJTS), Networking Summit, Bring Your Own Device (BYOD) Summit, and the Merit Member Conference (MMC).
The Merit Advisory Council (MAC) has a direct voice to our Board of Directors and leadership through which feedback and recommendations are provided.
The Merit Services Innovation Group enables Members to provide suggestions and feedback regarding current and future services.
Merit facilitates collaboration between Members and regularly contributes staff and resources to educational and research activities.
Professional Learning events are tailored to the needs of our Members and are offered at reduced cost.
One starting point for considering the impacts of differential policies, structures, strategies and programs among CCI grantees would be a careful review of the dozen CCI case studies conducted by ASR. Another would be using the expanded matched pair county analysis described above to identify differences in the availability, penetration and economic impact variables across CCI projects.
In my view, a research project along these general lines would: 1) help maximize the ongoing value provided by existing CCI projects; 2) provide valuable guidance for consideration of future programs designed to build on the success of and lessons learned from the BTOP program; and 3) shed light on policy debates and options related to special access and perhaps other communication and infrastructure policy issues.
April 16th, 2016
In an earlier post I discussed Marjorie Kelly’s framework for distinguishing “generative” vs. “extractive” ownership models. In this post, I’ll try to further clarify this distinction by considering some key characteristics of community-owned local access networks in relation to Kelly’s framework (in my next post I’ll shift my focus to middle mile and special access fiber).
To get started, I’ll reiterate one of my key working premises, that when an extractive ownership model is combined with a lack of competitive pressure or corrective regulation (as is currently the situation in much of the nation’s local access and special access markets), service providers can achieve high levels of financial extraction. This can lead to broad and substantial economic harm, especially in the case of core infrastructure like the Internet and telecommunications in general. That’s because these infrastructure resources tend to be “spillover rich” when managed in a non-discriminatory way, but less so when constrained by dominant ISPs’ internal monetization priorities, as I discussed in the latter section of an earlier post, and which Brett Frischmann addressed in far more depth in his book, Infrastructure: The Social Value of Shared Resources.
As discussed in my last post, I view the longstanding focus on shareholder returns at the expense of network upgrades by AT&T, the nation’s largest ILEC, as one example of this dynamic. Another relates to the nation’s leading cable operators. On one hand these companies have been able—thanks to their much higher speeds relative to DSL—to capture nearly all net broadband customer growth in recent years (see the table here for 2015 data). But, at the same time, these same companies have consistently been ranked at or near the bottom among all U.S. industries in customer satisfaction surveys. Simple economic logic tells us that, if there was minimally healthy competition in the market for these higher-speed broadband connections, this combination of strongly positive market share gains with strongly negative customer satisfaction would be a very unlikely outcome. As the title of Susan Crawford’s 2013 book points out, these growing ranks of customers signing up for and retaining cable modem service are, in a very practical sense, a Captive Audience.
Later on in this post I’ll suggest some research questions I believe are worth pursuing related to the operation and impacts of community-owned networks and the relevance of Kelly’s ownership framework to the broadband access market. But before I do, I want to consider how some of Kelly’s “generative” characteristics apply in theory and practice to community broadband.
In applying Kelly’s framework to local access ownership models, it makes sense to start with Purpose, which Kelly describes as the most fundamental design element.
Having studied a number of community-owned broadband networks, I’d say that all or most were undertaken with a purpose along the following lines: to provide local households, businesses, and public service organizations (e.g., schools, healthcare providers, public safety, etc.) with affordable, reliable, symmetrical, high-capacity broadband connectivity and related services; to support their ability to thrive in an increasingly competitive and knowledge-based global economy; and to provide decent-paying jobs to local citizens who, in turn, provide high-quality customer service to the network’s customers.
While privately owned networks, including those owned by publicly-traded corporations, might claim to have the same or a very similar purpose, years of listening to earnings calls of these publicly-traded companies tells me that such goals are, at best, secondary priorities to the overriding goal of maximizing shareholder value, and ones that will be jettisoned if they conflict with the latter goal. And if shareholder value-maximizing decisions are not made with sufficient speed, vigor and clarity of (extractive) purpose, a publicly-traded firm’s management is likely to face intense pressure from investors, particularly those focused on a relatively short time horizon for measuring shareholder value.
It’s also clear to me that compared with most publicly-traded cablecos and telcos, community-owned networks tend to have locally Rooted Membership (vs. the Absentee Ownership characterized by publicly-traded stocks); Mission-Controlled Governance focused on achieving benefits for the local community (vs. Governance by Markets, stock price and related measures of financial profitability); and tend to function as part of local and national Ethical Networks focused on supporting sustainable community development (vs. Commodity Networks geared toward maximizing financial extraction). And, as discussed further below, while the capital intensive nature of last mile access networks makes reliance on Stakeholder Finance challenging, there are models developing on this front as well.
A mix of generative characteristics and results
The general family of community-owned networks includes both municipal ownership (often through an existing municipal power utility) and end-user cooperatives (often through an existing rural electric co-op). While there are legal, organizational and financial differences between these ownership models, my preliminary research suggests these aren’t large enough to significantly alter the fundamental ways in which they differ from ownership by publicly-traded cable and telephone companies.
That being said, different combinations of generative characteristics may yield somewhat different sets of strengths and weaknesses, with these sometimes impacted by other factors, including relevant state laws and local regulations, existing institutional relationships and expertise, local market dynamics, and the mix of stakeholders supporting the project.
The mix of generative characteristics and situational factors can also impact how a community network evolves over time, and whether its particular model is sustainable within the environment in which it has taken root.
For example, in the case of municipally-owned networks, the departure of strong founding project leadership has sometimes led to a migration of decision-making to political leaders lacking appropriate industry expertise, particularly if the network was deployed in a community that lacked an existing public utility that was well-managed and enjoyed the loyalty and respect of its local customer base. In my view this type of management transition led to problems for the community-owned network in Burlington, VT, and highlights a potential point of design weakness for municipally-owned networks lacking a well-established and sufficiently independent, professional, non-political utility management unit.
In other cases, management of a municipally-owned network has experienced disruptive discontinuities when local political leadership has changed. This seems to be most likely if such change occurs when the network and/or city is facing financial or other challenges, and/or when candidates or newly elected officials view the shortcomings of their predecessors’ network project as a useful rallying point for mobilizing political support. This also supports the notion that, once launched, the ongoing management of a community network needs to be sufficiently sheltered from day-to-day and election-to-election political pressures. My research suggests that this is most readily achieved in communities that already have some form of municipal utility infrastructure (or set up a strong-enough one when a network project is launched).
Similarly, the interaction of political dynamics among multiple jurisdictions has sometimes complicated and/or delayed decision-making, adding to the challenges of addressing unexpected financial or operational problems. My sense is that this has been a factor for the multi-city UTOPIA fiber network in Utah, one that was made even more difficult by state restrictions on the network’s ability to provide retail services.
This latter point highlights the significance of state regulation as a factor that can influence which generative characteristics are likely to be most effective (or even legal) in achieving a project’s internal and external goals. In the case of UTOPIA, the retail-service prohibition presented serious and not-well-understood challenges and risks related to marketing, finances, technology and overall management of a wholesale-only local access enterprise. As those risks became more clear with time, the task of addressing them was made even more challenging by the project’s multi-city management and financial structure. As a result, this pioneering and arguably over-ambitious project, launched a dozen years ago, has become the perennial poster boy used by critics to make their case against community broadband.
In my view, the risks related to local political dynamics suggest that communities without existing public utilities should take these risks very seriously. While establishing a strong and sufficiently independent municipal utility structure is one option, another would be to adopt a form of cooperative structure (e.g., similar to the rural telephone and electric cooperatives common in very rural areas). This would help ensure that the network is responsive to end-users rather than to local politicians with many and often conflicting priorities and, in some cases, focused too much on the next election and lacking appreciation for the management requirements of the network. That being said, cooperative managers are not immune from losing touch with their members’ needs, nor can these members always be relied upon to wisely exercise the rights and responsibilities of their membership.
These challenges highlight the value of initial and sustainable stakeholder “buy-in” for a community network to succeed, a factor that relates to Kelly’s concepts of Rooted Membership and Mission-Controlled Governance.
The value of building and maintaining stakeholder buy-in seems especially important during a network’s early years, when the bulk of construction is underway. This is because costs are especially high during this startup phase, while revenues are just beginning to ramp up. Case study research suggests that it’s during this startup period that incumbents have the most leverage to mobilize their considerable resources to weaken both the economic viability and credibility of community networks.
Given its role as core communication infrastructure, a community-owned broadband network is a resource likely to impact virtually every organization in a community, including local government, public safety, education, healthcare, non-profits, and businesses both large and small. This suggests that important elements of generative structure (e.g., Rooted Membership and Mission-Controlled Governance) will be closely tied to how these various stakeholders (as well as residential users) are involved in setting goals and priorities and the decision-making processes related to network management, build-out plans, resource allocation, service development, pricing, etc. This governance issue also relates to the Ethical Network element of generative ownership, as reflected in the relationships among community leaders, their local constituencies and their counterparts in other communities that have invested in a community-owned network or are considering such an investment.
Stakeholder finance: challenging but potentially fortifying
Deploying a community broadband network that relies on Stakeholder Finance strikes me as more challenging than Kelly’s other generative design elements, in large part because the high upfront cost and capital-intensive nature of communication networks has typically required access to public debt markets.
One of the clearest and apparently successful examples of local stakeholder financing is Vermont’s ECFiber, now officially known as the East Central Vermont Telecommunications District.
As the following excerpt from ECFiber’s website indicates, its decision to initially rely on local stakeholder financing was made out of necessity:
On Town Meeting Day 2008, 24 towns voted to join ECFiber…In August 2008, 23 towns signed the Inter-local Contract and by early September, the initiative’s underwriter, Oppenheimer & Co., had pledges of $70 million. One week later the international financial markets collapsed, taking ECFiber’s initial funding effort with it.
ECFiber then submitted several funding proposals under the American Recovery and Reinvestment Act (stimulus program), but with no operating history at that point, we were edged out by competing proposals from other local companies.
But undaunted, ECFiber returned to the Vermont roots of self-reliance and initiated our current program of grass-roots funding. With the advice of local counsel, ECFiber developed a program of issuing promissory notes in a private placement offering. The notes are offered in $2500 units. The first round of financing, in January 2011, raised $912,000, which enabled us to build our first 20+ mile loop… Additional rounds of financing have brought total investment to nearly $5 million…It is ECFiber’s intention, at some suitable point, to return to the capital markets to seek sufficient funding to build out the entire network in all member towns.
According to a March 11, 2016 press release, ECFiber has reached the point where it is ready to augment its initial base of local stakeholder financing via institutional capital markets:
ECFiber…plans to activate 110 miles of network in 2016 and build an additional 250 miles in 2017. “Working with bond underwriters, we believe ECFiber has reached the point in its financial development that allows us to access institutional capital markets for the first time in 2016,” says Irv Thomae, District Chairman.
While the financing model pioneered by ECFiber may lead to slower network buildout, it may strengthen the Stakeholder Financing element of its generative ownership design, since most of its initial funding came from community members with a three-pronged interest in its success—as customers, as local community members, and as direct financial investors.
This underscores a broader and important point: community broadband planners are likely to increase their chances for success if, from the beginning, they keep in mind all elements of Kelly’s generative ownership structure, including how they interact with each other to support the project’s Living Purpose.
To a large extent, all of Kelly’s ownership characteristics relate to the effective and sustainable harnessing of stakeholder support and participation. As both successful and unsuccessful community broadband projects have demonstrated, strong and sustained support from community stakeholders provides a solid—and perhaps the most essential—foundation upon which to build a community network.
As noted above, this foundation has proven to be especially important during a project’s startup phase, when learning curves and financial pressures abound, and when community networks are likely to be most vulnerable to well-financed political, legal and predatory pricing attacks by incumbent service providers. The more fully and firmly that stakeholder support is embedded in a community network’s design, the more likely it is to weather these startup storms and any squalls that might follow in later years. And the more likely it will be to remain focused on prioritizing the social benefits and community development goals that give it the Living Purpose that distinguishes it from the financial priorities of extractive ownership models.
More research can help
The existing body of research on community broadband networks tends to be heavily polarized and somewhat anecdotal, with proponents focusing on success stories and opponents on the sector’s most notable failures, even when, as with UTOPIA, those projects were launched many years ago under a unique mix of circumstances and constraints that virtually guaranteed painful lessons for others to learn from. To a large extent, the tone and content of existing research reflects the often intense political battles at the state, local and national levels over restrictions on community broadband projects. As with most politically charged policy research, the result is a strong tendency to cherry-pick the projects studied and the data analyzed.
My own view is that case study-oriented research by myself and others provides clear evidence that community-owned broadband networks can and often do succeed in terms of both their internal economics and in bringing to their communities lower prices, faster speeds, better customer service and more robust support for the potentially large but difficult-to-internally-monetize social goods discussed in Frischmann’s book.
That being said, I also believe that state and local policymakers, local decision-makers and communication scholars could benefit from additional and less agenda-driven research in this area, perhaps conducted by a team of researchers representing a range of perspectives and expertise, and well sheltered from bias based on the source of their funding. Among the questions I view as worthy of such research are the following:
Given the intensity of debate surrounding state laws restricting the ability of communities to finance and control their local broadband networks; the FCC’s efforts to preempt such state restrictions and; the expansion of both publicly-owned and privately-owned (e.g., Google Fiber) competitive network models, I believe research focused on these questions can help local leaders, state and federal policymakers, and private sector players make better-informed decisions about how best to leverage the power of high-speed Internet access to benefit our nation’s citizens, businesses and public institutions.
While the focus of this post has been the relevance of Kelly’s ownership framework to the local broadband access market, I believe it is also relevant to policy issues and research questions related to the special access market, which is the focus of the final blog post in this series.
April 16th, 2016
In her 2012 book Owning Our Future: The Emerging Ownership Revolution, Marjorie Kelly, Executive Vice President and a Senior Fellow with The Democracy Collaborative, provides a framework for understanding and distinguishing what she describes as “generative” vs. “extractive” ownership designs. In key respects, the book builds on Kelly’s first book, The Divine Right of Capital: Dethroning the Corporate Aristocracy, published more than a decade ago (you can read the latter’s introduction here).
Drawing an apt and powerful parallel to the divine right of kings, Kelly’s first book does a masterful job of opening readers’ minds to the arbitrary and distorting nature of the ownership and control model embodied in today’s publicly-traded corporations. In Owning Our Future, she does an equally impressive job helping readers understand and appreciate the significance of the range of alternative ownership structures emerging across the economy. A clear, succinct and enjoyable read, Owning Our Future clarifies:
In a May 17, 2012 talk entitled From the Fringe to the Leading Edge: Generative Design Goes to Scale, at the annual conference of the Business Alliance for Local Living Economies (BALLE), Kelly highlighted the fundamental importance of ownership in our economy and our world, and the problems caused by today’s dominant form of ownership:
Every economy is built on the foundation of ownership… Questions about who owns the wealth-producing infrastructure of an economy, whose interests it serves, these are among the largest issues any society can face…The crises we face today, ecologically and financially, are tangled at their root with the particular form of ownership that dominates our world – the publicly traded corporation, where ownership shares trade in public stock markets. The revenues of the 1,000 largest of these corporations represent roughly 80% of global GDP.
Kelly then briefly reviewed what her years of research have led her to understand about “generative” alternatives to the dominant “extractive” form of ownership. “The first and most important difference,” she says, is a “Living Purpose.”
…the many ownership alternatives – from community land trusts and cooperatives to social enterprises and community ownership of the commons – these alternatives represent a single, coherent school of design. It’s a family of generative ownership designs. Together, they form the foundation of a generative economy.
Generative means the carrying on of life, and generative design is about the institutional framework for doing so. In their basic purpose, and in their living impact, these designs have an aim of generating the conditions where all life can thrive. They are built around a Living Purpose.
This is in contrast to the dominant ownership designs of today, which we might call extractive. Their aim is maximum extraction of financial wealth. They are built around a single-minded Financial Purpose.
But, according to Kelly, “purpose alone isn’t enough.” Also needed, she says, is “the presence of at least one other structural element that holds that purpose in place.” These additional elements of generative design are:
Membership. Who’s part of the enterprise? Who has a right to a say in profits, and who takes the risk of ownership? Corporations today have Absentee Ownership. Generative ownership has Rooted Membership, with ownership held in human hands.
Governance. Extractive ownership involves Governance by Markets, where control is linked to share price. Generative ownership involves Mission-Controlled Governance, with control held in mission-oriented hands.
Finance. Instead of the Casino Finance of traditional stock market ownership, generative approaches involve Stakeholder Finance, where capital becomes a long-term friend.
Networks…If traditional approaches use Commodity Networks, where goods trade based solely on price, generative economies use Ethical Networks, which offer collective support for social and ecological norms.
Kelly then notes that, while “[n]ot every ownership model has every one of these design elements…the more elements that are used, the more effective the design.”
How does this apply to the telecom sector?
Having listened to many an earnings call and followed the telecom industry for nearly three decades, I find it pretty clear that the dominant publicly-traded cable and telephone companies have an overriding Financial Purpose, as expressed by management’s intense focus on cash flow, stock price, profits, market share, average revenue per unit, pricing power, and other financial metrics.
Related to these metrics is an intense focus (in statements made to Wall Street analysts as well as actual financial decision-making) on “return of capital to shareholders,” largely in the form of dividends and stock buybacks.
While this is perfectly legal and very understandable from the perspective of corporate management (whose compensation is often based on stock price), the fact is that these returns to shareholders are allocations of cash flow that might otherwise be used to deliver more value to customers. For example, this cash flow could be invested in network upgrades and/or improved customer service. The latter is particularly notable, since both the cable and telephone industries have longstanding and well-earned reputations for poor customer service, as reflected in virtually all national surveys (and plenty of anecdotes shared among friends and posted on the web). This, I believe, is largely because, as monopolists or duopolists with substantial market power and extractive ownership designs, these companies tend to be more focused on satisfying the desires of shareholders and Wall Street analysts than those of largely-captive customers with limited options for taking their business elsewhere.
These same industry dynamics are also clear evidence that, in addition to Financial Purpose, the nation’s large publicly-traded cable and telecom giants are characterized by what Kelly refers to as Absentee Ownership, Governance by Markets, Casino Finance and Commodity Networks, and that their managements are heavily influenced by pressure from Wall Street analysts and traders, who operate even closer to the core of our economy’s financial extraction machinery.
AT&T as an example of extractive ownership
In my view, AT&T, the nation’s largest ILEC, is a good example of the kind of financially extractive ownership model that currently dominates the top tier of telecom companies, virtually all of which have their historic roots in monopoly or near-monopoly market environments. Below is a brief review of key elements of the company’s history over the past decade or so, to illustrate what I mean.
In 2004 AT&T (then SBC Communications) announced that its next-generation network upgrade strategy would rely mainly on fiber-to-the-node (FTTN) technology, which uses a form of DSL technology to deliver both Internet and TV services over the final stretch of copper wires that connect customer locations. The initial budget allocated $6 billion to deploy a FTTN network (dubbed U-verse) that passed 18 million premises.
By late 2012, AT&T’s U-verse footprint had expanded to 24.5 million premises, which suggests a total U-verse investment up to that point of about $8 billion. At that time, AT&T announced Project Velocity IP (VIP), which was to invest another $6 billion over three years to expand U-verse availability to 33 million premises (roughly 43% of its total footprint), while deploying next-generation DSL technology to boost Internet speeds for another 24 million premises, suggesting a total next-generation wireline investment of roughly $14 billion over the course of a decade.
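The $8 billion estimate above can be reproduced with simple back-of-envelope arithmetic, assuming the per-premise cost held steady at the rate implied by the initial budget (an assumption for illustration, not an AT&T-reported figure):

```python
# Back-of-envelope check of the U-verse investment figures cited above.
# Assumes per-premise cost stayed near the initial budget rate; all
# dollar inputs are the figures cited in the post, not AT&T filings.

initial_budget = 6.0e9        # initial FTTN (U-verse) budget, dollars
initial_premises = 18.0e6     # premises passed under the initial plan

cost_per_premise = initial_budget / initial_premises   # roughly $333

premises_2012 = 24.5e6        # U-verse footprint by late 2012
implied_uverse = cost_per_premise * premises_2012      # roughly $8.2B

project_vip = 6.0e9           # Project VIP budget, 2012 announcement
decade_total = implied_uverse + project_vip            # roughly $14B

print(f"Implied U-verse spend by 2012: ${implied_uverse / 1e9:.1f}B")
print(f"Implied decade total: ${decade_total / 1e9:.1f}B")
```

The result lands at about $8.2 billion for U-verse through 2012 and roughly $14 billion for the decade, consistent with the figures cited in the text.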
It’s important to note that, based on AT&T’s announced plan, when this second major upgrade program was to be completed, roughly 19 million premises (one of every four passed by its networks) would not have access to ANY wireline broadband service from AT&T.
To put AT&T’s network upgrade strategy in a financial and corporate strategy context, consider that, between 2006 and 2015, AT&T returned an average of nearly $14 billion per year to its shareholders in the form of dividends and stock buybacks. This means the company has returned roughly as much money to shareholders in an average year as it has allocated over a decade to its next-generation wireline network upgrade. In total, between 2000 and 2015 the company returned nearly $172 billion to shareholders in the form of dividends and stock buybacks. That’s equal to 73% of its total capital spending during this period, with two years, 2012 and 2013, seeing shareholder returns exceed CapEx (117% of CapEx in 2012 and 107% in 2013). By my estimation, if AT&T had reallocated just half of these shareholder returns to full “fiber-to-the-premise” network upgrades, it could have extended state-of-the-art all-fiber networks to nearly 90% of the roughly 76 million premises passed by its network, assuming construction costs comparable to those budgeted by Verizon, the nation’s second largest telco, nearly a decade ago. Boost that CapEx reallocation percentage a bit more, and virtually all of AT&T’s customers and the communities in which they live and work would by now be enjoying the direct and spillover benefits enabled by fiber’s symmetrical gigabit-level speeds and superior reliability.
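The arithmetic behind these comparisons can be sketched as follows. The inputs are the figures cited above, and the 90%-coverage fiber scenario is a rough illustration rather than an engineering estimate:

```python
# Back-of-envelope check of the shareholder-return comparison above.
# All inputs are figures cited in the post, not audited numbers.

returns_total = 172.0e9       # dividends + buybacks, 2000-2015
capex_share = 0.73            # returns as a share of total CapEx
total_capex = returns_total / capex_share              # roughly $236B

# Hypothetical reallocation scenario from the post: half of the
# shareholder returns redirected to fiber-to-the-premise builds.
fiber_budget = returns_total / 2                       # $86B
premises_passed = 76.0e6      # approximate AT&T wireline footprint
coverage = 0.90               # fiber to ~90% of premises passed

budget_per_premise = fiber_budget / (premises_passed * coverage)

print(f"Implied total CapEx, 2000-2015: ${total_capex / 1e9:.0f}B")
print(f"Implied fiber budget: ~${budget_per_premise:,.0f} per premise")
```

The scenario implies a budget of roughly $1,250 per premise passed, which is in the neighborhood of the per-premise fiber construction costs cited for Verizon's build, the comparison the post invokes.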
A related indicator of AT&T’s priorities is its 2015 acquisition of DirecTV for $67 billion, about $48.5 billion of which was financed by equity and the remainder by debt. While this may prove to be a smart move for AT&T from a competitive and financial perspective, given the competitive weakness of its underfunded but once much-touted U-verse upgrade strategy, it’s worth noting that this massive M&A investment: 1) generated virtually no new infrastructure or competitive entry; 2) involved an additional debt burden greater than the total amount AT&T had invested in the initial U-verse and follow-on Project VIP phases of its wireline network upgrade; and 3) represented a total acquisition-related investment nearly five times the financial magnitude of those wireline upgrade investments.
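A quick comparison, using only the figures cited above, shows how the acquisition stacks up against the wireline upgrade spend:

```python
# DirecTV acquisition vs. wireline upgrade spend, per the post's figures.

acquisition = 67.0e9          # DirecTV purchase price
equity_portion = 48.5e9       # financed by equity
debt_portion = acquisition - equity_portion            # about $18.5B

wireline_upgrade = 14.0e9     # U-verse + Project VIP, per the post

# The new debt alone exceeds the decade's wireline upgrade spend, and
# the full acquisition is nearly five times that spend.
print(debt_portion > wireline_upgrade)
print(f"Acquisition vs. wireline upgrade: "
      f"{acquisition / wireline_upgrade:.1f}x")
```

The ratio works out to roughly 4.8x, matching the "nearly five times" comparison in the text.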
One need not be a technology or financial expert to get a sense from the above that the strategic priorities driving AT&T’s investment decisions focus more on share price and profits than on maximizing the social value of the Internet’s “spillover rich” infrastructure within the service territory originally granted to Ma Bell as a protected monopoly, and later inherited (and largely reconsolidated) by the corporate entity we now know as AT&T. While this strategy may make very good sense from the perspective of AT&T’s management and shareholders, my point here is that, given the company’s (and its peer group’s) dominant role in the communication sector and our national political economy, “what’s good for AT&T” may not be so good for the nation as a whole. And, if that’s the case, perhaps there’s more that we, as a society, can and should do to shift the focus of public policy toward an approach to the Internet that does, in fact, focus on maximizing its positive (yet difficult to quantify and even more difficult to internally monetize) direct and indirect spillover effects.
In the next post in this series I consider Kelly’s framework in the context of the local broadband access market.
April 16th, 2016
A key source that informs my perspective on special access policy—and telecom policy in general—is Brett Frischmann’s 2012 book, Infrastructure: The Social Value of Shared Resources. Selected chapters from the book can be found here, and Frischmann provides a good overview of the book, including its Internet-related sections, in this talk at Harvard’s Berkman Center. Frischmann is a professor and co-Director of the Intellectual Property and Information Law program at Cardozo Law School in New York City.
On page 5 of the book, Frischmann notes that “most economists recognize that infrastructure resources are important to society precisely because infrastructure resources give rise to large social gains.” But he also notes that infrastructure’s “externalities are sufficiently difficult to observe or measure quantitatively, much less capture in economic transactions, and the benefits may be diffuse and sufficiently small in magnitude to escape the attention of individual beneficiaries.” On page 6 he cites the “comedy of the commons” concept developed by Carol Rose which, he explains, “arises where open access to a resource leads to scale returns—greater social value with greater use of the resource.”
The book includes a chapter focused specifically on the Internet, which Frischmann describes on page 345 as “a spillover-rich environment because of the basic user capabilities it provides and the incredibly wide variety of user activities that generate and share public and social goods.”
But, as with other infrastructure, it is not easy to quantify the net value of Internet-supported social goods, or even identify what all of those social goods are. As Frischmann explains on page 347, Internet connectivity involves “a very high degree of social value uncertainty.”
It is impossible to predict with any degree of confidence who or what will be the sources of social value in the future. Accordingly, there is no reason to defer to private firms in this context. First, there is no reason to believe that firms are better informed, capable of maximizing social value, or likely to resist the pressure to discriminate, prioritize, or optimize the infrastructure based on foreseeable and appropriable private returns. Second, there is no reason to trust that markets will correct misallocations…[F]irms may be strongly biased in their estimation of the future market value to favor services that they currently offer or expect to offer, sponsor, or otherwise control, and to disfavor those that they do not.
One recent example of the tendency of private access network owners with market power to “discriminate, prioritize, or optimize the infrastructure based on…appropriable private returns” is Comcast’s recently launched Stream TV service. The element of discrimination is clear in that Stream TV, unlike competing services like Netflix, does not count against the data caps that Comcast customers are or will be subject to.
On page 346 Frischmann points to two demand-driven problems that need to be considered by Internet-related policies:
The Internet infrastructure is a mixed infrastructure, and as such, it faces the two types of demand-driven problems discussed throughout this book: First, it faces concerns about undersupply and underuse of infrastructure to produce infrastructure-dependent public and social goods, which leads to underproduction of those goods. Second, it faces concerns that infrastructure development may be skewed in socially undesirable directions. For example, if private infrastructure owners prematurely optimize infrastructure for uses that they expect will maximize their private returns, and in doing so choose a path that forecloses production of various public or social goods that would yield greater net social returns, the social option value of the Internet is reduced. This latter concern may involve dynamic shifts in the nature of the Internet infrastructure, such as optimizing networks in a manner that shifts from mixed infrastructure toward commercial infrastructure.
In key respects, these two problems correspond to the social harms of primary concern to Singer (undersupply by companies subject to regulation) and Cooper (actions by private infrastructure owners to “maximize their financial returns” in ways “that foreclose production of various public or social goods that would yield greater net social returns”).
Frischmann’s analysis of infrastructure and Internet dynamics leads him to conclude that the abundant but difficult to predict and quantify social benefits of the Internet are best supported via commons management. On page 7 he describes commons management as “the situation in which a resource is accessible to all members of a community on nondiscriminatory terms, meaning terms that do not depend on the users’ identity or intended use.” This, he says, “can be implemented through a variety of public and private institutions, including open access, common property, and other resource management or governance regimes.”
Frischmann acknowledges that “grouping ‘open access’ and ‘commons’ under the ‘commons management’ umbrella will be troublesome to some property scholars:”
Open access typically implies no ownership or property rights. No entity possesses the right to exclude others from the resource; all who want access can get access, typically for free. Commons typically involves some form of communal ownership (community property rights, public property rights, joint ownership rights), such that members of the relevant community obtain access “under rules that may range from ‘anything goes’ to quite crisply articulated formal rules that are effectively enforced” and nonmembers can be excluded.
There are at least three dimensions of distinction between open access and commons as traditionally understood: first, ownership (none vs. communal/group); second, the definition of community (public at large vs. a more narrowly defined and circumscribed group with some boundary between members and nonmembers); and third, the degree of exclusion (none vs. exclusion of nonmembers). These distinctions are important, especially for understanding different institutions and how social arrangements operate at different scales.
In two other posts (see here and here) I consider key issues related to infrastructure ownership in more detail, from the perspective of the “generative” vs. “extractive” ownership framework developed by Marjorie Kelly in her book, Owning Our Future.