Wednesday, November 22nd, 2017
AT&T Goes Hollywood
A. Michael Noll
© Copyright 2017 AMN
AT&T wants to purchase Time Warner — the White House and the Justice Department correctly oppose the acquisition. The acquisition would create a huge vertical integration of content and conduit that would not benefit consumers, in my opinion. But the local telephone companies have a long history of lusting after content and Hollywood.
Today’s AT&T is really a former local Bell company: the past Southwestern Bell that then became SBC Communications which then acquired AT&T and then wrapped itself in the AT&T identity.
Over two decades ago, the local Bell companies chased after the entertainment industry. And now one of the two remaining super Bells – AT&T – is again afflicted with Hollywood fever.
AT&T is a conduit company, providing the cables and wireless paths over which consumers access various services. In 2015, AT&T extended its control over conduit through its acquisition of DirecTV for nearly $50 billion, delivering video over satellite to homes. But throughout history, the old Bell operating companies have lusted after also providing the content that their customers want to access over the conduits.
The telecommunication conduit business in the United States has become mostly a duopoly. AT&T and Verizon dominate wireless. Either AT&T or Verizon, together with a CATV company, dominates wired access. Duopolies inherently adjust “competition” so that markets are shared and profits maximized, without attracting government attention. In the late 1940s, the Hollywood studios were forced to divest their movie theaters, ending their vertical integration of content and exhibition. So today, if AT&T wants to become a content company, it should be required to divest its wireless and wireline conduit businesses.
AT&T knows little of Hollywood and the news and entertainment businesses. It should stick with its strengths in providing wired and wireless conduits, as I wrote in 1993.* One might argue that if AT&T wants to lose its shirt chasing Hollywood, then let it. However, like decades ago, now is still not the time for AT&T to go Hollywood.** “Hollywood” might well end up as “Follywood” for AT&T.
*“Baby Bells Should Stick With Strengths,” by A. Michael Noll, Los Angeles Times, October 22, 1993, p. B15.
**“The phone company has gone Hollywood,” by A. Michael Noll, Morris County Daily Record, January 7, 1994, p. A11.
November 22, 2017
Wednesday, November 15th, 2017
The Department of Media and Information (MI) at Michigan State University invites applications for a tenure-system faculty position at the rank of Associate or Full Professor in the area of media and information policy. We seek a visionary leader with an innovative research program and/or industry or policy-making experience who will develop the Quello Center to the next level of prominence, addressing critical issues of media and information policy in a digital economy. The successful candidate will have a strong record of obtaining grants, contracts, and/or other types of external funding in support of research and outreach.
A terminal degree in a discipline related to media and information policy is required, including but not limited to many disciplines in the social sciences, engineering, and law. We value experience in public policy or industry and a willingness to engage with stakeholders outside the academy. Teaching will include undergraduate and graduate courses in a vibrant multi-disciplinary environment.
The successful candidate will hold the endowed chair associated with the Quello Center and provide strategic direction and leadership for the Center. The Quello Center was established in 1998 to be a world-wide focal point for excellence in research, teaching, and the development and application of expertise in telecommunication management and policy. Its focus has since broadened to policy issues in the digital economy more generally. It is dedicated to original research and outreach on current issues of information and communication management, law, and policy.
The Center is associated with the MI department, home to a world-class faculty known for its cutting-edge research on the design, uses, and implications of information and communication technologies (ICTs). Important MI research foci include communication economics and policy, social media, human computer interaction, digital games and meaningful play, ICT for development (ICT4D), and health and technology. MI faculty members also design media and develop socio-technical systems.
To apply, please visit the Michigan State University Employment Opportunities website (http://careers.msu.edu), refer to Posting #477204, and complete an electronic submission. Applicants should submit the following materials electronically: (1) a cover letter indicating the position you are interested in and summarizing your qualifications for it, (2) a current vita, (3) if appropriate, a URL to a website describing your current research/outreach activity, and (4) the names and contact information for three individuals willing to serve as your recommenders to the search committee. The search committee will begin considering applications on January 30, 2018. The search closes when a suitable candidate is hired.
Please direct any questions to Professor Charles Steinfield, Search Committee Chair, Department of Media and Information at Michigan State University, at firstname.lastname@example.org.
MSU is an affirmative action, equal opportunity employer. MSU is committed to achieving excellence through cultural diversity. The university actively encourages applications and/or nominations of women, persons of color, veterans and persons with disabilities.
Tuesday, November 14th, 2017
Global symposium on AI & Inclusion in beautiful Rio de Janeiro
Last week, I had the immense pleasure of participating in the Global AI & Inclusion Symposium at the Museum of Tomorrow in Rio de Janeiro, Brazil. The Global Network of Internet & Society Centers (NoC) invited a wide range of stakeholders to Rio during November 8-10, 2017. Spearheaded and organized by the Berkman Klein Center for Internet and Society at Harvard University and the Institute of Technology and Society in Rio, the symposium brought together researchers, industry, NGOs, and other entities to discuss issues around inclusion and artificial intelligence (AI).
One of the key aspects of this symposium was the inclusion of perspectives from not only a wide range of areas and disciplines, but also from all regions across the globe. Each region was represented; however, more inclusion of underrepresented areas was noted as an action item for future activities, as the discourse still featured a larger number of perspectives from Western backgrounds. For example, although China is one of the key players in AI, only a small number of participants were from China or provided a background on AI and inclusion in China.
The symposium was jam-packed with high-caliber talks, discussions, and activities. The symposium program can be found here. Whereas the first day focused on creating a common understanding of AI and inclusion as concepts and frameworks, the second day identified opportunities, challenges, and possible approaches and solutions to increase inclusion in AI, and the third day focused on areas for future research, education and interface building.
All speakers provided impressive background and knowledge on AI and inclusion to a multidisciplinary and multifaceted audience, which created a steep learning curve for me as a social scientist with (previously) little background in the technologies behind AI. However, the design of the symposium talks and activities facilitated a deep understanding of the issues around AI and inclusion for individuals from any disciplinary background.
Key issues in AI and inclusion
One of the key issues that stood out at this symposium is the bias and exclusion built into AI through the way it is created and trained. The algorithms at the heart of AI systems are trained on datasets and are only as good as those datasets. This means that if a training dataset—assembled by humans—is biased, the algorithm will be biased too. This quickly became apparent through a variety of examples, including work from Desabafo Social, a non-profit that promotes social justice and youth participation in Brazil, whose videos revealed racist bias in the search algorithms of several photo-sharing sites. An impressive example of their enlightening videos can be found here.
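The mechanism can be made concrete with a deliberately tiny sketch. The feature strings, labels, and the "model" below are all hypothetical and invented for illustration—this is not any real search or tagging system—but even a trivial learner that memorizes the majority label in its training data will faithfully reproduce its annotators' bias:

```python
from collections import Counter

def train_majority_tagger(dataset):
    """'Train' a trivial tagger: for each feature, remember the label
    most frequently attached to it in the training data."""
    by_feature = {}
    for feature, label in dataset:
        by_feature.setdefault(feature, Counter())[label] += 1
    return {f: c.most_common(1)[0][0] for f, c in by_feature.items()}

# Hypothetical training set whose labels reflect the annotators' bias,
# not any property of the images themselves.
biased_training_data = [
    ("person_at_desk", "professional"),
    ("person_at_desk", "professional"),
    ("person_braiding_hair", "unprofessional"),  # biased annotation
    ("person_braiding_hair", "unprofessional"),  # biased annotation
    ("person_braiding_hair", "professional"),
]

tagger = train_majority_tagger(biased_training_data)
print(tagger["person_braiding_hair"])  # → unprofessional
```

The bias never appears in the code itself—only in the data—which is why auditing training datasets, as Desabafo Social's videos urge, matters as much as auditing algorithms.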
These issues of bias and exclusion at the creation stage do not just include race as a factor, but any underrepresented group. For example, the technology created for airport security prompts the security agents to choose whether a person is male or female before entering the millimeter wave scanner. Based on training datasets of typical male and female bodies, the scanner then decides whether there could be any objects hidden on those bodies. However, this AI technology (Automatic Target Recognition, ATR) only differentiates two genders, meaning that anyone who does not fall into these two categories will be marked as suspicious and will have to go through a secondary security hand search.
Another striking takeaway from the conference was the lack of a legal definition of AI and the absence of global AI standards. For example, the accuracy of AI face recognition is very high for white males, but low for black females. A good-practice standard, such as a minimum accuracy requirement, does not currently exist, although a number of entities, such as the Mozilla Foundation, aim to create such standards in the form of a “fair AI” badge—similar to the fair-trade badge—to remedy these issues.
Another area of concern in AI is privacy and surveillance, as AI relies on copious amounts of data to learn and improve its algorithms. However, users are often unsure of when, where, and how their data are collected and used for which purposes. Although some regulations have been passed to protect users’ privacy, these regulations are not global, and different regions apply different laws and regulations. Accordingly, there were calls for—first of all—a global legal definition of AI, which would provide the basis for creating global regulations on inclusion, privacy, and other areas affected by AI. Again, the Mozilla Foundation made a number of suggestions on “fair AI” and they provide a “holiday buyer’s guide” on technology that will “snoop” on you—i.e., presents that you should probably not give to your loved ones… unless you’d like them to be snooped on…
Future Event on AI, bias, and inclusion at the Quello Center
Overall, the symposium left me personally with more questions than answers, but I am consoled by the fact that every single participant I spoke with felt invigorated and motivated to do something to move forward the cause of increasing inclusion in AI. For one, we all agreed to help make these issues a public conversation topic—this blog post is only the start. At the Quello Center, I will be organizing a discussion roundtable on artificial intelligence, bias, and social exclusion, which will delve deeper into these issues based on the work happening here at MSU. Watch this space for a time and date during the spring semester 2018.
Tuesday, November 14th, 2017
The Quello Center’s Broadband to the Neighborhood Project is surveying residents in three areas of Detroit. We are delighted to be collaborating with the Center for Urban Studies at Wayne State University on the fielding of the survey, putting their CATI system to work. Yesterday, prior to some focus groups in Detroit, we were able to visit the Center for Urban Studies and meet the team conducting our field research, led by Charo Hulleza (far left in photo) and her research assistant, John Jakary (far right in photo). Our thanks to them for their professional team work and collaboration on this project. They are an excellent team, see below.
Friday, November 10th, 2017
My colleagues and I had a wonderful conversation with Tommy Edison, host of The Blind Film Critic, yesterday afternoon, following his presentation at UARC’s (MSU Usability/Accessibility Research and Consulting) World Usability Day conference. Blind from birth, Tommy’s website describes him as the ‘Blind Film Critic, YouTuber, Radio Personality, Public Speaker’, and he truly is a master of all. We organized this conversation to discuss his life and work and particularly the lessons he has learned about disabilities and access to the Internet. As Tommy said, ‘too few people have any experience with a blind person’, and even fewer with how a blind person uses the Internet.
The most important insight he provided was on the centrality of the mobile smartphone for enabling better access to the Internet for the blind. As he argued, computers, such as laptops, and the Internet have become more accessible since the early days for those born blind or having lost their eyesight, but there are still major hurdles. He had always found it difficult to deal with the computer screen, for example, even though the graphical user interface has of course been one of the key breakthroughs in helping sighted people use the Internet.
A breakthrough on the computer-based Internet has been advances in text-to-voice, which he uses. But in this respect, he has found the smartphone to be the biggest breakthrough: he can envision the keyboard of a smartphone through touch and therefore navigate the Internet far more easily. He can touch a key once to hear its function, and twice to activate it.
I asked about the use of voice search, and whether this provided a similar breakthrough for him. However, his concerns over privacy trumped the value of voice search. And as we increasingly design websites and blogs for mobile-first access, we are often making the Internet more usable for those with impaired sight.
Tommy Edison has been blind since birth and is now producing videos online that offer a glimpse into his life and the funny challenges he faces daily. Tommy has shown us what it’s like for someone who is blind to use an ATM for the first time and how some people who are visually impaired organize their money. Plus, Tommy is living his dream of reviewing movies as the Blind Film Critic. With his unique and interesting perspective, Tommy says “I watch movies and pay attention to them in a different way than sighted people do. I’m not distracted by all the beautiful shots and attractive people. I watch a movie for the writing and acting.” In addition to being the Blind Film Critic, Tommy has been a radio professional for nearly 25 years, having spent the last 19 at STAR 99.9 FM in Connecticut as a traffic reporter. Tommy’s engaging personality, along with his on-air excellence and entertaining demeanor, has garnered him much media attention.
The Center thanks the Quello Center’s Valeta Wensloff and Graham Pierce, the Assistant Director of Usability/Accessibility Research and Consulting at MSU for helping to bring this conversation together.
Friday, November 3rd, 2017
Professor Sandi Smith in the Department of Communication of the College of Communication Arts & Sciences at MSU was named one of the University’s few Distinguished Professors at a ceremony yesterday at the University Club. She joins Professor Bradley Greenberg, one of her mentors, who received this recognition in 1990.
Sandi and the other newly named professors were featured in a video about their research and teaching. I think everyone in the audience was ready to declare a new major and return to university to work with scholar-teachers like Sandi and the others honored yesterday. They were all seriously inspirational, talented, and dedicated academics.
Here is a photo of Sandi with Dean Prabu David and Professor Kami Silk, the College’s Associate Dean of Research. Sorry about the shading – the room was dark – but you can clearly see how pleased everyone was with the awards.
Tuesday, October 31st, 2017
We are delighted to announce that Vincent Curren, principal of Breakthrough Public Media Consulting, Inc., has accepted our invitation to join the Quello Center’s Advisory Board. Given his experience in public broadcasting and his current focus on the future of broadcasting standards and their implications for the industry, his appointment helps reinforce the Center’s broadcast legacy tied to James H. Quello.
Recently, Vinnie visited the Quello Center and provided his perspective on the future of public broadcasting. He focused on the new IP-based standard created by the Advanced Television Systems Committee (ATSC), called ATSC 3.0. As he argues, this new standard is likely to enable real synergies between the Internet and broadcasting, and much much more, even helping to usher in the next generation of television.
As principal of his firm, Breakthrough Public Media Consulting, Vinnie is helping public media companies navigate today’s dynamic and competitive media world. More concretely, he is working with the Public Media Company to help public television stations leverage the power of ATSC 3.0, the next generation, broadcast television standard.
Before leaving to start his own firm, Vinnie served as Chief Operating Officer of the Corporation for Public Broadcasting (CPB), a position that he held for nearly a decade. While at CPB, Vincent Curren had overall responsibility for managing station policy, grant-making and station support activities, ensuring that all Americans receive robust public media services for free and commercial-free. Prior to being named Chief Operating Officer, Vinnie was the Senior Vice President for Radio at CPB.
Vinnie has been a major market station general manager (WXPN, Philadelphia), has held programming, fundraising, and engineering positions in radio, been a commercial television producer/director, and has served on the boards of the Development Exchange (now Greater Public) and the Station Resource Group.
Vinnie holds a BA from SUNY Buffalo (Psychology) and an MS from the University of Pennsylvania in Organizational Dynamics. After Vinnie accepted our invitation to join the Board and had a chance to review its members, he spoke of the quality of the Board. He added that, coincidentally, he happened to have been a fellow graduate student at the University of Wisconsin-Madison in the 1970s with another member of our Board, Bob Pepper, now at Facebook, but formerly at Cisco, and who was a major figure at the FCC. Vinnie said Bob was the ‘star Larry Lichty student’, referring to Professor Lawrence W. Lichty, one of the foremost scholars of the history of broadcasting. In fact, when I first met Dr Pepper, he was a professor at the University of Iowa, and focused on the history of public broadcasting.
So it is wonderful to have Vinnie Curren, one of the nation’s leading thinkers about the future of public broadcasting, as well as his former colleague at the University of Wisconsin-Madison, Bob Pepper, along with all the other prominent figures on the Quello Center’s Advisory Board. We are honored.
Director and Professor of Media and Information Policy
Tuesday, October 24th, 2017
Clear evidence of the transfer of knowledge across universities is illustrated by an innovation in the Department of Media and Information that will bring a coffee & cakes event this Friday, 3:30pm in the MI Conference Room. Coffee and cakes will be available to all MI staff, graduate students, and faculty who attend.
Tech transfer? Well, this innovation comes via Dr Bibi Reisdorf, Assistant Professor & Assistant Director of the Quello Center, who received her DPhil from the Oxford Internet Institute (OII), where there is some claim to beginning a tradition of coffee and cakes late on Friday afternoons.
We thank Bibi and the OII for fostering an innovation at MSU that is sure to be a hit and help bring colleagues together in ways that will stimulate collaboration in more ways than enjoying desserts 🙂
Wednesday, October 11th, 2017
Bill Dutton will present the findings of the Quello Search Project to kick off a workshop on fake news and filter bubbles at Bruegel, a European think tank, specializing in economics, that is based in Brussels. Background on the Quello Search Project can be found in the initial report of the project, Search and Politics: The Uses and Impacts of Search in Britain, France, Germany, Italy, Poland, Spain, and the United States. A short blog about the thrust of our findings is also online, entitled “Fake News, Echo Chambers and Filter Bubbles: Underresearched and Overhyped“.
Monday, October 2nd, 2017
Last week, Vincent (Vinnie) Curren, Principal at Breakthrough Public Media Consulting, Inc., gave an insightful Quello Center presentation about the technological and market potential of ATSC 3.0, an IP-based standard created by the Advanced Television Systems Committee (ATSC). As CNET put it, this standard was created with the idea that most devices would be Internet-connected, enabling a hybrid system whereby the main content (audio and video) would be sent over the air, but other content (advertisements) would be sent over broadband and integrated into the program. This creates some very interesting opportunities for individualized marketing, though as ATSC touts in a somewhat cutesy promotional video, ATSC 3.0 is capable of a lot more.
The conversation with Vinnie took an interesting turn (to me anyhow), when he contrasted the state of public broadcasting in Michigan with that in Arkansas. According to Vinnie, public broadcast station management in Michigan is highly balkanized, whereas in Arkansas, it is largely centralized. This implied far fewer individual station engineers and managers in Arkansas, where budget savings from having a smaller bureaucracy are instead applied toward better local news coverage. Effectively, Vinnie was touting the benefits of merger to (state level) monopoly.
This statement immediately set off my antitrust alarm (which sounds like this). After all, even if a merger between two firms that preserves both firms’ products (e.g., broadcast stations) can reduce costs, monopolistic ownership could still raise prices above that in a duopoly by internalizing competition between the firms. More specifically, when one firm in a duopoly raises its price, some of its customers will switch to its competitor’s product and vice versa. This competitive threat puts downward pressure on prices relative to what happens under a monopoly. When a monopolist sells both products, a rise in the price of one positively impacts demand for the other, inducing the monopolist to set higher prices unless a merger to monopoly lowers costs sufficiently to offset this anti-competitive effect. The fact that antitrust practitioners seldom consent to a merger to monopoly suggests that the anti-competitive effect usually dominates.
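The internalization logic can be illustrated numerically with a standard textbook model—a symmetric differentiated-products duopoly with linear demand q_i = a − b·p_i + c·p_j (b > c > 0). The parameter values below are purely illustrative and not drawn from any broadcasting data:

```python
def duopoly_price(a, b, c, mc):
    """Symmetric Nash equilibrium price when each product is owned by a
    separate firm.  From the first-order condition of
    (p1 - mc) * (a - b*p1 + c*p2), imposing symmetry p1 = p2."""
    return (a + b * mc) / (2 * b - c)

def monopoly_price(a, b, c, mc):
    """Symmetric price when one owner sets both prices, internalizing
    the diversion of customers between the two products."""
    return (a + (b - c) * mc) / (2 * (b - c))

# Illustrative demand and cost parameters (not calibrated to any market).
a, b, c, mc = 10.0, 1.0, 0.5, 1.0
p_duo = duopoly_price(a, b, c, mc)
p_mono = monopoly_price(a, b, c, mc)
print(f"duopoly price = {p_duo:.2f}, monopoly price = {p_mono:.2f}")
assert p_mono > p_duo  # internalized competition pushes prices up
```

In this sketch the merged owner charges more because a higher price on one product no longer loses customers to a rival—it shifts some of them to the owner's other product. Absent cost savings, that is the anti-competitive effect regulators worry about.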
However, the broadcasting market is different! Broadcasters operate in a multi-sided market that is likely to become even more complicated by the spread of ATSC 3.0. First, consumers of content do not pay broadcasters to watch television. Instead, broadcasters subsidize consumers, but charge advertisers for airing commercials (though in the case of public broadcasting, this is largely supplemented by contributions from viewers like you). Broadcasters may also charge retransmission fees to cable operators, who carry broadcast content and do charge consumers for it. Moreover, with ATSC 3.0, Internet service providers will have to be involved in this market if advertisements are to be integrated via broadband. This means that the effect of merger operates through a mechanism that is far more complex than the “internalization of competition.”
After Vinnie’s presentation I considered whether economists have attempted to tackle the issue of merger in a multi-sided market. The issue is relatively understudied, but two papers stood out in my literature search:
Chandra and Collard‐Wexler (2009) theoretically explore a two-sided merger from duopoly to monopoly and then use difference-in-differences approaches to empirically investigate mergers by newspaper publishers. As in many other two-sided markets, newspaper publishers offer one side (consumers) a subsidy by charging below cost. This is because newspapers not only value readers’ circulation revenue, but also the value that advertisers place on consumers. In the model of Chandra and Collard‐Wexler (2009), the key factor that determines how newspaper mergers affect prices is how newspapers value the marginal consumer who is indifferent between two competing newspapers.
If the revenue that this consumer indirectly brings in through advertisement consumption is lower than the loss to the newspaper of subsidizing the consumer’s newspaper purchase, then competing duopolists will set higher circulation prices in equilibrium than a monopoly owner of the two papers (even absent any cost reduction by the monopolist). This result is driven in large part by the authors’ assumption that consumers who are indifferent between the two papers will turn out to be less valuable to advertisers, and hence will bring in advertising revenues that are lower than the subsidy they enjoy on the paper. The assumption is well motivated in the paper, but may not necessarily apply in broadcasting. Moreover, if the reader provides a positive value to the newspaper, then mergers can still increase prices (unless cost reduction is sufficient to counteract market power).
Tremblay (2017) sets up a relatively general multi-sided platform model that he uses to measure platform market power and to assess the effect of platform mergers. In this model, multiple platforms that facilitate interactions between distinct groups (e.g., broadcasters might serve consumers of content and advertisers) compete by pricing for each interaction facilitated by the platform.
The model highlights the complexity of analyzing multi-sided markets by recognizing that demand for any interaction is a function of not only the vector of prices involved in that interaction—as in a “one-sided” market—but also of the vector of all other interaction types! Thus, not only must we consider the demand response to a change in price for that interaction, but also the demand response to the numerous potential externalities that might exist (e.g., a negative network externality can arise on media platforms when more advertising diminishes consumer usage of the platform).
As such, in addition to consisting of the usual marginal cost and demand elasticity contingent markup, the equilibrium price for a specific interaction is also dictated by what Tremblay refers to as “marginal profit elsewhere,” which consists of the marginal changes that the interaction in question engenders on all other interactions. Moreover, in the case of a multi-platform seller (e.g., broadcaster that owns multiple stations), as might follow post-merger, the equilibrium price is impacted not only by the standard diversion term that gauges the extent to which a merger can internalize competition, but also by “diversion elsewhere,” which results from multi-sidedness. This “diversion elsewhere” means that some platform prices may decrease post-merger, suggesting that even without cost-reduction benefits, a horizontal platform merger may be efficient.
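The "marginal profit elsewhere" idea can be sketched with a toy single-platform model of my own construction (it is not Tremblay's model): viewers shrink linearly with the ad load, so raising the load earns more per viewer on the advertiser side while destroying value on the viewer side, and the profit-maximizing load is interior rather than maximal. All parameter values are invented for illustration:

```python
def platform_profit(n_ads, v0=1000.0, k=20.0, r=0.01):
    """Ad revenue for a toy two-sided platform.  Viewers decline with
    the ad load (a negative network externality): viewers = v0 - k*n_ads,
    and revenue = r * n_ads * viewers."""
    viewers = max(v0 - k * n_ads, 0.0)
    return r * n_ads * viewers

# Search the feasible ad loads: the optimum balances the direct margin
# on ads against the "profit elsewhere" lost when viewers tune out.
best = max(range(0, 51), key=platform_profit)
print(best)  # → 25, an interior optimum, not the maximum feasible load
```

Even in this stripped-down setting, the platform voluntarily stops well short of the maximum ad load—exactly the cross-side feedback that makes merger analysis in multi-sided markets so much harder than the one-sided diversion calculation.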
Certain factors complicate matters even further in broadcasting. As Vinnie pointed out, a significant part of a local television station’s advertising revenue comes from national advertisers, especially in the larger markets. In many cases, prices are not set unilaterally, but are determined through negotiations with advertisers. A larger multiple-market footprint gives larger broadcast groups leverage when they negotiate pricing for national clients. The effect of a broadcasting merger surely depends on this countervailing bargaining power as well as on whether content consumers view advertising as a good or a bad.
Additionally, a significant part of local station revenue comes from “retransmission consent fees.” If it opts for retransmission consent, a cable service provider is not required to carry the broadcaster’s channel, but if the cable operator chooses to do so, the broadcaster can demand “retransmission” or rights fees. A large station owner like Sinclair, which operates hundreds of stations, has additional leverage when negotiating retransmission consent fees with a large cable operator like Comcast. Of course, cable companies may pass these fees down in the form of higher prices to consumers. The additional revenue on the broadcaster side may lead to better content, but that will probably come at a higher price for cable service.
After reading this post, a former colleague who is very knowledgeable in this area pointed out that there has been some research on the trade-offs of consolidation in two-sided markets and related issues that predates the modern multi-sided market literature.
An early two-sided market analysis by Robert Masson, Ram Mudambi, and Robert Reynolds (1990) shows that competition can sometimes lead to a price increase. Moreover, in their model, competition either makes advertisers better off while making media consumers worse off, or the other way around. An even older related piece by James Rosse (1970) seeks to estimate cost functions in the newspaper industry without cost data. Yet another article on the newspaper industry, by Roger Blair and Richard Romano (1993), looks at newspaper monopolists, which, as the authors point out, nevertheless frequently sold newspapers below cost. I suspect that the two-sided logic for this to occur is a lot clearer to economists today than it was in 1993.
 ATSC is an international, non-profit organization developing voluntary standards for digital television. Member organizations represent the broadcast, broadcast equipment, motion picture, consumer electronics, computer, cable, satellite, and semiconductor industries. See https://www.atsc.org/about-us/about-atsc/.
 Other related work includes Filistrucchi et al. (2012) and Song (2013). See Filistrucchi, L., Klein, T. J., & Michielsen, T. O. (2012). Assessing unilateral merger effects in a two-sided market: an application to the Dutch daily newspaper market. Journal of Competition Law and Economics, 8(2), 297-329; Song, M. (2013). Estimating platform market power in two-sided markets with an application to magazine advertising. Available at https://ssrn.com/abstract=1908621.
 Note that I have not discussed the impact of merger on the price of advertising. The authors find that the effect on advertising price is indirect: if there is an increase in a newspaper’s circulation price, this will increase the average value to advertisers of that newspaper.
 In the words of Tremblay, demand contains an infinite feedback loop because demand for an interaction by platform X is a function of demand for an interaction Y and vice versa.
 Commercial stations have a choice between two options with respect to making their programming available to cable and satellite systems. They can exercise “must carry.” If they do this, the cable service provider is required to carry the broadcaster’s primary channel but does not have to pay the broadcaster any rights fees for carrying the channel. Alternatively, cable service providers can exercise “retransmission consent.”