Wednesday, April 17th, 2019
Dr. Laura DeNardis studies the invisible. In her new book, The Internet in Everything: Freedom and Governance in the Age of Smart Devices, DeNardis lifts the veil of the Internet to make visible the arrangements of embedded power across cyber-physical systems. At a recent talk hosted by the MSU Quello Center in the Department of Media & Information in the College of Communication Arts & Sciences, DeNardis outlined the importance of Internet governance and the misperceptions surrounding the term. “The term Internet governance is actually an oxymoron,” she said, adding, “it’s not just about governments … with the Internet, almost all of the infrastructure is run and owned by the private sector.”
See her full talk here.
DeNardis is globally recognized as one of the most widely read scholars in Internet governance. She has written six books, including The Global War for Internet Governance; Opening Standards: The Global Politics of Interoperability; and Protocol Politics: The Globalization of Internet Governance. She also serves as Faculty Director of the Internet Governance Lab and is a tenured Professor in the School of Communication at American University in Washington, DC.
Her new book addresses the fact that more things than people are now connected to the Internet. In her Quello talk, DeNardis described this as a major transformation: the Internet is no longer just a communication system but a control network in which battles over control of the infrastructure serve as a proxy for political power. At the same time, the “Internet of Things” connects everything from industrial energy sensors to cardiac monitors to home appliances, elevating the significance of the power structure behind the Internet. As such, she warns, “today, an outage in cyberspace is no longer about loss of communication but about loss of life.”
While human life may depend on the stability of the Internet, DeNardis points out that the Internet has a life of its own. In her talk, she explained that as long as electricity exists, the Internet lives. To illustrate how autonomous the Internet has become, she asks: What would the Internet be doing if humans suddenly left the earth? A lot would be happening, according to DeNardis. For example, automatic mortgage payments would still be made, home thermostats would adjust home systems, and robots in an Amazon fulfillment center would be fulfilling orders — until electricity stopped flowing. In fact, “everything that’s important to us in society depends on the stability and security of the Internet,” said DeNardis.
Armed with an Engineering Science degree from Dartmouth College, an MEng from Cornell University, a PhD in Science and Technology Studies from Virginia Tech, and a postdoctoral fellowship from Yale Law School, DeNardis has spent her career examining how these hidden technical governance arrangements will become the constitution of our future. While this system of governance is invisible to most, the implications of infrastructure and design decisions impact everyone. She remains diligent in her quest to make these arrangements of power visible because, in her view, Internet governance issues are essentially issues of human rights. See her Quello interview here.
Thursday, September 20th, 2018
Digital transformation and the digital economy are high on the agenda of policy-makers worldwide. Seeking to secure global leadership in future growth industries, such as 5G wireless communications, artificial intelligence, and the Internet of Things (IoT), an increasing number of countries are reassessing prevailing communications laws and policies. The urgency of the discussion is amplified by recent experiences in the digital economy that reveal some of its fundamental flaws and shortcomings. Increasing concerns about fake news, data security, privacy, winner-takes-all effects, and the tilting of markets in an algorithm-driven economy in favor of large technology companies all invite intense controversy. Contrary to the trend toward more open markets and less rigid and detailed regulation that permeated recent decades, responses to these present challenges are more varied and rebalance the roles of market players, government, and non-market players in a fresh way.
For example, three principal models are emerging in 5G markets: entrepreneurship (e.g., United States), regulated competition (e.g., Europe, with national variations), and state-driven (e.g., China, South Korea). Each model has strengths and weaknesses. Much of what we know suggests that the entrepreneurship model is best suited to explore the vast innovation space offered by 5G and to accelerate the rollout of network infrastructure. However, some Internet-based innovations are facilitated by non-discrimination and network openness. One challenge therefore will be to safeguard these goals in ways that do not quench entrepreneurship. Another is to overcome the cost of having to obtain rights of way from a multitude of local bodies with divergent interests that may delay network investment. U.S. initiatives such as the STREAMLINE Small Cell Deployment Act and a pending FCC order may mitigate this issue if adopted.
As important as these issues are, the unfolding digital transformation requires more fundamental reassessment and redesign of information and communication technology policy. Three particularly important areas in the United States are a revision of communications law, the gradual introduction of new regulatory practices that are better aligned with the digital ecosystem, and a reconsideration of which complementary policies will be required to fully harness the potential benefits of advanced information and communication technologies. None of these will be easy to tackle, and in the current environment of partisan politics it will be difficult to find feasible and sustainable solutions. Even under the best policy conditions, it is not feasible to achieve such broad reforms in a big sweep (and it might be risky to do so). However, it will be important to work in a piecemeal fashion toward such a framework.
It is most important to adopt a coherent policy model that is aligned with the overarching goals of a country and with the economic and technological conditions of the sector. The new value system is built around multi-sided platform markets and several types of complementary innovation processes. Modular and architectural innovations thrive under different conditions, and the legal and regulatory framework needs to be sufficiently elastic to allow both. In this interdependent value system, almost any policy intervention has direct and indirect effects not just on the regulated players but also on non-regulated players. Unless policies are carefully designed, these indirect effects may undermine achieving the envisioned policy goal. Recent examples of poorly designed policies with potentially far-reaching economic consequences are unbundling rules in Europe that delayed upgrades to next-generation network infrastructure and present net neutrality regulations that are either too stringent (Europe) or too lax (United States) to support the diversity of innovation characteristic of the Internet.
At the legislative level, it would be desirable to overhaul the Communications Act of 1934 as amended. Value generation in the new digital ecosystem requires a new balance between securing non-discrimination and interoperability and the ability to differentiate network qualities of service and cooperate across network and application layers. This will support different types of innovation that interact in a synergistic, virtuous cycle, allowing orchestration of the different elements of complementary innovations in areas such as logistics, health, and energy. Preserving the desirable goal of non-discrimination requires a new legislative model that avoids the complications of traditional common carrier regulation. It would allow network operators to offer innovative and differentiated services while safeguarding complementary innovations that benefit from an interoperable, open, and transparent network infrastructure. The current choice between Title I and Title II regulation does not offer an effective set of options to achieve these goals, and the effectiveness of the Federal Trade Commission (FTC) and antitrust policy is questionable and largely untested in digital markets.
The past three decades of reforms have generated clear evidence that public policy provides an important complement to private sector activity. Markets do not exist in a vacuum but need to be supported by formal and non-formal institutional arrangements to fully unfold their considerable benefits. In addition to updating the legal and regulatory framework of communication markets, recent observations also show that the public and non-profit sectors have important roles that often cannot be accomplished by market players but enhance the working of markets. A primary example is basic research organized with the goal of addressing some of humanity’s biggest challenges, all of which will require significant contributions from advanced communications technology (as evidenced, for example, in the 17 Sustainable Development Goals). In recent years, the United States has demonstrated only lukewarm and stagnant commitment to investing in basic research; higher funding would be desirable.
A second area is innovation with high public-good benefits that cannot easily be monetized and, therefore, will not be pursued by market players. This may include extending infrastructure to disadvantaged areas, community informatics services that build stronger civic engagement, or educational measures across the lifespan that allow inclusive digital participation. Non-profit organizations also play an important role in advancing these objectives. Policy needs to create an environment that deliberately supports, rather than quenches, institutional and organizational diversity in the provision of ICT services. Rather than prohibiting the involvement of such players (e.g., of municipalities in the provision of broadband access), policy would be well advised to allow for-profit, non-profit, and public sector players to co-exist. This will allow harnessing the benefits of advanced communications more fully than reliance on any single one of them. Alfred E. Kahn, one of the great visionaries of regulation, clearly recognized the need to foster such institutional diversity, and it may be time to heed this advice.
Thursday, June 28th, 2018
A new experimental broadcast license for WKAR-TV opens the door for broadcast innovation and research at the MSU College of Communication Arts & Sciences. Michael O’Rielly, commissioner of the U.S. Federal Communications Commission (FCC), recently visited Comm Arts and WKAR studios to show support for the deployment of ATSC 3.0 technology and announce the new license. The FCC-issued license allows WKAR to create a Next Gen Media Innovation Lab.
As part of this announcement, Quello Center Director Bill Dutton highlighted some unique opportunities for research and innovation using ATSC 3.0. Dutton’s presentation followed an overview of the capabilities of ATSC 3.0 by WKAR’s Technical Services Manager Gary Blievernicht. See an overview here.
Some call it ATSC 3.0; Dutton calls it Next Generation Broadcasting. ATSC 3.0 may have an unfortunate name, according to Dutton, but the potential of this broadcast innovation is generating excitement among public broadcasters, policy makers, and College of Communication Arts & Sciences administrators and faculty. Dutton explained the hype and history behind this ambitious initiative to help welcome Commissioner O’Rielly and bring faculty and staff up to speed on ATSC 3.0.
ATSC 3.0 is the merging of broadcasting and the Internet. This new broadcast platform offers the affordances of the Internet, such as customized content and more viewing options (e.g. choosing from various camera angles during a live game), while using a broadcast signal. This allows flexible, adaptable and future focused programming for broadcast television, including public stations like WKAR.
O’Rielly toured WKAR studios and the College of Communication Arts & Sciences before joining the presentations. During his visit, O’Rielly announced that WKAR-TV is the first public broadcasting station awarded an experimental license to use ATSC 3.0 over the airwaves. Only a handful of broadcasters across the nation will have this unique opportunity, he explained, and WKAR is the only broadcasting station to explore and develop this next generation broadcasting for public television.
With the experimental license, WKAR studios and the College of Comm Arts will continue to build a state-of-the-art ATSC 3.0 Media Innovation Lab. Dutton, who was part of a strong group that advocated saving MSU’s broadcast spectrum and establishing a center at MSU to experiment with ATSC 3.0, explained the potential behind this cutting-edge broadcast system and reflected on how the university considered auctioning off WKAR-TV spectrum at the FCC Incentive Auction in 2016.
“There was financial incentive, potentially over $206 million,” Dutton explained. However, the potential loss of WKAR was met with public backlash when hundreds of people gathered for a forum on the issue in January of 2016. Ultimately, the university decided that the end of over-the-air public television in Lansing, the deepening divides in access to broadcasting, and the lost potential for broadcast innovations were not worth the money. With the decision to keep WKAR on the airwaves, thought leaders and advocates like Prabu David, Dean of the College of Communication Arts & Sciences, decided that a partnership between the college and WKAR could help shape the future of broadcast.
The potential crisis was averted when MSU pulled out of the auction, Dutton said, and now the university has decisions to make about the potential for research and policy. Among other things, ATSC 3.0 will require policy considerations surrounding issues of localism, diversity, privacy, and security. Research is required to determine best practices and inform policy decisions.
The potential for the Media Innovation Lab is immense, Dutton continued, “we can do technical experiments to improve reception in rural areas and distressed areas of Lansing, and we can figure out different approaches to providing two way interactive digital content as well as targeted content.”
Dutton listed other capabilities and considerations for the lab as a testbed for personalization and new applications and services including alerts and information related to health, medical, emergency or public service announcements. The Next Gen Media Innovation lab can serve as a platform for user behavior research related to user adoption of ATSC 3.0, patterns of use and impacts of the technology. Dutton believes that such a lab can help improve public broadcasting in Lansing and attract students to the college who value being at the cutting edge of broadcast innovations.
Commissioner O’Rielly expressed his gratitude to WKAR and other public broadcasters for leading the way in television and broadcast research, saying “commercial broadcasters are not very good at doing research, because public broadcasters are so good at it.” He explained how commercial entities are able to use the research of public media broadcasters, such as WKAR, and modify approaches for commercial use. O’Rielly expressed excitement and awe of WKAR studios and Comm Arts, admitting that in his 25 years of public service he had never visited a public broadcasting station.
Monday, June 18th, 2018
The Natural Stupidity of Artificial Intelligence
A. Michael Noll
June 17, 2018
© Copyright 2018 AMN
Clearly, the future is coming, but at times we seem mostly to be chasing the past. Artificial intelligence is today’s “new” rage. But I think it is mostly hype and faith, coupled with a blind, and perhaps deliberate, ignorance of what was done decades ago.
In the 1960s, digital computers were programmed and used at Bell Telephone Laboratories (Bell Labs) to “compose” music. Today the same algorithmic approach is called artificial intelligence. Digital computers were also programmed in the early 1960s at Bell Labs to create art. And today this too is called artificial intelligence. Back then, the intelligence was the human who wrote the program and also the human who chose which computer-generated music and art was most liked.
A modern jetliner can fly itself. But is this artificial intelligence, or simply computer control following algorithms? The human pilots are just there to take over in case of an emergency.
What is “artificial intelligence”? “Artificial” means false, fake, not natural. “Intelligence” is the ability to process information and then to perform appropriate actions. It seems to imply some sort of innate human ability. Clearly, a machine is not human and thus cannot possess human qualities, such as intelligence. The “intelligence” of a machine consists of programmed algorithms that the machine carries out. It is not a human quality – it is fake.
I am reminded of decades ago when we were told that the human brain was like a digital computer, with neurons rather than bits. Well, this theory went nowhere, and the human brain is still much of a mystery. Decades ago there was the computer program ELIZA, created by Joseph Weizenbaum, that could act as a psychotherapist.* Weizenbaum explored in his book the human fascination with autonomous machines – and this was over four decades ago. I expressed concern in 1961 about computers that could learn and act.**
Today there clearly is considerable hype and publicity being given to artificial intelligence. It promises much, but seems mostly to attract investors and big companies that hope to cash in on it all (or the next “new” thing). The ignorance of what went on in the past, coupled with the lust of greed, is the natural stupidity of artificial intelligence.
* Joseph Weizenbaum, Computer Power and Human Reason, W. H. Freeman and Company (New York), 1976.
** A. Michael Noll, “Electronic Computer – Friend or Foe?” the Orbit, Vol. 5, No.3 (March 1961), Newark College of Engineering, pp. 8 & 16.
Friday, June 8th, 2018
True to news in the digital age, we’ve been scooped by the Internet and Twitter, specifically. But if you have not seen the news yet, we are most pleased to report that Johannes M. Bauer has accepted an offer from the College of Communication Arts & Sciences to serve as the next Director of the Quello Center, and the new Quello Chair in the Department of Media and Information. His appointment is effective August 16, 2018.
Professor Bauer has been affiliated with the Quello Center since its inception in 1998, and during his tenure as Chair of the Department of Media & Information. Both through his involvement in Center research and as chair of our home department, Johannes has been closely tied to the Quello legacy, making his appointment a strong move in support of continuity with the Center’s mission. As Advisory Board Member Marjory Blumenthal with the President’s Council of Advisors on Science and Technology (PCAST) put it: “The Center will be in excellent hands with Johannes.” And as Board Member Richard E. Wiley, Chairman of Wiley Rein LLP, said: “a great choice”.
Professor Prabu David, Dean of the College of Communication Arts & Sciences, said in announcing Professor Bauer’s appointment, that Johannes “is an accomplished scholar with an exemplary record in communication policy research and an ideal fit for this position.”
Indeed, Professor Bauer is an economist with an interdisciplinary perspective and a focus on the digital economy, having recently edited The Handbook on the Economics of the Internet (2016) with Michael Latzer at the University of Zurich. His work has been funded by major research organizations, including the US National Science Foundation and the Ford Foundation, as well as by industry and government, such as the US Department of Commerce. He has served on the boards of major journals and associations in his fields of expertise, including as an associate editor of a key journal in the field, Telecommunications Policy, and as a board member of the Research Conference on Communications, Information and Internet Policy, formerly the Telecommunications Policy Research Conference (TPRC).
With Johannes Bauer’s appointment, the Quello Center is set for a smooth transition to its next phase. Professor Bauer said he was honored to have been offered this opportunity and noted: “Communication policy faces important and often contentious issues. I will work hard on growing the reputation of the Center as a place conducting high-quality research and a forum for stakeholders to find common ground for good, forward-looking policy solutions.”
His new colleagues in the Center stand ready to support his transition as Director, and want to thank the faculty, the Quello Advisory Board, and the search committee, for drawing the search process to a successful conclusion. To quote the Chair of the Quello Advisory Board and CEO of the National Emergency Number Association, Brian Fontes: “Congratulations to the search committee for an excellent selection.”
More information about Johannes M. Bauer: https://msu.edu/~bauerj/
Information about the James H. and Mary B. Quello Center: http://quello.msu.edu/
Members of the Quello Advisory Board: http://quello.msu.edu/people/advisory-board/
Friday, June 1st, 2018
Between 24 and 28 May, thousands of communication scholars from all over the world gathered for the 68th International Communication Association Conference in Prague, Czech Republic. The College of Communication Arts & Sciences had a particularly strong presence at the conference, with more than 80 faculty and students presenting their research. The Quello Center’s Assistant Director Bibi Reisdorf and Research Fellow Laleah Fernandez were among those presenting, sharing some of the results from the Quello Search Project.
As part of the large program, the team working on the Quello Search Project, Grant Blank (Oxford Internet Institute, University of Oxford), Elizabeth Dubois (Department of Communication, University of Ottawa), Bill Dutton, Laleah Fernandez, and Bibi Reisdorf, put together a panel on “Personalization, Politics, and Policy: Cross-National Perspectives”. Despite the early morning start (8am) on the day following all the big ICA receptions, a good crowd turned up to hear about our results pertaining to how people make use of a diverse range of media to find information on political matters. The papers presented in this panel ranged from a focus on the personalization of search to a critical discussion of algorithmic literacy, and from exploring “the vulnerable” (i.e., those who have low search skills and little interest in politics) to discussing the policy implications of citizens’ complex media habits. The panel presentations were followed by a critical discussion of the presented results by Cornelius Puschmann of the Hans Bredow Institute for Media Research.
Immediately after this early morning panel, Bibi Reisdorf also took part in a panel on “Filter Bubbles: From Academic Debate to Robust Empirical Analysis”, which she co-organized together with Anja Bechmann, Aarhus University, and Oscar Westlund, University of Gothenburg & Volda University College. This panel paid specific attention to empirical evidence of the extent (or lack thereof) of filter bubbles around the globe. Despite different foci and datasets, all four panelists, Anja Bechmann, Aarhus University, Axel Bruns, Queensland University of Technology, Neil Thurman, LMU Munich, and Quello’s Bibi Reisdorf, presented findings that supported results from our Quello Search Project, which showed that although filter bubbles and echo chambers do exist, their magnitude is largely overstated and the resulting panics are unnecessary and unhelpful. The results were discussed and responded to by MSU’s very own newest ICA Fellow, Prof. Esther Thorson, who pointed out that this type of research needs to be more closely investigated and critically evaluated in light of existing communication theories, such as Uses and Gratifications or Confirmation Bias, to name just a few.
Overall, the conference was a great success for the Quello team, who also participated in a pre-conference workshop on survey design and survey questions on internet use organized by Prof. Eszter Hargittai, University of Zurich. In addition, we took a few hours each to enjoy beautiful Prague and the amazing culinary treats, including, of course, the fantastic beer and wine that can be found in this beautiful region of Europe.
Now, back in East Lansing, the team is busy finishing up a few book chapters and journal articles that revolve around the issues that were discussed at the ICA conference. Our next big conference will be TPRC in Washington, DC in September, where Laleah Fernandez will present some of our exciting results from the Detroit Study.
Wednesday, May 16th, 2018
Discussion of so-called fake news is gradually – maybe rapidly – shifting to the concept of ‘junk news’, and I fear this could be a dangerous move. I agree that the concept of fake news has many downsides, not the least of which is the degree it has been politicized. However, the shift to junk news might have even worse implications. My main concern is that it provides more of a rationale for blocking or filtering news, as if it were spam, for example.
This evening, the science reporter at PBS, Miles O’Brien, delivered an informative story about ‘Junk News’. It was well produced, but it captured a concern of mine that has been growing throughout the debate over misinformation. It also gradually moved into a discussion of work at Facebook designed to move so-called junk news off the screens of more users. Facebook representatives were thankfully averse to agreeing that they should edit the news, as if they were a newspaper, but they felt justified in looking for algorithms to diminish the visibility of news they viewed as low quality.
I for one am worried about this drive, as it will clearly do more than accomplish its stated objective. It is also likely to promote mainstream news outlets even more than they are presently favored, since they will be seen as safe sources. It will downgrade blogs and the opinions and views of networked individuals, which are at the heart of a more democratic collective intelligence.
So I will stop using the term junk news except as a target of criticism. I lean towards crowd-sourced ratings of blogs and posts, as I’d prefer the wisdom of the crowd over the wisdom of Facebook monitors, but that is what search seeks to accomplish. At least fake news is understood to be a politically charged concept; junk news is a concept that also has serious political implications, if my fears are justified.
Wednesday, May 16th, 2018
On May 10th, 2018, Google’s ‘father of the Internet’, Vint Cerf, gave the Quello Lecture at MSU, entitled ‘The Unfinished Internet’. Before the lecture, he was interviewed by Scott Pohl of WKAR public radio. Here is that interview: http://wkar.org/post/googles-father-internet#stream/0
Wednesday, May 16th, 2018
AT&T’s Tarnished Brand
A. Michael Noll
May 16, 2018
© Copyright 2018 AMN
The payment by AT&T of more than half a million dollars to President Trump’s attorney Michael Cohen has tarnished AT&T’s reputation and brand. It also raises concerns about the wisdom and competence of AT&T’s senior management.
Decades ago, the AT&T brand meant a lot to most consumers in the United States. AT&T owned the Bell System, which supplied telecommunication service as a regulated monopoly. In those old days, AT&T took seriously its responsibility to supply the public with quality service at affordable prices. However, AT&T was broken up and went through various divestitures, until in 2005 what was left of AT&T was acquired by Southwestern Bell. In effect, Southwestern Bell, a former Bell telephone company, cloaked itself in the identity of AT&T (its former parent). And now the AT&T brand has been tarnished, as it has been revealed that AT&T paid Michael Cohen more than half a million dollars, virtually as a gift, with no real work expected or received.
The two large remaining Baby Bells today are AT&T and Verizon, and they usually act in concert. If AT&T paid off Cohen and Verizon did not, a plausible explanation is that AT&T was hoping that Cohen would exert influence to obtain government approval of AT&T’s proposed acquisition of Time Warner.
The proposed acquisition of Time Warner by AT&T is fraught with questions. Would this acquisition create far too much control over content and the network conduit? Would it be too much vertical integration with no benefit to consumers? What does AT&T know about the entertainment business, other than that its antics certainly seem entertaining?
A. Michael Noll is a retired professor emeritus of communications. His earlier opinion of the AT&T proposed acquisition of Time Warner is at: http://quello.msu.edu/att-goes-hollywood/
Tuesday, May 1st, 2018
THE QUELLO CENTER PRESENTS
INTERNET PAST, PRESENT & FUTURE
BY VINTON CERF
THURSDAY, MAY 10TH @ 3:30 PM // COMM ARTS RM. 147
The Internet grew out of a successful US Defense Department experiment in packet switching and became a platform upon which a wide range of new applications have evolved. New technologies such as smart phones have reinforced the utility of the Internet by spreading access to it at increasing bandwidths and geographic scope. The Internet is estimated to have reached about 50% of the world’s population. As this decade comes to a close, what challenges remain and what new ideas may be pursued? Security, safety, reliability, misinformation, botnets, privacy, and a host of other concerns clamor for attention. Powerful machine learning tools and collaborative technologies are increasing our capacity to solve problems and ask new and challenging questions.
This talk raises questions and poses problems that need attention if we are to make of the Internet the tool it has the capacity to become.
Vinton G. Cerf is vice president and Chief Internet Evangelist for Google. He contributes to global policy development and continued spread of the Internet. Widely known as one of the “Fathers of the Internet,” Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. He has served in executive positions at MCI, the Corporation for National Research Initiatives and the Defense Advanced Research Projects Agency and on the faculty of Stanford University.
Vint Cerf served as chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) from 2000-2007 and has been a Visiting Scientist at the Jet Propulsion Laboratory since 1998. Cerf served as founding president of the Internet Society (ISOC) from 1992-1995. Cerf is a Foreign Member of the British Royal Society and Swedish Academy of Engineering, and Fellow of IEEE, ACM, and American Association for the Advancement of Science, the American Academy of Arts and Sciences, the International Engineering Consortium, the Computer History Museum, the British Computer Society, the Worshipful Company of Information Technologists, the Worshipful Company of Stationers and a member of the National Academy of Engineering. He has served as President of the Association for Computing Machinery, chairman of the American Registry for Internet Numbers (ARIN) and completed a term as Chairman of the Visiting Committee on Advanced Technology for the US National Institute of Standards and Technology. President Obama appointed him to the National Science Board in 2012.
Cerf is a recipient of numerous awards and commendations in connection with his work on the Internet, including the US Presidential Medal of Freedom, US National Medal of Technology, the Queen Elizabeth Prize for Engineering, the Prince of Asturias Award, the Tunisian National Medal of Science, the Japan Prize, the Charles Stark Draper award, the ACM Turing Award, Officer of the Legion d’Honneur and 29 honorary degrees. In December 1994, People magazine identified Cerf as one of that year’s “25 Most Intriguing People.”
His personal interests include fine wine, gourmet cooking and science fiction. Cerf and his wife, Sigrid, were married in 1966 and have two sons, David and Bennett.