Tuesday, November 14th, 2017
Global symposium on AI & Inclusion in beautiful Rio de Janeiro
Last week, I had the immense pleasure of participating in the Global AI & Inclusion Symposium at the Museum of Tomorrow in Rio de Janeiro, Brazil. The Global Network of Internet & Society Centers (NoC) invited a wide range of stakeholders to Rio during November 8-10, 2017. Spearheaded and organized by the Berkman Klein Center for Internet and Society at Harvard University and the Institute of Technology and Society in Rio, the symposium brought together researchers, industry, NGOs, and other organizations to discuss issues around inclusion and artificial intelligence (AI).
One of the key aspects of this symposium was the inclusion of perspectives not only from a wide range of areas and disciplines, but also from all regions across the globe. Each region was represented; however, greater inclusion of underrepresented areas was flagged as a priority for future activities, as the discourse still featured a larger number of perspectives from Western backgrounds. For example, although China is one of the key players in AI, only a small number of participants were from China or provided background on AI and inclusion in China.
The symposium was jam-packed with high-caliber talks, discussions, and activities. The symposium program can be found here. Whereas the first day focused on creating a common understanding of AI and inclusion as concepts and frameworks, the second day identified opportunities, challenges, and possible approaches to increasing inclusion in AI, and the third day focused on areas for future research, education, and interface building.
All speakers provided impressive background and knowledge on AI and inclusion to a multidisciplinary and multifaceted audience, which made for a steep learning curve for me as a social scientist with previously little background in the technologies behind AI. However, the design of the symposium talks and activities facilitated a deep understanding of the issues around AI and inclusion for individuals from any disciplinary background.
Key issues in AI and inclusion
One of the key issues that stood out at this symposium is the bias and exclusion built into AI through the way it is created and trained. For example, the machine-learning algorithms at the core of AI are trained on datasets and are only as good as those datasets. If a training dataset, created by humans, is biased, the resulting algorithm will be biased too. This became apparent quickly through a variety of examples, including work from Desabafo Social, a non-profit that promotes social justice and youth participation in Brazil, which showed videos revealing racist bias in the search algorithms of a variety of photo-sharing sites. An impressive example of their enlightening videos can be found here.
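The mechanism behind "the algorithm is only as good as its training data" can be made concrete with a tiny sketch. The groups, labels, and counts below are invented purely for illustration; the "model" is just a toy that learns the most frequent label per group, but it shows how little evidence underlies predictions for an underrepresented group in a skewed dataset:

```python
from collections import Counter, defaultdict

# Toy "training data" of (group, label) pairs. Group "B" is heavily
# underrepresented -- an invented skew for illustration only.
train = [("A", "relevant")] * 90 + [("A", "irrelevant")] * 10 \
      + [("B", "irrelevant")] * 2

def train_majority_classifier(data):
    """Learn the most frequent label seen for each group."""
    counts = defaultdict(Counter)
    for group, label in data:
        counts[group][label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train_majority_classifier(train)
print(model["A"])  # "relevant"   -- supported by 100 training examples
print(model["B"])  # "irrelevant" -- rests on only 2 examples
```

Whatever the model predicts for group B, it is derived from almost no evidence, so any bias or noise in those few examples is reproduced faithfully at scale.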
These issues of bias and exclusion at the creation stage are not limited to race; they affect any underrepresented group. For example, the technology used in airport security prompts security agents to choose whether a person is male or female before they enter the millimeter wave scanner. Based on training datasets of typical male and female bodies, the scanner then decides whether any objects could be hidden on those bodies. However, this AI technology (Automatic Target Recognition, ATR) only differentiates between two genders, meaning that anyone who does not fall into these two categories will be flagged as suspicious and subjected to a secondary hand search.
Another striking takeaway from the conference was the lack of a legal definition of AI and the absence of global standards for AI. For example, the accuracy of AI-based face recognition is very high for white males but low for black females. A good-practice standard, such as a minimum accuracy requirement, does not currently exist, although a number of entities, such as the Mozilla Foundation, are aiming to create such standards in the form of a “fair AI” badge, similar to the fair-trade badge, to remedy these issues.
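Why a minimum accuracy requirement would have to be checked per group, and not just overall, can be shown with a short audit sketch. The groups, accuracy figures, and the 0.90 threshold below are illustrative assumptions, not a real standard or real benchmark results:

```python
# Toy per-group accuracy audit: an aggregate accuracy number can hide
# large disparities between groups. All numbers here are invented.
records = (
    [("white male", True)] * 95 + [("white male", False)] * 5 +
    [("black female", True)] * 65 + [("black female", False)] * 35
)

def accuracy(rows):
    """Fraction of rows predicted correctly."""
    return sum(ok for _, ok in rows) / len(rows)

overall = accuracy(records)
by_group = {
    g: accuracy([r for r in records if r[0] == g])
    for g in {g for g, _ in records}
}

print(round(overall, 2))  # 0.8 -- looks acceptable in aggregate
for group, acc in sorted(by_group.items()):
    bar = "meets bar" if acc >= 0.90 else "fails bar"  # hypothetical 0.90 minimum
    print(group, round(acc, 2), bar)
```

A single 80% headline figure masks a 95% vs. 65% split, which is exactly the kind of disparity a “fair AI” certification would need to surface.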
Another area of concern in AI is privacy and surveillance, as AI relies on copious amounts of data to learn and improve its algorithms. However, users are often unsure of when, where, and how their data are collected, and for which purposes they are used. Although some regulations have been passed to protect users’ privacy, these regulations are not global, and different regions apply different laws. Accordingly, there were calls for, first of all, a global legal definition of AI, which would provide the basis for creating global regulations on inclusion, privacy, and other areas affected by AI. Again, the Mozilla Foundation has made a number of suggestions on “fair AI”, and it provides a “holiday buyer’s guide” to technology that will “snoop” on you, i.e., presents that you should probably not give to your loved ones… unless you’d like them to be snooped on.
Future Event on AI, bias, and inclusion at the Quello Center
Overall, the symposium left me personally with more questions than answers, but I am consoled by the fact that every single participant I spoke with felt invigorated and motivated to do something to advance the cause of increasing inclusion in AI. For one, we all agreed to help make these issues a topic of public conversation; this blog post is only the start. At the Quello Center, I will be organizing a roundtable discussion on artificial intelligence, bias, and social exclusion that will delve deeper into these issues based on the work happening here at MSU. Watch this space for a time and date during the spring semester of 2018.
Monday, September 18th, 2017
We are thrilled to welcome Dr. Laleah Fernandez to our research team at the Quello Center. Laleah joined us in early September as the Quello Postdoctoral Research Fellow and hit the ground running as we finalize contracts for some new and exciting research projects. As an MSU alumna who earned her Ph.D. in Media and Information Studies, her M.A. in Advertising, and her B.A. in Journalism, Laleah is a true Spartan and a great asset to the Center.
With her strong background in policy work and media research, Laleah will play a key role in the Rocket Fiber project on access to the Internet in Detroit, as well as the Google search project. She will also be developing strategies for better connecting the Quello Center with the state policy communities of greatest relevance to our work.
Prior to joining the Quello Center, Laleah was an Assistant Professor in the Department of Information and Computing Science at the University of Wisconsin – Green Bay. Her research interests include network analysis and the role of new and emerging media in community-level and global mobilization efforts. Laleah has published research and reviews in the areas of advertising, economic development, mobilization, and science communication.
We are excited to have her on board, and we look forward to working with her! Welcome, Laleah!
Wednesday, August 23rd, 2017
Dr. Bianca (Bibi) Reisdorf, Quello Assistant Director and Assistant Professor in Media and Information, has been invited to present her research findings on race and digital inequalities at the TPRC Capitol Hill Briefing on Thursday, September 7, 2017. Each year, TPRC (the 45th Research Conference on Communications, Information, and Internet Policy) invites four conference presenters to discuss how their research affects policy at a briefing on Capitol Hill on the day prior to the main conference.
This year’s discussion will be moderated by Dr. Carleen Maitland (Pennsylvania State University), who is also the current chair of TPRC. Speakers include Professor Michelle P. Connolly (Duke University), who will discuss U.S. Spectrum; Dr. Jonathan Cave (University of Warwick), who will present on Privacy and Security; and Professor Philip M. Napoli (Duke University), who will present his work on the First Amendment and Fake News.

Dr. Reisdorf will present findings from her work with Dr. Colin Rhinesmith, who is an Assistant Professor at Simmons College and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. In their paper, titled Race and Digital Inequality: Policy Implications, they combined quantitative analyses of Pew data, American Community Survey data, and FCC Form 477 data with qualitative data from a Benton Foundation study of digital inclusion initiatives in several cities across the US.

The combination of these rich data sources brought forward deeper insights into what is keeping some of the economically hardest-hit communities offline and how policy can help increase digital equity. For example, quantitative analyses of data on Kansas City, MO, and Kansas City, KS, highlighted existing digital inequalities along factors such as race, income, and education, and showed that fewer fixed broadband providers offer their services in poor urban neighborhoods. The qualitative case study of digital inclusion initiatives across these neighborhoods, however, showed that well-designed local digital equity programs have a positive impact in mitigating these inequalities. While federal policies can help provide more infrastructure and service to hard-hit neighborhoods through programs such as Lifeline, local organizations and policymakers can offer context-specific, on-the-ground support that builds on the resources and assets already available in communities to enable meaningful broadband adoption.
The TPRC Capitol Hill Briefing takes place in Room 2075 of the Rayburn House Office Building on Thursday, September 7, 2017, from 3:30 to 5:00 P.M. and is open to the public. Please register at https://www.eventbrite.com/e/telecom-policy-congressional-briefing-2017-tickets-36809648650 if you would like to attend this talk.