In an earlier post I discussed the FCC’s recent decision to open up 150 MHz of spectrum in the 3550-3700 MHz band for unlicensed “General Authorized Access” usage as the lowest-priority usage category in a new three-tier model that includes protections for incumbent government users and provides for “Priority Access Licenses” assigned via auction.
In this post I’m going to briefly review two other spectrum bands that the FCC has recently moved to make more available for unlicensed use. Together these changes have potential to increase unlicensed spectrum capacity and flexibility in terms of network designs and business models.
In March 2014 the Commission adopted a Report and Order modifying the rules governing the operation of Unlicensed National Information Infrastructure (U-NII) devices operating in the 5 GHz band. The goal of the changes was to “significantly increase the utility of the 100 megahertz of spectrum” in the 5.150-5.250 GHz band and “streamline existing rules and equipment authorization procedures for devices throughout the 5 GHz band.”
As the FCC’s March 31 press release explained:
Currently U-NII devices operate in 555 megahertz of spectrum in the 5 GHz band, and are used for Wi-Fi and other high-speed wireless connections…The rules adopted today remove the current restriction on indoor-only use and increase the permissible power which will provide more robust access in the 5.150-5.250 GHz band. This in turn will allow U-NII devices to better integrate with other unlicensed portions of the 5 GHz band to offer faster speeds and reduce congestion at crowded Wi-Fi hot spots such as airports and convention centers.
Broadcast White Space
While the 3.5 GHz and 5 GHz bands can provide substantial amounts of spectrum to augment the overcrowded 2.4 GHz and 900 MHz bands used for Wi-Fi and other unlicensed technologies, the propagation characteristics of these higher-frequency bands translate into limited geographic coverage per base station.
This contrasts with the so-called “White Space” spectrum available for unlicensed use in the sub-700 MHz broadcast band. Often referred to as “prime spectrum real estate,” the broadcast band enjoys relatively strong propagation characteristics. But, at the same time (not surprisingly, given its much-coveted status) it has relatively little free spectrum available for unlicensed use, especially in high-demand metro areas, which are served by relatively large numbers of broadcast stations.
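The coverage difference follows directly from free-space path loss, which grows with frequency. A quick back-of-the-envelope comparison using the idealized Friis model (which ignores walls and terrain but captures the frequency dependence; the specific frequencies are chosen for illustration):

```python
import math

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB (Friis model): an idealized figure
    that ignores obstructions but shows how loss scales with frequency."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# Compare a mid-broadcast-band frequency with the 5 GHz U-NII band
# at the same 1 km distance from the base station.
loss_600mhz = fspl_db(600e6, 1000)
loss_5ghz = fspl_db(5.2e9, 1000)
print(f"600 MHz: {loss_600mhz:.1f} dB, 5.2 GHz: {loss_5ghz:.1f} dB")
print(f"extra loss at 5.2 GHz: {loss_5ghz - loss_600mhz:.1f} dB")
```

The roughly 19 dB gap means a 5 GHz signal arrives nearly a hundred times weaker than a broadcast-band signal over the same open path, which is why the higher bands suit dense small-cell deployments while White Space suits wide-area coverage.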
The advancement of unlicensed White Space has been a slow process, dating back to 2002. In 2008 the FCC finally issued a set of rules for White Space operation in the broadcast band. This was followed by a series of refinements, including the authorization of “TV bands database systems” to support non-interfering White Space usage, starting with the Commission’s first authorization of a database operated by Spectrum Bridge in late 2011.
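Conceptually, a TV bands database answers a simple question: given a device’s location, which channels are free of protected broadcasters nearby? A toy sketch of that lookup (the station list, protection radii, channel range, and flat-earth distance math are all illustrative assumptions, not the real database rules, which use detailed contour models):

```python
import math

# Hypothetical protected stations: (channel, lat, lon, protected radius km).
PROTECTED = [
    (21, 40.71, -74.01, 80.0),
    (27, 40.71, -74.01, 80.0),
    (36, 42.36, -71.06, 80.0),
]
ALL_CHANNELS = set(range(21, 37))  # illustrative channel range

def km_between(lat1, lon1, lat2, lon2):
    # Flat-earth approximation: adequate for a sketch, not for real use.
    dlat = (lat2 - lat1) * 111.0
    dlon = (lon2 - lon1) * 111.0 * math.cos(math.radians(lat1))
    return math.hypot(dlat, dlon)

def available_channels(lat, lon):
    """Channels a White Space device could use at (lat, lon): everything
    except channels whose protected contour covers that point."""
    blocked = {ch for ch, slat, slon, r in PROTECTED
               if km_between(lat, lon, slat, slon) <= r}
    return sorted(ALL_CHANNELS - blocked)

# A device near the first two (hypothetical) stations loses those channels.
print(available_channels(40.75, -73.99))
```

This is why dense metro areas, with many protected stations, yield few free channels while rural areas yield many.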
In early 2012 the White Space saga took another turn, when Congress passed a law authorizing the FCC to conduct spectrum auctions to reclaim parts of the TV spectrum for wireless users. This was followed in May 2014 by FCC rules for conducting an “incentive auction” designed to motivate broadcasters to voluntarily give up their spectrum in exchange for a portion of auction revenues. Given the complexity and sensitivity of this ambitious auction plan, it’s not too surprising that its scheduled date has been pushed back twice and is now planned for early 2016.
The planned incentive auction and related “repacking” of the broadcast band raised the prospect of a reduction in spectrum available for unlicensed White Space devices (WSD). In response to this, the FCC’s incentive auction rules revisited the Commission’s earlier plan for allocating spectrum to unlicensed use. Though less spectrum will now be available for this purpose, the new plan is designed to ensure that at least three or four 6 MHz channels are available on a nationwide basis, with significantly more spectrum likely to be available in some smaller and more rural markets, where existing high-speed connectivity options are particularly scarce.
This focus on providing a minimum amount of bandwidth nationwide reflects the view expressed by multiple commenters (e.g., see Reply Comments from the Open Technology Institute and Public Knowledge) that “the emergence of a mass market for unlicensed chips, devices and services” based on the 802.11af (“White-Fi” or “Super Wi-Fi”) standard will require the nationwide availability of at least three 6 MHz channels.
The May Incentive Auction Report and Order was followed in late September by a Notice of Proposed Rulemaking revising the Commission’s Part 15 rules governing unlicensed use. These rules loosened some restrictions on power levels and guard band requirements, a change welcomed by White Space advocates, but not by licensed users in adjacent spectrum.
Taken together, the Commission’s actions to significantly expand the amount of unlicensed spectrum in frequency bands with diverse propagation characteristics should provide important technical capabilities to support the new generations of unlicensed providers and services discussed in this series of blog posts.
While the 3.5 GHz and 5 GHz bands will provide a substantial amount of new capacity for small-cell deployments, the more limited amount of White Space spectrum will support larger cells, reach longer distances, and provide much-improved in-building penetration by outdoor base stations. And, when combined, this mix of spectrum options should enable unlicensed service providers to architect next-generation networks that cost-effectively deliver significantly faster speeds and more extensive and reliable coverage.
In a series of posts over the past two months I’ve discussed a range of initiatives aimed at using unlicensed spectrum to support the growing demand for wireless connectivity. To put these efforts in a forward-looking context, it’s helpful to get a sense of what changes are in the works in terms of expanding the amount of spectrum available for unlicensed use.
In this post I’m going to focus on the most recent development on this front: the FCC’s April 17 decision to make 150 MHz of spectrum (3550-3700 MHz) available for new licensed and unlicensed commercial use, while retaining protections for existing military and other incumbent users of this spectrum.
First, we are leveraging advances in computing technology to rely on an innovative Spectrum Access System to automatically coordinate access to the band. It’s the traditional frequency coordination role, but modernized using advanced technologies to maximize efficiency.
Second, we are using auctions to grant exclusionary interference protections only when the spectrum is actually scarce. Under our rules, anyone with a certified device can use the spectrum, sharing it with others. In areas where the spectrum is scarce, users can participate in an auction to seek a license to gain priority access to the band.
Third, in cooperation with our federal partners, we are creating a new way to share spectrum with federal users. By leveraging the Spectrum Access System and technologies to monitor and sense when a federal user is present, we can move toward true dynamic sharing of the band between federal and non-federal users.
As a reflection of this cooperation, the Commission, working with NTIA and Department of Defense spectrum users, reduced the latter’s coastal protection zones by roughly 77%.
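The three-tier priority logic at the core of this model can be sketched in a few lines. This is my own illustration of the concept, not the actual SAS interface; the channel names and grant API are invented:

```python
from dataclasses import dataclass, field

# Tier priorities, highest first, mirroring the Order's three-tier model.
TIERS = {"incumbent": 0, "pal": 1, "gaa": 2}

@dataclass
class SpectrumAccessSystem:
    """Toy SAS: tracks which tier occupies each channel and grants a
    request only if no higher- or equal-priority user holds it."""
    channels: dict = field(default_factory=dict)  # channel -> holding tier

    def request(self, channel: str, tier: str) -> bool:
        holder = self.channels.get(channel)
        if holder is None or TIERS[tier] < TIERS[holder]:
            # Channel free, or requester outranks the current holder
            # (e.g., an incumbent preempting a GAA user).
            self.channels[channel] = tier
            return True
        return False

sas = SpectrumAccessSystem()
print(sas.request("3560-3570", "gaa"))        # channel free: granted
print(sas.request("3560-3570", "pal"))        # PAL preempts GAA
print(sas.request("3560-3570", "gaa"))        # GAA must defer to PAL
print(sas.request("3560-3570", "incumbent"))  # incumbents preempt everyone
```

The “use it or share it” rule Commissioner Clyburn describes corresponds to the first branch: when no PAL holder occupies a channel, GAA requests are granted rather than locked out.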
In her statement, Commissioner Mignon Clyburn pointed to a “paradigm shift [in] the [FCC’s] move away from highly fragmented long term exclusive use licenses to shorter term Priority Access Licenses [PAL] with a rule to use it or share it with General Authorized Access users.”
These new regulatory approaches should create enough certainty to fuel investment in equipment for the 3.5 GHz band, while the new PALs will carry lower administrative costs and allow for micro-targeted network deployments. Service providers will have flexibility in designing networks to address the unique challenges posed by rural and other areas. And by using a Spectrum Access System database to dynamically assign frequencies in the band to both PAL licensees and GAA users, spectrum can be used more efficiently in heavily populated areas.
Though the ruling was approved in part and concurred in part by the agency’s two Republican Commissioners, their statements described it not as a “paradigm shift” but rather as an “experiment” that may or may not succeed, and could have been improved in several respects.
For example, Commissioner Michael O’Rielly appeared to disagree with Clyburn about whether the new rules provided enough clarity and incentives for potential PAL licensees to invest:
I am concerned that some rules may hinder development of the Priority Access Licenses, known as PALs. I question whether auctioning PALs for three year terms with no renewal expectancy will create a meaningful incentive to entice auction participants. Similarly, while I thank the Chairman for agreeing to changes that facilitate PALs in areas where there is more than one auction bidder, I had hoped our rules would include a mechanism whereby any entity could receive a PAL even if mutually exclusive applications, which are necessary to trigger an auction, are not filed in a particular census tract. The Commission ought to encourage a diverse array of business models. Many entrepreneurs, even those living in rural communities, have told me of their strong preference for PALs, which they explain would ensure better reliability and quality of service. Our rules must not foreclose these prospective licensees from obtaining PALs just because they are the only one in a given census tract wanting priority access. We need to fix this in the near term.
And, according to Commissioner Ajit Pai:
This Order leaves many important details and complex questions to be resolved, including whether technologies will develop that can manage the complicated and dynamic interference scenarios that will result from our approach. It therefore remains to be seen whether we can turn today’s spectrum theory into a working reality. Moreover, exclusion zones still cover about 40% of the U.S. population, and we leave the door open for the introduction of new federal uses across the country, neither of which is ideal.
Regardless of which description—“paradigm shift” or “experiment”—is most apt, the Commission’s new approach to the 3.5 GHz band strikes me as a worthy effort to move beyond the longstanding spectrum management status-quo, and creatively use technology to explore new models that enable both licensed and unlicensed users to deliver more value from existing spectrum. And, even if some aspects of this new model do prove problematic, it should at least provide valuable lessons to inform future efforts to craft spectrum policy appropriate for the 21st century.
And some aspects of the 3.5 GHz rules remain subject to further refinement, pursuant to a Further Notice of Proposed Rulemaking also issued by the Commission.
In a series of posts over the past two months, I’ve looked at efforts by private companies and city governments to use unlicensed spectrum to improve choice, affordability, innovation and service quality in the communications sector.
In this post I’ll add another type of entity to the mix of unlicensed spectrum innovators: local neighborhoods, where issues, interactions and initiatives tend to be more personal and place-based.
One focal point for this kind of neighborhood-driven network initiative is Detroit, a city facing severe financial constraints and one of the nation’s lowest levels of Internet penetration (see tables in this earlier post). In this highly challenging environment, a community-based organization called Detroit Digital Stewards, working closely with the Open Technology Institute (OTI), has been developing the human and technical systems needed to run low-cost wireless mesh networks serving local needs. The open-source OTI technology, called Commotion, has also been used in Red Hook, NY following Hurricane Sandy and in projects overseas.
According to an April 2014 report in the New York Times, the State Department has provided financial support for Commotion’s development, as a means to help dissidents use decentralized mesh networks to bypass government surveillance and censorship (ironically this occurred around the same time the NSA was developing surveillance technology later exposed by Edward Snowden).
The State Department provided $2.8 million to a team of American hackers, community activists and software geeks to develop the system, called a mesh network, as a way for dissidents abroad to communicate more freely and securely than they can on the open Internet.
I recently had the pleasure of speaking with Diana Nucera, Director of the Detroit Community Technology Project (DCTP), which coordinates the Digital Stewards project. That conversation helped me appreciate that, while it may lack the scale (and certainly the funding) of New York’s LinkNYC project or Google’s Project Fi, the Digital Stewards program (one of multiple Allied Media Projects) has some unique strengths worthy of study, support and sharing. For example:
My sense is that there’s much to learn from the work of the Detroit Digital Stewards team, OTI and Commotion projects in other locations, as they break new ground in bringing affordable and empowering connectivity to underserved communities.
Yesterday Google officially announced Project Fi, its much anticipated wireless service, which I’ve previously blogged and tweeted about during its pre-announcement rumor/leak phase. Now that more details, including pricing, are available directly from Google, an updated post seems in order, especially following recent posts about newly launched municipal Wi-Fi services in NYC and Boston (later in this post I’ll consider how these two developments may be related and synergistic).
As expected, Google’s wireless service will route user traffic over a mix of Wi-Fi connections and, via MVNO arrangements with Sprint and T-Mobile, the two carriers’ cellular networks. This “three network” approach alone makes the service pretty unique. In a blog post yesterday, VP of Communications Products Nick Fox explains:
As you go about your day, Project Fi automatically connects you to more than a million free, open Wi-Fi hotspots we’ve verified as fast and reliable. Once you’re connected, we help secure your data through encryption. When you’re not on Wi-Fi, we move you between whichever of our partner networks is delivering the fastest speed, so you get 4G LTE in more places…If you leave an area of Wi-Fi coverage, your call will seamlessly transition from Wi-Fi to cell networks so your conversation doesn’t skip a beat.
The Project Fi FAQ page explains further that its “software is optimized to not put extra strain on your battery by only moving you between networks when absolutely necessary.”
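That “only when absolutely necessary” behavior is essentially hysteresis: stay on the current network unless a better one clears a margin. A minimal sketch, with network names and the switching margin invented for illustration (Google has not published Fi’s actual selection logic):

```python
def pick_network(current: str, speeds_mbps: dict,
                 switch_margin: float = 5.0) -> str:
    """Stay on the current network unless another beats it by a clear
    margin, avoiding battery-draining back-and-forth switching.
    The 5 Mbps margin is a made-up illustrative value."""
    best = max(speeds_mbps, key=speeds_mbps.get)
    if best != current and \
            speeds_mbps[best] > speeds_mbps.get(current, 0.0) + switch_margin:
        return best
    return current

# Small gain: stay put to spare the battery.
print(pick_network("wifi", {"wifi": 20.0, "sprint": 22.0, "tmobile": 18.0}))
# Large gain: switch to the faster partner network.
print(pick_network("wifi", {"wifi": 2.0, "sprint": 25.0, "tmobile": 18.0}))
```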
Also as expected, Project Fi’s multiple-network functionality will initially only be available on Nexus 6 smartphones, via a special SIM card. The device costs $649-$699, depending on its storage capacity, with a no-interest, no-fee option to pay for it over 24 months (an approach similar to what most cellular carriers now offer to their customers). Under this plan, the monthly cost for the device is $27.04-$29.12.
Though the multi-network functionality will initially only be available on the Nexus 6, Google aims to make it easier to switch not only between networks, but also between devices. As its Project Fi announcement explains:
Talk, text, and check voicemail with the screen nearest you. Your phone number now works with more than just your phone. Connect any device that supports Google Hangouts (Android, iOS, Windows, Mac, or Chromebook) to your number. Then, talk and text with anyone—it doesn’t matter what device they’re using.
While much of the above was previously confirmed via rumors and leaks, I hadn’t seen anything about Project Fi pricing until Google officially announced the service, which has a pretty simple fee structure.
For a base price of $20 per month, Project Fi customers receive the following:
The above graphic from Ars Technica illustrates that, although Google’s pricing becomes less competitive as the monthly data allowance increases, it is the least expensive service at the 1 GB tier.
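To make that crossover concrete: Fi’s announced launch pricing was the $20 base plus $10 per GB of data used, with unused prepaid data credited back. Comparing it against a hypothetical $50 flat-rate plan (a made-up figure purely for illustration) shows why Fi wins at low usage but not at high:

```python
def fi_monthly_cost(gb_used: float) -> float:
    """Project Fi launch pricing as announced: $20/month base plus
    $10 per GB actually used (unused prepaid data is credited back,
    so cost tracks real usage)."""
    return 20.0 + 10.0 * gb_used

# Hypothetical competing flat-rate plan; the $50 figure is invented
# for comparison only.
FLAT_PLAN_PRICE = 50.0

for gb in (1, 2, 3, 4):
    fi = fi_monthly_cost(gb)
    winner = "Fi cheaper" if fi < FLAT_PLAN_PRICE else "flat plan wins or ties"
    print(f"{gb} GB: Fi ${fi:.0f} vs flat ${FLAT_PLAN_PRICE:.0f} -> {winner}")
```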
In my last post I described Boston’s recently-launched Wicked Free Wi-Fi as a new generation of municipal wireless networks likely to be more successful than the first generation of projects launched a decade earlier.
Another member of this new generation is LinkNYC, a recently announced Wi-Fi network that will be deployed in New York City starting later this year. While they have some things in common (i.e., a focus on free outdoor nomadic service), the NYC project is, in key respects, different, more ambitious and perhaps more controversial than Boston’s Wicked Free.
As Matthew Flamm put it in the lead paragraph of a piece in Crain’s New York Business:
Gigabit Internet is finally coming to New York City, and from the unlikeliest of sources: the city’s pay-phone network. And even more remarkable, the service—and all U.S. phone calls on these new Wi-Fi kiosks—will be free.
[Note: as explained below, the gigabit speed is per kiosk, and would be shared by all users accessing it at any given time.]
The project’s media kit explains the kiosks, known as “Links” this way:
Links are iconically designed connection points that house state-of-the-art wireless technology, interactive systems and digital advertising displays, which will offer 24/7 free Internet access at up to gigabit speeds…as well as a range of other services including free phone calls to anywhere in the U.S., a touchscreen tablet interface to access City services, wayfinding, 911 and 311 calls, free charging stations and digital displays for advertising and public service announcements.
That’s a lot of free services in exchange for a new layer of high-tech display advertising (see example above) in a city that’s already pretty saturated with display ads.
And, instead of costing NYC taxpayers money, the project aims to generate revenue for the city. According to the media kit, the project “will be funded through advertising revenues, will be built at no cost to taxpayers and will generate more than $500 million in revenue for the City over the next 12 years.” And, according to Flamm, “[t]he contract…guarantees payment of $20 million in advertising revenue to the city in the first year of operation.”
The project is being undertaken by a for-profit consortium of four companies. As Kif Leswing explains at GigaOm:
CityBridge is a partnership between four companies: Titan, the New York display advertising giant; Comark, which will be fabricating the actual kiosks; Control Group, which is providing most of the strategy for the concern; and chipmaker Qualcomm. They all own about a quarter of the partnership, which entered into a 12-year, $200 million contract with New York City to build and administer Links.
Transit Wireless, currently providing wireless technology for NYC’s underground subway stations through a partnership with the Metropolitan Transportation Authority (MTA), and Antenna Design, which specializes in people-centered industrial design, will also be involved in the LinkNYC project. The former will support the network’s fiber infrastructure, while the latter will design the Link kiosks.
As to the timetable, Flamm reports that:
[I]t won’t be until late 2015 before the first 500-plus units are installed, according to Stanley Shor, assistant commissioner at the Department of Information Technology and Telecommunications, which administers the franchise. CityBridge has four years to complete installation of the first 4,000 structures. The RFP called for eventual construction of 10,000.
Each Link will supply a Wi-Fi network within a 150-foot radius…[and] must be capable of supporting “up to 256 devices with a total aggregate throughput of 1Gbps” and “simultaneous dual spectrum 2.4 GHz 802.11 b/g/n, and 5GHz a/n/ac services.” So if you’re the only one connected to a Link, you might be able to pull down gigabit speeds.
Plus, there’s a requirement…for CityBridge to upgrade its Link design every four years…to stave off obsolescence…[and] there’s a pilot program planned in the Bronx for a partially solar-powered Link that might be incorporated into the next design.
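The gigabit-per-kiosk caveat is worth quantifying. If a Link’s 1 Gbps aggregate were split evenly among connected users (real Wi-Fi contention would reduce per-user throughput further), the numbers fall off quickly:

```python
def per_user_mbps(aggregate_gbps: float = 1.0, users: int = 1) -> float:
    """Back-of-the-envelope even split of a Link's advertised 1 Gbps
    aggregate; actual Wi-Fi contention overhead would lower this."""
    return aggregate_gbps * 1000.0 / users

print(per_user_mbps(users=1))    # a lone user could see ~gigabit speeds
print(per_user_mbps(users=256))  # ~3.9 Mbps at the 256-device maximum
```

Even at the 256-device limit, though, roughly 4 Mbps per user is serviceable for the nomadic browsing and messaging the kiosks target.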
Advertising is at the heart of the project’s revenue model and economic viability. As Leswing notes, “CityBridge won’t be able to introduce a new premium tier of service later,…so it will have to make its money through advertising.”
“Major brands will flock to advertise on the LinkNYC network because the structures look beautiful,” Dave Etherington, chief strategy officer at Titan said. “This also means they could customize their message from Link to Link.”
Lots of endorsements, but also some concerns
In my last post I briefly reviewed the less-than-stellar history of municipal Wi-Fi networks deployed roughly a decade ago. As I noted in that post, these projects employed earlier generations of technology and often poorly conceived “public-private partnerships.” And, importantly, they were launched well before the combination of smartphones/tablets and data-capped LTE/4G mobile services had turbocharged demand for nomadic Wi-Fi connectivity.
In this post I’m going to focus on an example of what I consider a new generation of municipal Wi-Fi networks, Boston’s Wicked Free Wi-Fi service, which the city formally launched in April.
As reported by Michael Farrell in the Boston Globe:
Boston has switched on a free public Wi-Fi network for about 30,000 residents living in the Grove Hall neighborhood as the first step to blanketing much of the city with wireless Internet service. Dubbed Wicked Free Wi-Fi, the network of outdoor Wi-Fi hotspots will provide Internet coverage over an area of about 1.5 square miles.
In contrast to the first generation of municipal Wi-Fi projects, Boston’s network is focused on the fast-growing demand for nomadic high-speed connectivity rather than the more mature and more technically and financially challenging market for in-home access:
The city stresses that the new Wi-Fi network isn’t designed to be used inside homes as a replacement for wired services that residents can buy from commercial providers. Rather it will work best for mobile users, and is largely intended to be accessed outdoors or in the restaurants and cafes around the neighborhood… making it easier for users with smartphones and tablets to access the Internet on the go.
While “first generation” projects tended to underestimate the technical and economic challenges they faced in targeting the in-home market, the Boston project sets a more realistic yet socially valuable goal of expanding mainly-outdoor coverage. As Farrell explains:
Boston had Wi-Fi hotspots scattered around the city before this rollout — about 70 access points in a few tourist districts or at municipal properties. However, those hotspots reach just a short distance, and do not have the kind of wide-area blanket coverage that the Grove Hall network should provide.
And while first generation projects often looked to a private company like Earthlink to bear the financial risk and deployment costs in exchange for a large measure of control and future profit potential, Boston is taking a different approach. For backhaul—a key component of network cost and performance—it is using its existing fiber network. And, with help from HUD community development grants, it appears to be funding the project itself rather than looking to a private service provider to bear the financial risk. As the Globe article explains:
The Grove Hall project was born during [former mayor Thomas] Menino’s administration, which was awarded a $20.5 million federal grant from the Department of Housing and Urban Development in 2011 that set aside money for redevelopment projects in Dorchester. About $300,000 was used for the Grove Hall build-out… Grove Hall was selected as the launch site because of its large number of low-income families who may not be able to afford the high cost of speedy broadband service.
According to an April 10, 2015 press release issued by current mayor Marty Walsh’s office, the Wi-Fi project is “using resources from the City and its partners, as well as [HUD’s] Choice Neighborhoods program.”
“HUD is very excited about Boston’s innovative use of Choice Neighborhood funding for the Grove Hall Wi-Fi project,” said Barbara Fields, HUD New England Regional Administrator. “This project is opening the door to opportunity for Boston residents, in particular students, and we are proud to be a part of this ‘out of the box’ thinking that is improving lives.”
In terms of the network’s future expansion, Farrell reports that:
[O]ver the next two years, the Walsh administration plans to extend the Wi-Fi network to all 20 commercial districts that are part of the city’s Main Streets neighborhoods program. Those areas are eligible for federal grant money to fund community development projects.
According to the city’s press release, when it was issued the Wicked Free network was attracting 14,559 visitors per month, including 79% repeat visitors. More information on the project, including an interactive network map, is available here, along with an invitation to “[d]ownload the Citizens Connect app to alert the City of neighborhood issues such as potholes, damaged signs, and graffiti.”
More than just a physical network
The Wicked Free web site’s invitation to download the Citizens Connect app is a reminder that, when considering current-generation municipal network projects, it’s important to keep in mind that they are increasingly viewed as part of a broader strategy aimed at creating what authors Stephen Goldsmith and Susan Crawford refer to as “The Responsive City.”
According to its web site, Boston’s Mayor’s Office of New Urban Mechanics (MONUM):
[P]ilot[s] experiments that offer the potential to significantly improve the quality of City services…[and] focuses on four major issue areas: Education, Engagement, the Streetscape and Economic Development. To design, conduct and evaluate pilot projects in these areas, MONUM builds partnerships between constituents, academics, entrepreneurs, non-profits and City staff.
Below are some links with additional information about Boston’s effort to use technology (including the Wicked Free Wi-Fi network) to become a Responsive City:
So far, this series of blog posts has focused on what private companies are doing in the unlicensed spectrum space—including startups, cable operators, Google and mobile carriers. In the next few posts I’ll consider what cities and local neighborhoods have done, are doing, and are planning to do with unlicensed spectrum.
As a first step in considering current and future “community WiFi” projects, it’s worth taking a look back at an earlier wave of municipal WiFi networks.
These date back roughly a decade, with one of the most ambitious early projects, in Philadelphia, announced in April 2005, more than two years before the first iPhone shipped. The Philly project was one of multiple urban deployments involving Earthlink, which at that time was grappling with the transition from dial-up to broadband as the dominant form of Internet connectivity. Frustrated by its limited and not-very-profitable access to cable and DSL networks (which weren’t subject to the same network-sharing requirements that applied to dial-up service), Earthlink viewed these muni-WiFi projects as a way to offer Internet access independent of wholesale arrangements with cable and telco network operators, which were not only its main competitors but increasingly refused to offer Earthlink and other dial-up ISPs access to their networks.
The problem with this effort to use WiFi as a competitive “bypass” technology was that the public-private partnership model embraced by Earthlink, its city partners and similar ventures, was flawed in multiple respects. Though some projects were relatively successful (the largest one probably being the Minneapolis network operated by USI Wireless) Earthlink eventually abandoned its WiFi ambitions after launching several high-profile projects in major U.S. cities. Other players, including MetroFi, which launched several networks in the Bay Area and in Portland, also exited the business around the same time.
Among the problems faced by these early network deployments was that they attempted to offer a service that could compete with wireline broadband services, at least at the low end of the market. But the inability of WiFi signals to reliably penetrate walls made in-home service a serious challenge, especially for the earlier generations of equipment used in these networks.
Another issue was that, ten years ago (two years before the iPhone’s June 2007 debut), there was nothing comparable to today’s nearly insatiable demand for nomadic (but not necessarily ubiquitous) outdoor Internet connectivity. Yes, there were plenty of laptops being lugged around and used in coffee houses, but today’s nearly universal presence and intensive use of high-performance WiFi-capable smartphones and tablets was, at that time, nothing more than a twinkle in Steve Jobs’ visionary eye.
As discussed in prior posts, today’s dramatically increased demand for nomadic Internet connectivity is spurring a range of efforts by private service providers, including startups, cable operators, Google and cellular providers (as well as restaurants, cafes and other venues providing WiFi hot spots as a customer amenity) to satisfy that demand.
At the same time, some municipalities and neighborhood groups are discovering unmet needs and exploring ways to address them. These efforts will be the focus of the next few posts in this series.
In a post yesterday I discussed the disruptive potential of Google’s Project Nova. Having just discovered an article by Christopher Williams published last weekend in the UK’s Telegraph, I thought I should add an update on international aspects of Nova’s ambitions and potential impacts.
Williams reports that, according to industry sources, “Google is in talks towards a deal with Hutchison Whampoa, the owner of the mobile operator Three.” He also notes that “Google and Three declined to comment.”
The two giants are discussing a wholesale access agreement that would become an important part of Google’s planned attempt to shake up the US mobile market with its own network. It is understood that Google aims to create a global network that will cost the same to use for calls, texts and data no matter where a customer is located. By linking up with Hutchison, it could gain wholesale access to mobile service in the UK, Ireland, Italy and several more countries where the Hong Kong conglomerate owns mobile networks. Sources said Hutchison was a natural partner for Google in the plan, because it has also sought to eliminate roaming charges for Three customers.
According to CNET:
Hutchison Whampoa would be a potentially powerful global partner to help Google cut roaming fees. It operates the UK’s Three network and is trying to acquire the UK’s O2 network from Telefonica. It also operates networks in Hong Kong, Macau, Indonesia, Vietnam, Sri Lanka, Italy, Sweden, Denmark, Austria and Ireland.
Though Google has not revealed much in the way of details, the Internet search giant is expected to launch a WiFi/MVNO wireless service sometime in the near future. Based on limited comments from company executives and reports in the Wall Street Journal (see here and here; subscription may be required) and elsewhere, it seems that the service will:
Speaking at the Mobile World Congress, held March 2-5 in Barcelona, Spain, Sundar Pichai, Google’s senior VP of products, appeared to downplay the scope and disruptive impact of the MVNO service, known internally as Project Nova.
As reported by TechCrunch, Pichai said “We don’t intend to be a network operator at scale. We are actually working with carriers.” And according to Wired, Pichai also pointed out that “[c]arriers in the US are what powers most of our Android phones [and]…[t]hat model works really well for us.”
Google may, in fact, have limited ambitions for Project Nova. But I suspect the cautious nature of Pichai’s comments reflects a desire to avoid prematurely upsetting the industry’s dominant carriers—whose customers purchase huge numbers of Android devices—more than it does a lack of Google-scale ambition for Project Nova. And the fact that Google won’t be operating its own network and will be “working with carrier partners” doesn’t mean Nova doesn’t have potential to seriously disrupt the mobile industry’s status quo.
I expect Google to approach Nova the way it approaches most new product introductions: start small in “beta” mode, then adapt to market developments. Sometimes this leads to products being killed, revised or merged with others (see a partial list here), while at other times it leads to aggressive expansion, as was the case with the Android operating system.
While there are parallels between Project Nova and Google Fiber, the company’s investment in local fiber optic networks, there are also important differences that could translate into much faster growth potential for Nova. The key difference is that Project Nova doesn’t require Google to undertake the time-, labor- and dollar-intensive task of building fiber networks city by city and block by block. As a result, while Google Fiber is intended to be a profitable business and is gradually expanding to more cities, Project Nova could allow Google to move very quickly and relatively inexpensively to deploy a nationwide service using other companies’ physical networks.
Having read a mix of pre-launch speculation available online (see excerpts below), I’m inclined to believe that:
Below are excerpts from online commentary that have helped inform this point of view. As always, comments are welcome, especially from those who see things differently.
After writing two posts on potential carrier use of LTE technology in unlicensed spectrum (see here and here), I came across some information that helps clarify the functionality of and relationship between LTE Unlicensed (LTE-U) and License-Assisted Access (LAA). In those posts I referred to these as if they were different names for the same technology. A more accurate statement would be that:
1) LTE-U is an earlier iteration of “LTE in unlicensed spectrum” technology that conforms to the less stringent spectrum sharing requirements of countries like the U.S., South Korea, China and India;
2) LAA will provide a standardized technology that goes further than LTE-U by satisfying the more demanding spectrum-sharing requirements in other markets, including Europe and Japan.
The clarification comes courtesy of the Qualcomm web site, which provides a summary of LTE-U here, and of LAA here. As noted in an earlier post, Qualcomm is a leading advocate of carrier deployment of LTE in unlicensed spectrum, having introduced the idea in late 2013. Selected excerpts from both descriptions are below.
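The stricter European and Japanese requirements that LAA targets center on “listen-before-talk”: sensing the channel and deferring when it is busy, which is what lets LTE coexist politely with Wi-Fi. A minimal sketch of that clear-channel-assessment step (the energy threshold and backoff count are illustrative assumptions, not values from the actual regulations):

```python
def listen_before_talk(channel_energy_dbm: float,
                       threshold_dbm: float = -72.0) -> bool:
    """Clear channel assessment: transmit only if measured energy on
    the channel is below a threshold. The -72 dBm figure is purely
    illustrative; real thresholds come from the applicable rules."""
    return channel_energy_dbm < threshold_dbm

def try_transmit(sense_channel, max_attempts: int = 5) -> bool:
    """Sense the channel, retrying a bounded number of times if busy;
    give up (defer to other users) if it stays occupied."""
    for _ in range(max_attempts):
        if listen_before_talk(sense_channel()):
            return True  # channel judged idle: safe to transmit
    return False  # persistently busy: defer, sharing the band

# Quiet channel: transmit. Busy channel (e.g., a nearby Wi-Fi AP): defer.
print(try_transmit(lambda: -90.0))
print(try_transmit(lambda: -40.0))
```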