Will mobile phones transform the lives of the world’s poor? Pick up a recent issue of The Economist and it seems like a fait accompli. With over 90% of the developing world now receiving coverage, and the advent of innovative applications like telemedicine, ICT-based agricultural extensions and Mobile Money (a form of phone-based electronic currency akin to PayPal), mobile phones have the potential to provide a game-changing platform for international development.

Yet for all the hype, we know little about whether and how mobile phones will have a lasting impact. Countries with high rates of mobile phone penetration tend to be better off in a number of ways, but figuring out what caused what is no simple matter.

This is where “Big Data” can make a difference.

In a recent paper I wrote with Nathan Eagle and Marcel Fafchamps, we found patterns in ten terabytes of Rwandan mobile phone use data that help illuminate the role of mobile phones in Rwanda’s economy. Focusing on the micro-level behavior of individuals captured in the data enabled us to avoid many of the common pitfalls of making causal inferences based on trends in macroeconomic indicators.

We first noticed that after major shocks like earthquakes and other natural disasters, individuals in Rwanda sent a significant volume of “Mobile Money” to the people affected.  In the video below, you can see how dramatically the Lake Kivu earthquake affected patterns of mobile phone traffic in the country:
Using econometric models, we quantified this response. Following the 2008 earthquake, a total of $85 was sent to the earthquake victims over the mobile phone network. Because of increased adoption over the past few years, we estimate that roughly $30,000 would be sent in response to an earthquake today. In a small country where most people earn only a few dollars a day, this is a sizable amount of money.

Perhaps more importantly, when we analyze the dynamics of the social network captured in the billions of phone calls and transfers, we find that the pattern of activity is most consistent with a model of risk sharing—i.e., John helps Jane when Jane is in trouble because he expects Jane to reciprocate in John’s time of need. In places like Rwanda, where banks are rare and people tend to have limited savings, such risk sharing makes a major difference in insuring individuals against income volatility and protecting against the “poverty traps” that prevent people from moving out of poverty.

This is good news for advocates of expanding mobile networks and services in developing countries. To the extent that Mobile Money facilitates risk sharing, our results indicate that the technology can be a positive force for economic development.

However, not everything is positive. We observe that it is chiefly wealthy mobile phone owners, and those with the strongest social networks, who receive the lion’s share of the risk-sharing transfers. Taken together, these results indicate that the mobile network can improve the welfare of some but, absent intervention, the benefits may not reach those with the greatest need.

Josh Blumenstock is a PhD candidate at UC Berkeley’s School of Information.

A much talked about innovation in public policy recently has been the push to achieve greater transparency and accountability through open government strategies, where the public has access to government information and can participate in co-producing public services. At the Transparency Policy Project we have been investigating the dynamics behind one of the most successful implementations of open government: the disclosure of data by transit agencies in the United States. In just a few years, a rich community has developed around this data, with visionary champions for disclosure inside transit agencies collaborating with eager software developers to deliver multiple ways for riders to access real-time information about transit.

Transit agencies have long used intelligent systems for scheduling and monitoring the location of their vehicles. However, this real-time information had previously been available only to engineers inside agencies, leaving riders with printed timetables and maps that, at best, represent the stated intentions of a complex system that can be disturbed by traffic, weather, personnel issues and even riders themselves.

Recognizing the need for access to this information on-the-go and in digital format, Bibiana McHugh of Portland’s TriMet agency worked with Google in 2006 to integrate timetable data into Google Maps, eventually becoming Google Transit. McHugh went further, publicly releasing TriMet’s operations data: first the static timetables, and eventually real-time, dynamic data feeds of vehicle locations and arrival predictions. Local programmers have responded with great ingenuity, building 44 different consumer-facing applications for the TriMet system, at no cost to the agency.

Other transit agencies have adopted this open data approach with varying outcomes. The most successful agencies work closely with local programmers to understand which data is in demand, and to troubleshoot and improve the quality of their data feeds. Programmers also link end users to transit agencies by filtering up comments from app users. This iterative feedback loop relies on a champion within the agency to build strong relationships with the local developer community. Of the five transit agencies we studied, Portland’s TriMet and Boston’s MBTA exemplify this approach and have generated the highest ratio of apps per transit rider (see table). Meanwhile, Washington DC’s WMATA, the agency most reluctant to adopt open data, had only 11 apps serving its customers in 2011.

Table: Transit Apps and Ridership by City

The number of apps built by independent developers is important, indicating the variety of options riders have in selecting which interfaces (mobile, desktop, map-based, text, audio) and platforms best fit their needs to access transit information. As we learned from our research on what makes transparency effective, simply providing information is not enough. Format and content matter, and should address the needs of a targeted audience. What we have seen in our study of transit transparency is that local programmers have been the critical intermediaries, taking raw data and generating a variety of information tools that transit agencies could not have imagined on their own. For other open government initiatives to spark this level of innovation and public benefit, they must identify their audience of information intermediaries and foster those relationships.

Posted by Francisca Rojas, research director at the Harvard Kennedy School’s Transparency Policy Project



Over the past year and a half Google has invested significant resources in studying the impact of the Internet on economies around the world, the highlights of which are illustrated at www.valueoftheweb.com.

But we didn't have any insight into the Internet's impact on less developed economies. For example, how does the Internet’s impact on the economy in Turkey or Mexico compare with its impact in France? McKinsey & Company recently analyzed the economic impact of the Internet in 30 “aspiring” countries with both the scale and dynamism to be significant global players in the near future.

Roughly half the world’s Internet users are in aspiring countries according to McKinsey. But 64% of the people in these countries aren’t even online yet! That means there is enormous opportunity to increase Internet penetration in these markets, and in fact, penetration has been growing at 25% per year for the past five years. This growth potential is not limited to users, either—143,000 Internet-related businesses are created every year in these countries.
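A quick back-of-the-envelope check (a sketch of the arithmetic, not McKinsey’s own calculation) shows what five years of 25% annual growth adds up to: penetration roughly triples.

```python
# 25% annual growth sustained for five years: penetration multiplies
# by 1.25^5, i.e. roughly 3x the starting level.
growth = 1.25 ** 5
print(f"{growth:.2f}x")  # prints 3.05x
```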

What should policymakers in these countries do to support the growth of the Internet economy? First and foremost, focus on getting small businesses online. 1.9 million jobs are already associated with the Internet in aspiring countries, but it’s astounding how quickly job growth occurs in comparison to more developed markets. Whereas in European countries we see 2.4 jobs created per job lost, this statistic jumps to 3.2 jobs created per job lost in the SME sector in aspiring countries. In a survey of SMEs across eight aspiring countries, McKinsey found that those spending the most on Web technologies have grown nine times as fast as those spending the least over the past three years.

A few minor policy changes would make it easier to get SMEs online in aspiring countries. First, governments should take steps to reduce the cost of doing business online. It costs $143 to register a domain in Malaysia, compared to $24 in the United States. In Nigeria, it takes 31 days to start a business, compared with seven in Egypt. Making it more affordable for small businesses and entrepreneurs to get started online quickly will have an immediate impact. Second, we need to encourage the development of human capital and improve access to financial capital. Digital literacy is low in countries like Morocco and Hungary, while Argentina lags behind its peers in access to loans and venture capital—barriers that lead to fewer people starting businesses online. Finally, access to the Internet in general needs to be more affordable and open. The baseline cost of access to the Internet in Turkey is almost twice the Central and Western European average, while half of Mexicans surveyed cite the cost of access and hardware as the primary barrier to getting online.

It’s clear that the Internet is quickly becoming a critical part of growing the economy for both developed and aspiring countries. Adopting the right policies to facilitate innovation should be top of mind for anyone considering Internet regulation.

posted by Betsy Masiello, Policy Manager at Google



Urban populations are currently increasing by more than one million people every week, and will continue to do so until 2050. Figuring out how to meet the needs of this massive influx of people is a major challenge of our time. Physicist Geoffrey West argues that cities are networks of people, and that scaling such a network requires analyzing universal mathematical frameworks that demonstrate how networks of life generally scale. Check out his fascinating TED talk, where he explains the formulas in plain-speak:


West observes that as organisms double in size, they need only 75 percent more energy to sustain themselves, so the pace of life slows as animals get bigger—an economy of scale. Meanwhile, as cities double in size, the pace of life speeds up. As a community, people produce 15 percent more income, crime, patents, AIDS cases and so on, irrespective of differences in location or culture.

If biology says that we’re multiplying more quickly than we can scale, how do we prevent system overload?  

The answer is some innovative discovery, like the steam engine or the Internet, that transforms our way of life. Each time we approach a collapse, some major innovation saves the day, resetting how our networks function. The catch, West says, is that as cities grow more quickly, innovation must accelerate. For policymakers, this means that fostering innovation isn’t just about getting ahead—it’s about survival.

Even with continuous innovations that keep urban populations going, West asks how we might avoid a “heart attack” after forcibly accelerating on a continual basis for an extended period of time. Futurist Paul Saffo writes that we often focus on the inflection point of an “S” curve—when the innovation takes off—rather than on the “inevitable precursors” that facilitate environments for world-changing events. Maybe if governments studied these instances and developed innovation-inducing policies accordingly, we’d be able to trump the laws of biology for good.

Let us know what you think in the comments. Can West’s observation plausibly explain how we’ve transitioned from feudalism to capitalism or from an industrial to information society?  

posted by Dorothy Chou, Senior Policy Analyst at Google

It’s been around four decades since I began working with the nascent Internet on the first ARPANET site, located at UCLA.  Since then, it’s been remarkable how our capacity to store data has grown exponentially.  Every day, we’re filling up our phones, cloud-based services and personal hard drives with enormous amounts of data from Internet activities, medical research, climate analysis, sensor arrays and so much more.  And now, the rapid development of new data analysis techniques, visualization tools and other related systems have the potential to address enormously important real world issues and problems.

Yet increasingly, we face a crucial quandary.

Talk of the potential of “Big Data” is everywhere, in contexts ranging from Web businesses to global warming research and beyond.  But the collection, storage and analysis of data interact directly with many important privacy, social and, increasingly, political aspects of our cultures. Lucid consideration of legitimate concerns in these areas is now all too frequently hijacked by a lack of understanding and by our increasingly toxic, polarized political environment.

The underlying areas of concern are often indeed legitimate, including matters like data anonymization, tracking, personal choice and others.  But rather than approaching each such issue with a logical, levelheaded analysis of costs and benefits, of appropriate trade-offs and responsible compromises, we instead are frequently faced with "my way or the highway" demands of the same flavor that have driven so many other aspects of our political systems into paralysis.

In the hope of encouraging a rational approach toward this entire spectrum of related issues, I'm very pleased to announce DWEL—the Data Wisdom Explorers League—founded in association with Google, which is providing funding support for this effort.

The goal of DWEL is to serve as a global resource for discussions, educational outreach and a range of other relevant services in this essential sphere.  Our efforts will focus on helping us all move toward the best possible uses of data in responsible manners for solving problems, providing services and improving our lives and planet.

Through the analysis of data we can gain knowledge, and from knowledge we may achieve wisdom.  And wisdom, after all, is one of the more important goals to which we can aspire!

No matter where you are, regardless of how you may feel about any of these matters today, I hope you'll visit the DWEL website at www.dwel.org and consider joining one or more of the DWEL announcement and discussion mailing lists, as we begin this endeavor together.

posted by Lauren Weinstein, Co-Founder, People For Internet Responsibility


Today we’re launching a website called Value of the Web to collect research that sheds new light on how the Internet affects our world. It’s available in English, French, German, Russian and Spanish and currently features studies that focus on 17 different regions, the value of cloud computing in Europe and the value of search around the world. While we can’t use industrial metrics to fully capture the Web’s contributions to our information society, as my teammate Jonathan pointed out, these reports are the best existing efforts to quantify the Internet’s contributions to the economy and society thus far.

The value calculated in the reports ranges from the GDP contribution of the firms that provide the essential hardware and software powering the Internet, to the jobs created when cloud computing lowers the cost of IT for small businesses.





With two billion people online today and another five billion set to join them in the next 20 years, studies predict that the Internet’s contributions will be large, increasing and distributed across sectors and people in the global economy. For example, McKinsey found that Internet search, in its broadest form, accounts for $780 billion in value across the globe each year. And only 4% of that total goes to search companies—the rest goes to consumers and corporations who harness search in order to improve the way they find and use information every day.

In some cases, the reports show the enormous potential of getting more businesses online if governments take steps to encourage commercial use of the Internet or increase access to broadband.





In other cases, the findings project exponential growth for economies already engaged in e-commerce. The Boston Consulting Group predicts that by 2015, at least 10% of the British economy will be Internet-based. Universal broadband access and new business models that capture consumer surplus could increase the value added by the Internet by roughly £43 billion, just less than half of what the British government spends on education today. If the Japanese government adopts similar measures, small businesses alone will contribute an additional ¥5 trillion to the Japanese economy in the next five years—not to mention the ripple effects.





We hope the site will become a central repository for insight derived from new measurements and data that move toward a more complete understanding of the Web’s impact. In order to fully harness the power of this medium, we need to start using these numbers to illuminate policy decisions and light a pathway for innovation.

We’ll continue developing the site by adding more improvements over time, including more languages and content. Check back frequently for updates or choose to subscribe for alerts via email.

posted by Dorothy Chou, Senior Policy Analyst at Google




This week, we introduced our Jack and Jill the Innovator video series, which puts a spotlight on innovators of all shapes and sizes and gives them an opportunity to tell their stories—from start-ups building cool new gadgets to moms and dads using the cloud for more efficient carpools.

Our next episode features Malcolm Collins, who, among other things, works at Neurosky, a company that makes “brainwave sensors for everybody.”



You can also see more of Neurosky’s devices in action in this CNet article, “Robotic cat ears for humans, an ears-on test.”

Posted by Derek Slater, Policy Manager at Google


Engine Advocacy—the leading advocacy group for small businesses and other innovators that build on the Web—posted a video highlighting the dangers of two U.S. bills currently making their way through Congress: the PROTECT IP Act (PIPA) and the Stop Online Piracy Act (SOPA). The video looks at these bills from the perspective of everyday innovators, "Jack and Jill the innovators” as we've come to call them.

SOPA and PIPA would censor the web and impose burdensome regulations on American businesses. A study in 2009 found that 3.1 million Americans are employed thanks to the interactive Internet ecosystem, the very same ecosystem whose fundamental structure would radically change if this legislation passes.

Around 7,000 sites are on strike today, and millions of people have already reached out to Congress through phone calls, letters and petitions asking them to rethink SOPA and PIPA. We hope you will too: google.com/takeaction.



Posted by Brittany Smith, Policy Associate at Google

Public policy should be about putting numbers in action—combining the best data and insights to forge policies that help our communities grow and change together.  As a citizen, the number that matters most is you.  Democracies are governed by the people, and it’s their votes, their values and their hopes that should determine what policies get implemented.

That’s why we’re starting this new video series on Policy by the Numbers; the first three episodes are now up on our YouTube channel. We want to help innovators of all shapes and sizes tell their stories—from start-ups building cool new gadgets to moms and dads using the cloud for more efficient carpools. It’s about their biggest hopes, deepest fears, greatest successes, most troublesome failures—everything.

To do that, we’re asking a diverse array of people the same set of questions about their lives and sharing their responses on YouTube so that everyone can get to know your neighborhood “Jack and Jill the Innovator.” We hope that sharing these stories will inspire people to think big, be optimistic about the future, and consider the huge variety of ways to drive decentralized, bottom-up innovation. In order to foster data-driven, pro-innovation public policy that embraces the future, we need to build understanding by telling stories just like these.


We’re starting off our blog series close to home with Danny Kim, the founder of LitMotors.

Full disclosure: After I met with Danny and spent time learning about him and his business, I decided to make a small investment in his company.

Posted by Derek Slater, Policy Manager at Google

Several years ago, Nobel Prize-winning economist Joseph Stiglitz suggested that land reform was the single most important thing governments could do to confront global poverty.  He noted that democratizing the means of production was the path historically taken by every country that had climbed out of poverty, including those in Western Europe and East Asia. In the same way that land was the means of production of the past, knowledge will be the means of production of the future.

When e-commerce began in the mid-1990s, development experts swooned at its prospects of “leveling the playing field for the little guy.” Even Bill Gates gushed about the promise of “friction free capitalism” enabling small and medium enterprises (SMEs) to bypass the long chain of middlemen that take the lion's share of income.

But the truth was that even SMEs that managed to build a catalog discovered that buyers had difficulty finding it among the billions of websites on the Internet. And even when buyers did find a catalog, they often wouldn’t trust it. So beyond the technical difficulties, SMEs also found it hard to establish visibility, credibility and trust.

Recently, the following dramatic technical advances have changed everything:
  1. Major corporations now offer powerful "cloud computing" services.
  2. A proliferation of low cost devices ranging from mobile phones to netbooks to tablets has greatly expanded the hardware choices for accessing the Internet.
  3. Social networking services demonstrated how to leverage the power of trust.
My organization, OpenEntry.com, built an e-commerce platform offering free catalogs operating on Google’s cloud computing technology, serving more than 2,400 SME users in 44 countries. Catalogs can be built with a smartphone and instructions are in 57 languages.  It also enables any business network to aggregate all the catalogs of their members—even those built with other systems—into a branded “network market” to generate visibility, credibility and trust.

Because of companies and organizations like ours, young entrepreneurs of modest means who traditionally would not be able to find jobs can exercise their skills and excel. For example, women who face cultural restrictions on their public activities can help local SMEs create their catalogs from product images sent to them electronically. And location isn’t a factor: the United Nations Development Program documented OpenEntry’s role in generating 3,918 jobs for youth and artisan women in Nepal. With the huge volume of demand created by the estimated 100 million SMEs that will take their businesses online in the next ten years, the scope is now open for self-taught entrepreneurs to emulate Bill Gates, Steve Jobs, Larry Ellison, Michael Dell and Mark Zuckerberg, none of whom finished college.

Past history demonstrates that international trade has boosted the development of nations that escaped poverty. Current history confirms that a country’s growth is directly related to its adoption of information and communication technologies. And future history will substantiate how democratizing the confluence of global trade and the Internet will help alleviate poverty and speed the general development of national and global economies.

posted by Daniel Salcedo, founder and CEO of OpenEntry.com


In 2010, I began a year-long mixed-methods study of MusicBrainz, a community music metadatabase with companion open source software that cleans up the metadata on digital music files. Studies have been conducted on various aspects of commons-based, peer-produced projects, notably free and open source software and Wikipedia. But MusicBrainz is unique: MusicBrainz contributors play the role of information scientists for this data commons, working as digital librarians, standards-setters and catalogers of music.

Understanding what drives people to voluntarily curate and contribute to a data commons benefits our overall understanding of how these commons work. If we find common characteristics among a few successful data communities, we can inform the design of data commons for other domains so that they are more likely to thrive. What follows is a snapshot of some of the more interesting findings. Full report with methodology is here, and a quick presentation deck is available here.

The importance of open source
MusicBrainz editors believe that information resources and music metadata should be free (see table). In interviews, they discussed openness as a “philosophy,” and knowing that they are building a bigger, better project useful to others serves as an intrinsic motivation.




One editor describes the process of using and adding data as a “virtuous circle.” He contributes to a variety of peer-produced projects, including Wikipedia, and called himself “selfish” when asked why he contributes to open source projects. He said, “I think people who really value things will want to ensure they continue. And there are two ways you can do it. One, you can use your wallet. The other one is, if it's an option, you can contribute and make it a better thing.” Another editor echoed that sentiment: “I get to benefit from MusicBrainz and this is somewhat of my payback to the community.”

Discovery through contribution
Several of the editors I spoke with told me stories about having discovered an artist or a collaboration in the process of interacting with the data, whether by browsing or editing data. One of my hypotheses was that because of the patterns of exposure, editors who have discovered an artist through MusicBrainz are likely to have entered on average more edits—spending more time with the database—than those who have not.


To test this, I compared the means of those who answered “Yes” and those who answered “No” against the log of the number of edits entered. Results, shown above, support the hypothesis that those who have discovered an artist through MusicBrainz have made more edits on average than those who have not. Engaging with the data benefits contributors beyond just providing accurate music metadata.
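A comparison of this kind can be sketched as a two-sample test on log-transformed edit counts. The data below is invented for illustration (the study’s actual sample is not reproduced here), and the log1p transform and Welch’s t statistic stand in for whatever exact specification the full report uses:

```python
import math
import random
import statistics

# Hypothetical edit counts (log-transformed) for editors who did and did
# not discover an artist through MusicBrainz. Illustrative numbers only.
random.seed(0)
discovered = [math.log1p(random.lognormvariate(5.0, 1.2)) for _ in range(60)]
not_discovered = [math.log1p(random.lognormvariate(4.0, 1.2)) for _ in range(60)]

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / se

t = welch_t(discovered, not_discovered)
print(f"t = {t:.2f}")  # a positive t means the "discovered" group edits more
```

With real data one would also compute a p-value (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`), but the t statistic alone shows the direction of the difference.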

Where are the women?

My sample was overwhelmingly male, and follow-up interviews indicated that most of the editors active in communication channels are male. This gender disparity is not unique to MusicBrainz. This NetworkWorld article cites several sources that showed lower female participation in F/OSS projects and in Wikipedia. This is definitely an area to explore further.


by Jess Hemerly, Senior Policy Analyst at Google


Summary; full article published in the December 2011 issue of Science

When people took to the streets across the U.K. last summer, the Prime Minister suggested restricting access to the Internet to limit protestors’ ability to organize. The resulting debate complemented speculation on the effects of social media in the Arab Spring and the widespread critique of President Mubarak’s decision to shut off the Internet and mobile phone systems in Egypt.

Decisions about when and how to regulate activities online have a profound societal impact. Debates underlying such decisions touch upon fundamental problems related to economics, free expression and privacy. Their outcomes will influence the structure of the Internet, how data can flow across it and who will pay to build and maintain it. Most striking about these debates is the paucity of data available to guide policy and the extent to which policymakers ignore the good data we do have.

The best approach is neither to make ill-informed decisions based on too little data nor to avoid state regulation simply because of the absence of decent data. Instead, we should begin a concerted push for highly reliable and publicly available forms of measurement of the Internet and how we use it. Better data would not only help the state meet its regulatory obligations, but also improve self-regulation by private sector players and empower individuals to make better decisions. In the meantime, we as researchers need to work harder to translate the data we have into terms that can directly inform policymakers.

First, we need to know more about the architecture of the network and how it is changing. For example, is the Web becoming more or less centralized over time? How much are unrelated content and services converging to common hosting within a comparative handful of cloud providers? Second, we need to know more about how information flows or stutters across the network. Where are there blockages? From what sources do they arise? And third, we need to know more about human practices in these digitally mediated environments.

We need to commit to systematic, longitudinal studies of how digitally mediated communications are changing behavior everywhere across the networked world, such as disclosure of personally identifiable information. For example, debates in the U.S. over amending the Children’s Online Privacy Protection Act (COPPA), which is intended to protect children under 13 from privacy risks, are poorly informed. We have not figured out whether children are actually better off as a result of COPPA, or even how to start gathering data to answer that question. Studies by the Pew Internet and American Life project and ethnographic work pioneered by danah boyd get too little attention in policy discussions about digital education and child safety, resulting in both over- and under-regulation. But through long-term studies, our findings can be translated into better policymaking and consumer-facing technology design.

The open and responsive nature of a new class of engaged research projects will help policymakers in government and corporate settings remain nimble and make better decisions in the fast-moving world of digital technology.

posted by John Palfrey and Jonathan Zittrain

John Palfrey is the Henry N. Ess Professor of Law and Vice Dean for Library and Information Resources at Harvard Law School and a faculty co-director of the Berkman Center for Internet & Society at Harvard University.

Jonathan Zittrain is a Professor of Law and Computer Science at Harvard University, and a co-founder of the Berkman Center for Internet & Society.


Google NGram is a database that permits statistical analysis of the frequency of use of specific words and phrases in books. The database draws on nearly 5.2 million books published between 1500 and 2000 A.D. that have been digitized by the Google Library Project. With the web-based NGram Viewer, one can then create a graphical year-by-year representation of how often a phrase has been used in books.

In our recent work, The PII Problem, we drew on the NGram Viewer to gain a sense of the peaks and valleys in policymakers’ attention to “information privacy” from 1950 to 2000.


In that article, we find that this graphical analysis of references to “information privacy” largely correlates with our sense of the development of this area of law. Early use of the term was driven by concern about mainframe computers and their ability to change how data could be organized, accessed, and searched.

How did this story then develop during the latter part of the 1970s? Following a decline in interest after the enactment of the Privacy Act of 1974, a renewed societal focus on information privacy began in the United States in the early 1980s. Part of this attention was driven, in turn, by the arrival of George Orwell’s titular year, 1984. A flurry of media reports and articles marked the occasion with analyses of new threats to privacy.

Perhaps most importantly, however, cable operators’ collection of personal information at this time created the same kinds of issues that the Internet would later raise. Even as early as the 1980s, observers noted that coaxial cable technology would permit a user not only to receive information, as broadcast television had allowed, but also to respond to information on the screen and make programming choices. New privacy threats were anticipated as a consequence of the resulting detailed profiles about individual cable consumers, and the use of the term “information privacy” began to rise again.

From the 1990s on, the continuing rise in attention to “information privacy” reflected society’s growing concern with privacy in the PC and then Internet era.

Other topics and techniques can draw on Google NGram’s potential as a legal research tool. Legal scholars might draw up a list of core terms in information privacy law and in other legal fields, such as copyright law and constitutional law. The data can be used to explore a variety of questions: How did the use of these core terms develop over time? Did certain legal terms come to supplant others? Can comparisons of the relative frequency of various terms reveal something about the development of legal concepts in a given substantive or doctrinal area? Using Google NGram data, scholars can seek answers to these questions in order to inform current research and fuel new areas of academic inquiry.
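For scholars who want to go beyond the web viewer, the underlying Ngram dataset is published as raw tab-separated count files, so the year-by-year frequencies behind such a graph can be computed directly. The sketch below is illustrative only: the row layout follows the published dataset format (phrase, year, match count, volume count, plus a separate per-year totals file), but the sample numbers themselves are invented for demonstration.

```python
# Sketch: computing per-year relative frequency of a phrase from raw
# Ngram-style count data. Sample counts below are invented, not real data.
from collections import defaultdict

# Rows in the published format: ngram \t year \t match_count \t volume_count
SAMPLE_NGRAM_ROWS = """\
information privacy\t1974\t120\t45
information privacy\t1984\t480\t160
information privacy\t1996\t2100\t700
"""

# Hypothetical per-year totals of all words, as in the dataset's
# "total counts" file (here simplified to {year: total match_count}).
SAMPLE_TOTALS = {
    1974: 1_000_000_000,
    1984: 1_200_000_000,
    1996: 1_500_000_000,
}

def relative_frequency(ngram_rows: str, totals: dict) -> dict:
    """Return {year: frequency}, i.e. phrase matches per total words that year."""
    counts = defaultdict(int)
    for line in ngram_rows.strip().splitlines():
        _phrase, year, match_count, _volumes = line.split("\t")
        counts[int(year)] += int(match_count)
    return {year: counts[year] / totals[year] for year in counts}

freq = relative_frequency(SAMPLE_NGRAM_ROWS, SAMPLE_TOTALS)
for year in sorted(freq):
    print(f"{year}: {freq[year]:.2e}")
```

Normalizing by total words per year, rather than plotting raw counts, is what makes comparisons across decades meaningful, since far more books were published (and digitized) in later years.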

Posted by Paul M. Schwartz & Daniel J. Solove

Paul M. Schwartz is a Professor of Law at the University of California, Berkeley School of Law and a Director of the Berkeley Center for Law & Technology.  

Daniel Solove is the John Marshall Harlan Research Professor of Law at the George Washington University Law School. He is also a Senior Policy Advisor to the law firm of Hogan Lovells.


Earlier this year, Senators Mark Warner (D-Va.) and Olympia Snowe (R-Me.) introduced bipartisan legislation that would strengthen the technical expertise of the Federal Communications Commission. The bill, known as the “FCC Technical Expertise Capacity Heightening Act,” would permit each of the Commissioners’ offices to hire technology advisors to assist with the myriad engineering challenges confronting the FCC, whether it be spectrum reallocation, broadband deployment, or network neutrality. As Senator Snowe stated, "At a time when citizens are demanding more effective and efficient government, this legislation will ensure the FCC is sufficiently equipped, both legally and technically, to craft sound policy.”

This is a brilliant move. While the FCC’s Bureaus each have many extremely smart and talented engineers on staff, some with the prized combination of an engineering degree and a JD, the Commissioners have traditionally relied upon the advice of legal advisors for almost all policy matters. As bright and talented as these advisors may be, most, if not all, lack formal technical training. As the Senators stated when introducing this legislation, such a technology knowledge gap, "if left unaddressed, could continue to hamper American innovation and competitiveness." 

It is imperative that the Chairman, the Commissioners, and the FCC in general have access to accurate, reliable, and objective technical advice and information. Industry lawyers and lobbyists oftentimes bring in their own engineers and technical staff to persuade the Commission to adopt a certain policy or take a particular action. The Commissioners must have the ability to thoughtfully engage in technical debates and be able to separate engineering facts from engineering opinions.

However, the proposed legislation does not go far enough. It should be broadened to cover not only the FCC, but other Federal government bodies as well. According to data collected in March 2010, there are nearly 85,000 engineers, in various capacities, employed by the Federal government, with the majority classified as general engineers and most working for the Department of Defense. This is a relatively small percentage of the 2.65 million people working for the government, yet their technical and operational expertise is increasingly important as technology becomes an even more influential part of our society.

The Federal Trade Commission, the Department of Justice and its Antitrust Division, the U.S. Patent and Trademark Office, and similarly situated organizations should also be permitted to hire more technical experts with engineering and computer science degrees. They, too, need to have solid advice as they address novel and complex challenges arising daily in the Internet ecosystem. Moreover, the Legislative Branch itself should hire more young men and women with technical expertise to staff important committees that have jurisdiction over telecommunications and the Internet.

At no time in the history of American society has technology been as important as it is today. More technical knowledge about spectrum usage, Internet infrastructure, broadband, and how digital communications systems work would certainly lead to better public policy. No doubt, a well-informed government is absolutely vital for innovation, competitiveness, and democracy.

Posted by Ben Golant