The past year has seen a range of public debates about the roles and responsibilities of technology companies. As 2019 begins, I’d like to share my thoughts on these important discussions and why Google supports smart regulation and other innovative ways to address emerging issues.
We’ve always been (and still are) fundamentally optimistic about the power of innovative technology. We’re proud that Google’s products and services empower billions of people, drive economic growth and offer important tools for your everyday life. This takes many forms, whether it’s instant access to the world’s information, an infinite gallery of sortable photos, tools that let you share documents and calendars with friends, directions that help you avoid traffic jams, or whatever Google tool you find most helpful.
But this optimism doesn’t obscure the challenges we face—including those posed by misuse of new technologies. New tools inevitably affect not just the people and businesses who use them, but also cultures, economies and societies as a whole. We’ve come a long way from our days as a scrappy startup, and with billions of people using our services every day, we recognize the need to confront tough issues regarding technology’s broader impacts.
The scrutiny of lawmakers and others often improves our products and the policies that govern them. It’s sometimes claimed that the internet is an unregulated “wild west,” but that’s just not the case. Many laws and regulations have contributed to the internet’s vitality: competition and consumer protection laws, advertising regulations, and copyright law, to name just a few. Existing legal frameworks reflect trade-offs that help everyone reap the benefits of modern technologies, minimize social costs, and respect fundamental rights. As technology evolves, we need to stay attuned to how best to improve those rules.
In some cases, laws do need updates, as we laid out in our recent post on data protection and our proposal regarding law enforcement access to data. In other cases, collaboration among industry, government, and civil society may lead to complementary approaches, like joint industry efforts to fight online terrorist content, child sexual abuse material, and copyright piracy. Shared concerns can also lead to ways to empower people with new tools and choices, like helping people control and move their data—that’s why we have been a leader since 2007 in developing data portability tools and last year helped launch the cross-company Data Transfer Project.
We don’t see smart regulation as a singular end state; it must develop and evolve. In an era (and a sector) of rapid change, one-size-fits-all solutions are unlikely to work well. Instead, it’s important to start with a focus on a specific problem and seek well-tailored and well-informed solutions, thinking through the benefits, the second-order impacts, and the potential for unintended side effects.
Efforts to address illegal and harmful online content illustrate how tech companies can help support this process:
First, to support constructive transparency, we launched our Transparency Report more than eight years ago, and we have continued to extend our transparency efforts over time, most recently with YouTube’s Community Guidelines enforcement report.
Second, to cultivate best practices for responsible content removals, we’ve supported initiatives like the Global Internet Forum to Counter Terrorism, where tech companies, governments and civil society have worked together to stop exploitation of online services.
Finally, we have participated in government-overseen systems of accountability. For instance, the EU’s Hate Speech Code of Conduct includes an audit process to monitor how platforms are meeting their commitments. And in the recent EU Code of Practice on Disinformation, we agreed to help researchers study this topic and to engage in regular reporting and assessment of our next steps in this fight.
While the world is no longer at the start of the Information Revolution, the most important and exciting chapters are still to come. Google has pioneered a number of new artificial intelligence (AI) tools, and published a set of principles to guide our work and inform the larger public debate about the use of these remarkable technologies. We’ll have more to say about issues in AI governance in the coming weeks. Of course, every new breakthrough will raise its own set of new issues—and we look forward to hearing from others and sharing our own thoughts and ideas.