In the past decade alone, more than 110 countries have initiated identification schemes. These programs are often implemented with an initial use case, such as the delivery of subsidies. Over time, however, the issued ID is put to many other uses, such as paying taxes, opening bank accounts, entering the education system, accessing critical medication, or even participating in sports competitions.
Expansion in use cases depends on a number of factors, such as ease of use and whether the ID is seen as credible. Such expansion also creates a risk of mission creep — i.e., that the ID is used for purposes different from the original intent, and sometimes in ways that disempower individuals. In the absence of adequate legislative or judicial oversight, mission creep can create risks for the very individuals an identity is supposed to empower.
By their very nature, digital identity systems collect some data about individuals in order to provide access to certain services. This immediately raises two interrelated questions. First, how much data should the system collect? Second, what services should it be tied to? While courts have sometimes stepped in and used the legal principle of proportionality to address these questions, the appropriate use of digital identity has yet to be thought through satisfactorily. This decision is inherently difficult to make, and no single approach — be it economic, legal, or ethical — is likely to provide an adequate answer. For example, a purely economic approach might include too many use cases, because it ignores risks that are hard to quantify, such as threats to privacy and the expansion of state power.
In the absence of any widely accepted thinking on this issue, we run the risk of digital identity systems suffering from mission creep — that is, being made mandatory or being used for an ever-expanding set of services. We believe this creates several risks. First, people may be excluded from services because they lack a digital identity or because the system malfunctions. Second, this approach creates a wider digital footprint that can be used to build a profile of an individual, sometimes without consent, increasing the risk to privacy. Third, this approach increases the power of institutions relative to individuals and can be used as a rationale to intentionally deny services, especially to vulnerable or persecuted groups.
Three exceptional research groups have taken up the challenge of answering this complex and important question. Over the next six months, these think tanks will conduct independent research and involve experts from across the globe. Based in South America, Africa, and Asia, these institutions represent the collective wisdom and experiences of three very distinct geographies in emerging markets. While drawing on their local contexts, this research effort is globally oriented. The think tanks will create a set of recommendations and tools that stakeholders can use to engage with digital identity systems in any part of the world.
We are grateful that the following organizations will help develop our collective understanding of the appropriate use of digital identity and its limits through a multi-faceted approach:
This research will use a collaborative and iterative process. The researchers will publish ideas every few weeks, with the objective of gathering thoughts, questions, and feedback from various stakeholders. They will participate in several digital rights and identity events around the world over the next several months. They will also organize webinars to seek input from interested communities across the globe and to present their interim findings. Each of these provides an opportunity for you to share your thoughts and help this research program deliver an independent, rigorous, transparent, and holistic answer to the question of when it is appropriate for digital identity to be used. We need a diversity of viewpoints and collaborative dissent to help solve the most pressing issues of our times.
At the conclusion, the research alliance will present its findings and point of view online for all to access, accompanied by a set of tools that can assist with decision making in digital identity design and implementation.
This work aligns with Omidyar Network’s continued commitment to helping make Good ID the norm. We believe all ID must be Good ID — inclusive, offering significant personal value, and empowering individuals with privacy, security, and control. We invest in research alliances like this one to bring new evidence about what constitutes Good ID in practice into the decision-making process and to address exclusion, discrimination, surveillance, consent, and other key issues of our time.