Building India’s Responsible AI Capabilities - Where should impact-investors allocate their capital?

As the world grows more sophisticated around generative AI, conversations have become more nuanced, and the initial frenzy has taken a breather. At ONI, we have been reflecting on how to separate the signal from the noise, and on what the highest use of impact capital is when it comes to AI innovation in India.

In this blog, we articulate how impact investors can shape the development of an AI innovation ecosystem that serves the Next Half Billion (NHB): one where inclusion, trust, and safety are built in by design, not as an afterthought.

AI can raise standards of living for the Next Half Billion (NHB). This could be through four pathways - democratised access to knowledge, personalised and affordable professional services, inclusive products, and proactive responses to critical developmental challenges.

1. Democratised Access to Knowledge: Users often find it hard to access critical information related to public services and benefits. Initiatives such as OpenNyAI’s Jugalbandi WhatsApp chatbot, which provides tailored information on ~50 schemes across 50 Indian languages, could help address this challenge and enable the NHB to access welfare benefits with ease.

2. Personalised and Affordable Professional Services: Professional expertise in areas ranging from healthcare and education to law and finance has always been scarce in emerging-market economies; AI could help address this. For example, Wadhwani AI’s partnership with the Central TB Division is helping interpret Line Probe Assay (LPA) tests and preventing drop-offs in follow-up for high-risk cases. This has brought down the turnaround time for these tests by half a day and has prevented more than 1,100 deaths.

3. Inclusive Products: AI has the potential to make existing products more inclusive and accessible to previously excluded communities, such as persons with disabilities. Google’s Parrotron used a speech-recognition model to help people with speech impairments be better understood by both machines and people. Closer to home, I-stem offers document-accessibility services for people with print disabilities through a combination of technology and human-in-the-loop remediation.

4. Proactive Response to Critical Developmental Challenges: Finally, this new class of technologies could also help us investigate and address complex challenges. For instance, the World Economic Forum’s FireAId is helping with early mapping, detection of, and response to wildfires.

That said, there are clear, real risks. The sector has spent significant time debating existential risk related to artificial general intelligence. As many experts have pointed out, this detracts from the immediate and more apparent risks, and we should focus our efforts on those instead. We have classified these risks into five broad categories – digital risks, malicious use cases, market monopolisation, job loss, and environmental impact.

1. Digital Risks: The AI development process can introduce multiple risks including bias, data security, privacy infringement (e.g. sensitive images used to train a robot vacuum cleaner ended up on Facebook), and hallucination or “making up stuff” (e.g. ChatGPT made up bogus citations for a lawyer doing case research).  Further, as we delegate more decision making to algorithms, there is a risk that we start surrendering our agency over decision making.

2. Malicious Use Cases: Bad actors can leverage AI to strengthen malicious activities such as fraud through impersonation, phishing, and malware attacks. The number of deepfake videos online has been increasing at 900% annually (WEF), with serious societal implications such as financial fraud. For instance, a deepfake video was recently used to defraud a person of INR 40 lakhs.

3. Market Monopolisation: The power of AI depends on access to large volumes of data and access to expensive computing. These favour incumbent BigTech firms, and therefore could further entrench their market power, leading to limited competition, stifling innovation, and reducing consumer welfare. However, open-source efforts have been able to counter this risk to some extent, as they have been able to produce models that compete with proprietary counterparts.

4. Job Loss: Generative AI can perform tasks such as research, synthesis, and software development that white-collar professionals in the knowledge economy currently do (a WEF report found that employers expect a net loss of 14 million jobs, equivalent to 2% of current employment). Whether this happens at scale will depend on whether businesses are willing to trust AI with these tasks.

5. Environmental Impact of AI: Computing now accounts for more emissions than the aviation industry, and current development is trending towards ever-larger, more resource-intensive datasets and models. A medium-sized data centre is estimated to use ~1.3 million litres of water per day for cooling, a striking figure given that the annual per capita water availability in India is also ~1.3 million litres. As water is poised to be the next major resource under public scrutiny, the industry will have to find answers to these sustainability challenges even as it pursues growth.

We need to invest in India-specific public goods to maximise the benefits of AI and minimise its harms. A question we are often asked is this: given that global funds, governments, and large technology firms are pouring money into AI, why not “free-ride” on their innovations? Why does India, specifically, need to spend scarce impact capital on pro-social AI innovation and responsible AI?

We believe that while there are innovations that we should take advantage of (e.g., it perhaps does not make sense to build our own foundational model), there are contextual challenges that global public goods and innovation are unlikely to help India with. This boils down to four reasons:

1. Context-Specific Trade-Offs: India will need context-specific standards, norms, and regulations that reflect our lived challenges (e.g., smaller proportion of digitalized data, low digital literacy) and opportunities, and therefore the trade-offs we are willing to make. For example, while another country might ban the use of AI in specific sectors, India might need to take a different route given our specific challenges and implement AI solutions with necessary safeguards.

2. Different Dimensions of Bias: Bias in the global north revolves around four major axes – gender, sexual orientation, race, and disability. India has additional axes to consider, such as region and caste. There are no easy identifiers for these discrimination patterns in existing datasets, so they flow into models without being explicitly intended. This leads to accuracy gaps and misattribution of traits due to stereotyping (e.g., lower credit scores in predictive models for sections of society that lacked access to credit until recently), and could amplify existing biases.

3. Data & Language Capabilities: Most generative AI models are trained in English and are not available in Indian languages; further, several NHB segments could be data-dark (i.e., data about these segments is unavailable or not digitalized). This could lead to the under-representation of certain sections of society in these solutions and a lower dispersion of AI’s benefits.

4. Internal Stewardship Capabilities: India already has significant AI engineering talent (24% of GitHub AI projects were contributed to by software talent from India, the largest share for any single geography globally), but we still have a way to go in AI stewardship (a 5% share of global AI publications). As the problems in this geography are unique, solving them at scale and at the appropriate speed will require developing internal capabilities on the stewardship front as well.

We should double down and drive responsible AI innovation. We believe there is an opportunity to invest across flagship initiatives that encourage responsible innovation in AI – innovation where inclusion and safeguards are not an afterthought but are designed in intentionally.

1. Research: Existing literature on risks is dominated by the global north. We therefore need to support interdisciplinary research that investigates the implementation of AI in Indian contexts to understand the benefits as well as the contextual risks and harms.

2. Commons: Investing in high-value datasets (collected with user consent) in data-dark regions and segments that represent the NHB, which can be used to build AI models; and supporting open-source AI projects that serve the NHB and help developers build and deploy AI responsibly.

3. Community Building: Practitioners deploying technologies and researchers working on mitigating harms often work in silos. There is a need for stakeholders to come together to holistically review the harms and build feasible solutions to address them. There is an opportunity to build a unique “large-tent” (a neutral platform) where these stakeholders can engage with each other and build together.

4. Demonstration Projects: Supporting innovators and practitioners interested in deploying AI responsibly in specific sectors. Through these projects, we can demonstrate what responsible AI looks like in practice and inspire other innovators to take similar paths.

We feel we are at a pivotal point in social and technological evolution. The decisions we take in the short and medium term around data, compute, inference, and regulation will determine how inclusive, equitable, and sustainable this new wave of AI is. This is our evolving point of view, and in the spirit of constant evolution and learning, we look forward to your feedback. If you are already building in this space, we would love to connect. You can reach us at contact@omidyarnetwork.in.