This is the time of year for reflection – and for thinking about how its lessons can be applied in the future. Doing this exercise with a focus on artificial intelligence (AI) and data may never have been more important. The release of ChatGPT has opened up a perspective on the future that is as fascinating as it is terrifying: we can interact with a seemingly intelligent AI that summarizes complex texts, spews out strategies, and writes somewhat solid arguments – prompting some to fear "the end of truth".
What moral and practical compass should guide humanity in dealing with data-based technology? To answer that question, it's worth looking at nonprofit innovators – entrepreneurs focused on solving deeply rooted social problems. Why can they help? First, they are masters at spotting technology's unintended consequences early on and figuring out how to mitigate them. Second, they innovate with technology and build new markets guided by ethical considerations. Here, then, are five principles, distilled from the work of more than 100 hand-picked social entrepreneurs from around the world, that shed light on how we can build a better way forward:
Artificial intelligence must be accompanied by human intelligence
AI isn't wise enough to interpret our complex, diverse world – it is simply bad at understanding context. That is why Hadi Al Khatib, founder of Mnemonic, has built an international network of people to mitigate what technology gets wrong. They save eyewitness accounts of possible war crimes – now mostly in Ukraine, previously in Syria, Sudan and Yemen – from being deleted by YouTube and Facebook. The platforms' algorithms understand neither the local language nor the political and historical circumstances in which these videos and photos were taken. Mnemonic's network securely archives digital content, verifies it – yes, using AI too – and makes it available to prosecutors, researchers and historians. It has provided important evidence that led to successful prosecutions. What's the lesson here? The better AI seems to get, the more dangerous it becomes to trust it blindly. Which leads to the following point:
AI cannot be left to technologists
Social scientists, philosophers, changemakers and others need to sit at the table. Why? Because the data and cognitive models that train algorithms are often biased – and computer engineers are unlikely to be aware of that bias. A growing body of research has revealed that algorithms, from healthcare to banking to criminal justice, have systematically discriminated against Black people in the US. Biased inputs mean biased decisions, or as the saying goes: garbage in, garbage out. Gemma Galdon, founder of Eticas, works with companies and local governments on algorithmic audits to prevent this. Data for Black Lives, founded by Yeshi Milner, weaves alliances between organizers, activists and mathematicians to collect data from communities that are underrepresented in most datasets. The organization played a key role in bringing to light that the death rate from Covid-19 was disproportionately high in Black communities. The lesson: in a world where technology has an outsized impact on humanity, technologists should be aided by humanists and by communities with lived experience of the problem, to avoid training machines on the wrong models and inputs – a point the toy sketch below makes concrete.
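To see why "garbage in, garbage out" matters, consider a deliberately tiny Python sketch – hypothetical numbers, not any real organization's audit tool: a "model" that simply learns historical approval rates will faithfully automate whatever bias those past decisions contained.

```python
from collections import defaultdict

# Hypothetical historical loan decisions, skewed against group "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """Learn per-group approval rates from past decisions."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, decision in records:
        total[group] += 1
        approved[group] += decision  # True counts as 1
    return {g: approved[g] / total[g] for g in total}

model = train(history)
print(model)  # {'A': 0.75, 'B': 0.25} -- the historical skew, now automated
```

Which leads to the following point: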
It’s about people, not about the product
Technology needs to be conceptualized beyond the product itself. How communities use data – or rather, how they are enabled to use it – is critical to impact and outcome, determining whether a technology does more good or harm in the world. A good illustration is SIKU (named after the Inuktitut word for sea ice), a social networking and knowledge-sharing application developed by the Arctic Eider Society in northern Canada, founded by Joel Heath. It enables Inuit and Cree hunters across a vast geographic area to leverage their unique knowledge of the Arctic to collaborate and conduct research on their own terms – drawing on their language and knowledge systems while preserving their intellectual property rights. From mapping changing sea-ice conditions to wildlife migration patterns, SIKU enables Inuit to produce vital data that informs their land management and establishes them as valuable, too often overlooked experts in environmental science. The key point here: it's not just the app. It's the ecosystem. It's the app co-developed and owned by the community that delivers results maximizing community value. It's about the impact of technology on communities.
Profits must be distributed fairly
In an increasingly data-driven world, allowing a few major platforms to own, mine and monetize all data is dangerous – and not just from an antitrust perspective. The dreaded collapse of Twitter brought this into the collective consciousness: journalists and writers who had built up audiences over years suddenly risk losing their distribution network. Social entrepreneurs have long been experimenting with different types of data collectives and ownership structures. In Indonesia, Regi Wahyu enables smallholder rice farmers at the bottom of the income pyramid to collect their data – land size, cultivation, harvest – and put it on a blockchain, rewarding them every time their data is accessed and allowing them to cut out intermediaries for better margins. In the US, Sharon Terry has grown Genetic Alliance into a global, patient-driven data pool for genetic disease research. Patients retain ownership of their data and hold a stake in the public benefit entity that hosts it. Aggregated data is shared with scientific and commercial researchers for a fee, and a portion of the profit from what they discover is returned and redistributed to the pool. Such practices illustrate what Miguel Luengo-Oroz called "the principle of solidarity in AI" in an article in Nature Machine Intelligence: a fairer share of the profit gained from data, as opposed to winner-takes-all.
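To illustrate the per-access reward mechanic described above, here is a minimal sketch with hypothetical names and fee values – an assumption for illustration, not the actual system used in Indonesia or by Genetic Alliance: a ledger records each read of a contributor's data and credits a share of the access fee back to that contributor.

```python
from dataclasses import dataclass, field

ACCESS_FEE = 10.0        # assumed fee a researcher pays per record accessed
CONTRIBUTOR_SHARE = 0.7  # assumed fraction of the fee returned to the data owner

@dataclass
class DataPool:
    balances: dict = field(default_factory=dict)  # owner -> accrued rewards
    log: list = field(default_factory=list)       # audit trail of accesses

    def access(self, owner: str, accessor: str) -> None:
        """Record an access and credit the owner's share of the fee."""
        self.log.append((accessor, owner))
        self.balances[owner] = (
            self.balances.get(owner, 0.0) + ACCESS_FEE * CONTRIBUTOR_SHARE
        )

pool = DataPool()
pool.access(owner="farmer_17", accessor="rice_buyer_co")
pool.access(owner="farmer_17", accessor="research_lab")
print(pool.balances)  # {'farmer_17': 14.0} -- paid each time the data is read
```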
The negative externality costs of AI must be priced in
The aspect of solidarity leads to a bigger point: currently, the external costs of algorithms are borne by society. The prime example: social media platforms. Thanks to the way recommendation algorithms work, extreme, polarizing content and disinformation spread faster than thoughtful, nuanced messages, creating a corrosive force that undermines trust in democratic values and institutions alike. At the heart of the problem is surveillance capitalism: the platform business model that prioritizes clicks over truth and engagement over humanity, and enables both commercial and government actors to manipulate opinions and behavior at scale. What if that business model became so expensive that companies had to change it? What if society insisted on offsetting the external costs – polarization, disinformation, hatred? Social entrepreneurs have used strategic litigation, pushed for updated regulations and legal frameworks, and are exploring creative measures such as taxes and fines. The field of public health offers starting points: after all, taxes on cigarettes have been a cornerstone of reducing smoking and controlling tobacco.