[00:00:00] AH: Thank you so much for taking the time to talk with me today. It's really such an honor to have you here on the episode. Just to start things out, would you mind introducing yourself, what you do, and maybe somehow succinctly, in one word, describe how you feel currently about AI in or from the majority world?
[00:00:22] SA: Hi. My name is Sareeta Amrute. I'm an anthropologist, and the one word I would use to describe my feelings about AI in and from the majority world at this moment is troubled.
[00:00:37] RLG: My name is Rigoberto Lara Guzman. I am a Mexican tech worker currently doing research, strategy, and engagement at Data & Society, which is a remote-first 501(c)(3) organization predominantly based in the North Atlantic tech milieu, to be very specific about where exactly I'm speaking from. My word, to borrow from my colleague, Ranjit, is contingency.
[00:01:05] RS: Hello, everyone. My name is Ranjit. I am a researcher with the AI on the Ground Initiative at the Data & Society Research Institute. I'm currently based in upstate New York, and the one word that I would use would be uneven as a way to think about AI in the majority world.
[00:01:26] AH: Yeah. Really great words to start framing this conversation. So in this episode, I would really love to dive deeper into the first portion of the primer, which is named Ground Realities. There was such a wealth of information that the three of you were able to research and cultivate, along with so many other researchers. I believe the primer drew on over 160 pieces of work, which is really phenomenal.
So I would love to start with the first subtopic, which was decolonizing feminist AI. You start the primer off looking at decolonial feminist perspectives on AI, like the definition of "artificial" and what "intelligence" means to certain actors. So I guess I'm wondering, throughout your research for this first subtopic, how did your definitions or views of AI change?
[00:02:36] SA: Maybe I can start us off a little bit and then pass it to my collaborators. I want to start with this joining of feminist and decolonial. So I think, as you're picking up on, feminist readings of AI have been really good at unpacking how AI as a term reifies and reinforces particular ideas about intelligence and what intelligence means. Feminist theorizing on AI has really shown us that the term itself and its deployment reinforce a very male, very white, very colonial idea of intelligence.
At the same time, I speak for myself, but I think my collaborators would agree, the terms decolonial and decolonizing emerge extremely powerfully from particular locations. They emerge from Latin and South America and from North American indigenous contexts. The term itself, as a really quite good and frequently cited piece by Eve Tuck and K. Wayne Yang on decolonization not being a metaphor argues, is tied to movements for sovereignty and for regaining sovereignty over stolen land.
I think, for me, it's very important when we put feminism together with decoloniality and decolonial thinking, and important for the whole syllabus, that we recognize the places from which theorizing emerges. It's not theory from nowhere. It's certainly not theory from the North. It's theory that connects to, as you said, the grounded conditions of a specific place, and it needs to continually engage with them, even as the theory might move across majority world connections.
[00:05:03] RS: Just very quickly to add to this, this question reminded me of one of the initial links that Rigo once sent to me, which was primarily around this issue of how AI seems to be an empty signifier. It doesn't mean anything unless it is grounded. That's where a feminist decolonizing perspective really helps us, because what it does is constantly ask the question of where you are grounding it. What is the position from which you're actually looking at AI? That changes its meaning from being an empty signifier into something very specific, which is located in a particular place, which requires a certain set of technical practices, and which at the same time is fundamentally about how we are shaped by these technological systems and how we are shaping them.
What AI does as a term is that it glosses over all of these specificities that actually make AI work in the first place. So as soon as you move away from AI as an empty signifier, you have to read it from a feminist lens in a way.
[00:06:20] RLG: I would add that this current, corriente - I like to use the word current because there are multiple decolonial standpoints, right? They exist not only at a regional level but even interregionally, and different schools of thought emerge about what exactly the decolonial term means. There is currently a very vibrant debate about its potentials and its limits within the context of technology studies, internet studies, or critical data studies.
One of the things that emerges that I find interesting is the importance of underlining the feminist approach, because, traditionally, the decolonial current is an emergence of a certain set of thinkers, predominantly male, predominantly from elite classes, predominantly white-passing in Latin America. That's the school I'm most familiar with. At some point, feminist thinkers in those milieus enter the debate with claims that say, “Well, hold on. I completely agree with the theories that you are laying down, but there's a differential vulnerability at play.”
So there's a kind of stacking of a feminist critique onto the decolonial schools of theory, one that says, before we continue unpacking the coloniality of race and class and these kinds of mega-structures, none of this is able to occur without first the intervention of gender as a demarcation of oppression. So when we were first discussing how to start this primer, it was significant for us to make it decolonial and feminist, because that is where the critique is at and where some of the richest interventions are coming from.
[00:09:02] AH: Yeah. It sounds to me like having this sort of intersectional approach to critiquing AI really addresses how multi-dimensional AI issues have become. AI has maybe become this empty signifier, like Ranjit mentioned earlier, but it has so many implications that touch on so many dimensions of the world. At least that was my takeaway from this conversation.
I feel like maybe this could be a good segue into the second question I had, which is about your next subtopic, named Afro-modernities. You focus on Africa as a site of technical innovation, and it actually has a long history of being a site of tech innovation inspired by indigenous frameworks. But typically, it's just not seen as such, especially when you're talking about modern tech nowadays.
So reading that part of the primer, the first key word that popped into my head was data governance. From what we've been talking about in classes, countries like China, India, Brazil, even Ghana are all coming out with their own versions of a GDPR, their own data governance. I think a lot of countries now are trying to figure out how they're presenting themselves in the world, especially when it comes to data governance. I guess I'm just curious, from your standpoints and from your experiences so far, do you feel like data governance could help or hurt Africa and Afro-modernities?
[00:11:06] RS: I think there's a broad set of questions here, so I'm going to take this in two ways. One is about policy and the question of what it means to actually think about data governance and data protection as a set of issues that deserve regulation. The second part is what it means to take on some of the philosophical approaches coming out of Africa more broadly, particularly the nature of relationality that African philosophy is based on, as a way to articulate what our approach to ethics should be.
To answer the question of ethics through an African perspective first, I would say that one way of moving away from the individualism that is currently part of how we think about AI ethics is the focus on relational frameworks that have come up in recent years. These frameworks, which focus on what it means to actually think about communities and about ourselves in relation to the other things that are a part of our being, are deeply grounded in African philosophy, right?
So one straightforward way to think about this would be to talk about Ubuntu. But more broadly, as soon as you move away from individualism, which is the grounding philosophical tenet of modernity and Western thinking, we are in the space of trying to think about relationality, which moves us into a wide variety of different discourses that are located in different places, and Africa is a deep site for thinking about these issues in the first place.
Now, moving on to the question of policy itself, the only thing that I would like to say there is that Africa has been a site for thinking about digital interventions, especially since the success of M-Pesa in Kenya, right? In thinking about what it means for small-scale economies to become the site for algorithmic intervention, Africa is where new forms of experimentation are currently happening and being tested out, as a way to start thinking about what it means to turn small-scale users into a big financial product. That is where most of the tensions around data governance are actually coming from, in a way, right?
There's very recent scholarship focused on the idea that you can predict somebody's creditworthiness based on their social relations, and most of these experiments are currently being done in Africa, as a way to start thinking about who is creditworthy and how their kinship relations allow us to think about their creditworthiness. Kenya is at the forefront of this particular issue, especially in the context of what has recently been addressed as algorithmic intimacy.
So part of the challenge when it comes to data governance is: what does it mean to protect the data of people who are currently subject to all of these algorithmic systems being developed? It means thinking through what it means to develop in the first place, and what it means for these systems to intervene in these societies and change them fundamentally. Because digital lending, to a certain extent, is predatory, and it often becomes a lot more predatory when the margins people have to maneuver through are much smaller, because they're already dealing with financial hardships.
Now, you're placing a new set of technical interventions on top to figure out what they have access to, and, to a certain extent, that access then allows them certain opportunities. But at the same time, it comes at the expense of the high interest rates that are charged for it. So these larger issues around these phenomena happening in different countries raise questions around who this data belongs to. Then it moves from person, to community, to nation states.
Then there are nation states arguing that our data should stay within our country, and that's currently how, in terms of regulation, nation states are thinking about data sovereignty, which is fundamentally different from indigenous ways of thinking about data sovereignty. So there's a sort of territoriality here, which is an important way of thinking about where this data goes, how it flows, and how you can restrict it through borders. That is the way in which a lot of data protection regimes have been imagined, whether in India or in Brazil.
There are a lot of distinct tensions here. But at the same time, the core challenge is what this data does for the person who's providing it. As soon as you change the unit of how you're thinking about the problem, whether at the level of the individual, the community, or the nation state, the question of data governance changes. That is what Africa teaches us, I think, in a way.
[00:16:49] SA: That was a really thorough answer, so I don't know if I need to add anything. I probably don't. But, Ranjit, as you were speaking, I thought of one thing that, for me, is important across the syllabus, or the primer, and that we can talk about through the example of African modernities: simultaneously thinking about some of the things that we usually think of as AI harms, such as extraction, experimentation, and environmental destruction.
But also, at the same time, surfacing other pathways for using and reinventing automated technologies in ways that are in keeping with long-held traditions of science and technology and epistemologies. So I was thinking about these three words that we threw out at the beginning: contingent, uneven, troubled. In some ways, I think we can see that in looking at the continent of Africa. Of course, we immediately need to think about the differences between Sub-Saharan Africa and North Africa and the modes of AI that are deployed in those places.
One is really very largely about policing and surveillance in North Africa, and another is this multisided project of so-called development experimentation, done in the name of a populace but not serving the populace by any means. Those are based on the continent's very different contingent colonial histories. Also, as Ranjit is so clearly pointing out, the way that AI systems are deployed is highly uneven, dependent on very local markets and influxes of capital, and so on, and so forth.
Then, of course, all of these histories are very troubled. They bring up these colonial pasts that haven't yet been undone. But maybe they're also being troubled by local interventions that move AI in a totally new direction. So it allows us to pose this question: what would a Zimbabwean AI project look like, not from the perspective of the state, but from the perspective of a certain set of communities or people or feminist interventions in that space?
[00:19:24] AH: Yeah. I mean, my question is such a broad question, and Africa has such a diversity of cultures. So I really appreciate both of you delving deeper and really recognizing just how multifaceted the issue is when it comes to AI impacting Africa specifically. I would love to shift focus to the next subtopic, which was named indigenous protocols.
What I found really fascinating about this particular topic was that there's such a rich diversity of indigenous frameworks and technologies. I think sometimes it's admittedly easy for me to see the word indigenous and just think North American indigenous, when, in fact, indigenous communities exist everywhere around the world. So I'm just curious if you could answer briefly - did you feel like there were any new insights you gained, either from studying a particular indigenous community or from studying indigenous protocols as a global practice?
[00:20:44] RLG: I will say that we are moving towards a future in which we won't even need to use the word indigenous, right? The word indigenous emerges out of a political context in the late 20th century. The etymology of the word is still from the Latin. So it is, again, another signifier, one that was leveraged around the turn of the 500-year mark, which was around 1992, particularly on this continent, to signify a kind of Pan-Indian or Pan-Native solidarity movement that has been taking root all over the place, including most prominently at the UN with the signing of the Declaration on the Rights of Indigenous Peoples.
What I mean by that is that indigenous protocols are signaling towards a process in motion, and it's not only forward-propelled, as in we're moving into some kind of future. It's actually opening up a landscape to rethink everything, particularly insofar as science and technology are concerned. I'm speaking from the Northeastern Woodlands here, and this is at least one ground zero in terms of a colonial entry point for all the harms that Sareeta has enunciated, but also for what was brought here.
What was brought here was a certain kind of episteme, a certain kind of assertion of what knowledge is, that, quite frankly, was very lacking in terms of what a lot of indigenous and native thinkers and scholars are arguing for now, which is: what are technologies that are actually life-affirming? That's where the debate is right now. Technologies that affirm life and peace, as against the logic of a rationality, in this case a modern North Atlantic rationality, that through its productive measure, i.e., capitalism, is leading us to a certain death at a planetary scale.
This is a very ripe time to be listening to indigenous thinkers, scholars, activist organizers, and allies adjacent to that movement. Why? Because there are so many really good propositions around how to rethink what science and technology and information are.
[00:24:15] AH: Thank you. That was such an insightful and thoughtful response, and I really appreciate you taking the time to fit it into one or two minutes. I know we're a little over time, so I just want to be respectful of your schedules. I guess, as a parting question, do you feel like there's already momentum building from the primer towards other projects? Are you seeing that happening? Or have people already started reaching out to you about how this work could continue?
[00:25:00] SA: Maybe I can quickly start us on that and build in some ideas around anti-caste struggle and tech at the same time. One thing that this primer represents, and I think we were very intentional in building it this way, is that it's truly a collaborative effort built on work that's been going on for an extremely long time in so many different parts of the world, building on what Rigo said about life-affirming ways of knowing and being in relationship with others.
One thing that's been made very clear to me over the past year is how many of our traditions have very, very ancient pathways, going back thousands of years, to get there. But those are the non-dominant, the oppressed pathways, and part of what's happening in this project, for me, is a kind of rediscovery of those. For me, it's really important to recognize, for instance, that in anti-caste organizing and politics, there's an amazing syllabus project put together by Murali Shanmugavelan called the Critical Caste and Technology Studies syllabus.
There's also a recent book that came out by Thenmozhi Soundararajan called The Trauma of Caste, and both of those works surface this incredible movement towards a different kind of technology, an anti-caste technology, a life-affirming technology that builds on thinkers like Ambedkar [inaudible 00:26:45], thinkers who have been working all along towards that.
I guess I want to shift the discourse away from what the primer is doing going forward and think of the primer more as a moment or gesture of embrace, in which all of these currents, as Rigo said in the beginning, are coming together for a moment and revitalizing each other, even in conflict with one another sometimes, and then going on in the world and continuing to do what they've been doing all along.
[00:27:21] RS: I think the only thing I would add is that part of how this conversation unfolded speaks to this particular issue of how we actually start thinking about the place of the primer in the context of the work that is currently happening around these issues. There's a way in which we move from a particular position, to particular places, to the planet in how we formulate this particular problem and the relationship that we have with digital technologies, and what that relationship means at a wide variety of different scales.
I think that's what's at stake in the way we think about the majority world, primarily because, at the end of the day, most of the people in the world will be involved in using these systems far more often than designing them. That doesn't mean they don't have agency. But it means that the way they contend with these systems will be fundamentally different and requires us to pay a lot more attention to that, rather than thinking about this as a design problem, right?
That's what we end up doing when we think about AI ethics. We think of it as a design problem rather than an everyday life problem. I think that's where the majority world focus, for me, takes a turn towards everyday life and how we ordinarily encounter these technologies, maneuver through them, and live with them. What does that life look like, and what does it mean for our planet? That is the core question at hand here, one that a lot of people are talking about and thinking about in a wide variety of different ways.
The hope is that we let it grow, and the primer is just an invitation to keep that growth moving along. So it's a kind of gathering point. Yet, at the same time, it's a way to spread it out and acknowledge the fact that this is just one very small introduction to a wide-ranging scholarship, which I don't think a few people can do on their own. It's a global phenomenon in that sense.
[00:29:46] AH: Yeah. I think this primer was definitely an inspiration for me and an inspiration for my classmates. So I really appreciate the three of you taking the time to talk to me today.