Amplifying Latine Voices at UNC



From Stories to Data

How AI Constructs the Algorithmic Memory of Latin American Migrants

“What happens when a memory no longer belongs to the person who lived it, but to the system that classifies it?”

Imagine a transit point along a migration route. Groups of people wait for hours. There are no offices, no uniforms, no stamps. A middleman asks them to hand over their phones for verification. With a small device connected by a cable, he copies recent locations, audio messages, screenshots, contacts, and photographs within minutes. 

What once functioned as an everyday or intimate memory becomes a dataset organized through filters and categories that feed into a risk profile. 

This scene reveals how AI systems not only collect information but also produce silence. Whatever does not fit the algorithmic scheme is not recorded. As a result, these experiences are effectively forgotten and do not inform decision-making. In these processes, memory is absorbed and reorganized for classification and control (Pfeifer, 2020).

Artificial intelligence–based surveillance systems are transforming the lives of Latin American migrants through a process I call algorithmic memory: a form of automated remembering in which AI systems reorganize people’s stories, sort them into categories, and weight some fragments more heavily than others, following existing power relations.

When we talk about AI surveillance, we are not just talking about technology or information. We are talking about what happens when people’s memories and real-life experiences (what they feel, where they go, and what they have lived through) are tracked and turned into labels by AI surveillance systems. These labels are used to decide who looks safe and who looks dangerous. As a result, some people are protected, while others, including many Latin American migrants, are treated with suspicion. This is unjust, because these AI systems tend to reproduce racism, colonial histories in which powerful countries controlled others, and long-standing patterns of exclusion (Iturmendi Rubia, 2023; Tello, 2023).


Algorithmic memory does not record life as a narrative, but as a fragment—one photograph, one geolocation pin, one isolated comment. What remains outside the frame are stories of violence that drive migration, community ties, and situated experiences. 

In this fragmented mode of recording, memories no longer belong to the people who make them. Instead, they are handed over to states and corporations, which use them for their own purposes (Lancho, 2022).

From an anthropological perspective, this means that memories, which should belong to people and help protect who they are, are taken away from them and used to control them. What once helped people carry their own stories and sense of identity is now used by states and companies to watch, judge, and manage their lives.

✴✴✴

The technologies that enable these practices belong to a global market of tools capable of analyzing images, texts, and metadata within seconds. It does not matter whether they are used by state agents, informal intermediaries, or private contractors. Their function is the same. 

Once extracted, the data are classified by algorithms that detect route patterns, frequent contacts, keywords, or faces. What does not fit is discarded; what matches statistical patterns is interpreted as a marker of suspicion. This is not a system error. It is the design—an automated record that minimizes human interpretation (Cabrera-Medina et al., 2024; Gray, 2023; Brevini et al., 2024).

When we widen the lens beyond a single checkpoint, a distributed infrastructure appears: devices, apps, cloud storage, analytics dashboards, and scoring models circulating among multiple actors and providers. What emerges is an archive with no clear custodian, privatized, opaque, and normalized as part of the migration landscape. In this network, human mobility is no longer understood as experience but as flow—a set of signals to be monitored, analyzed, and contained (Molnar, 2025).

Calling this memory is not an exaggeration. Photographs, messages, and audio recordings that once belonged to intimate or community spheres now enter opaque systems whose criteria cannot be consulted or contested by those being evaluated. 

The consequences of this opacity are severe. When these AI surveillance systems label people, they do not explain how or why those decisions are made. People do not know what information was used or what rules were followed. They also cannot explain their side of the story or fix mistakes about their lives. As a result, important decisions are made about people without letting them understand, question, or correct them.

Recent studies show that these systems, marketed as objective and efficient, tend to reproduce structural inequalities and normalize high-risk technologies without adequate oversight (Amnesty International, 2024). For many migrants, automated digitization not only reduces their lived experience to a statistical record but also erodes their ability to narrate their own stories. The memory preserved is not theirs, but the one that serves border management.

From an anthropological perspective, this phenomenon can be understood as a specific form of sociotechnical violence. If human memory contains identities, relationships, and continuity, algorithmic memory disrupts this principle. It transforms information into a governing resource rather than something people control (Weiner, 1992). 

This loss of narrative agency, when one’s life is interpreted by external systems, constitutes a form of harm that goes beyond privacy debates.

In this context, the central question is not whether data remembers, but who exercises that remembering and for what purpose. In extraction processes, information is not organized to preserve stories but to produce control and classification. This is why initiatives focused solely on reducing bias are insufficient. The problem is not only bias, but the asymmetry of power that allows others to decide which aspects of a person’s life matter and which are discarded.

In response to this dispute, I propose three lines of action. First, to openly acknowledge that there is a conflict over memory and that automated systems are not neutral (Iturmendi Rubia, 2023; Tello, 2023). Second, to demand transparency and clear limits: no one should be evaluated by a profile they cannot access or contest. Third, to strengthen community archives that return custody of photographs, messages, and audio recordings to migrants themselves, understanding memory not as an institutional resource but as a collective right (Lancho, 2022; Molnar, 2025).

References:

Amnesty International. (2024). The digital border: Migration, technology, and inequality. https://www.amnestyusa.org/reports/the-digital-border-migration-technology-and-inequality/ 

Brevini, B., Fubara-Manuel, I., Ludec, C. L., Jensen, J. L., Jimenez, A., & Bates, J. (2024). Critiques of data colonialism. In Dialogues in data power (Ch. 6). Bristol University Press. https://doi.org/10.51952/9781529238327.ch006

Cabrera-Medina, J., Magaña Frade, I., Díaz, A., & Cruz, I. (2024). Crossing digital borders: Technology in the migration process across the United States, Mexico, Honduras, and Chile. Frontiers in Political Science, 6, 1-16. https://doi.org/10.3389/fpos.2024.1487769

Gray, C. (2023). More than extraction: Rethinking data’s colonial political economy. International Political Sociology, 17(2), 1-20. https://doi.org/10.1093/ips/olad007 

Iturmendi Rubia, J. M. (2023). Algorithmic discrimination and its impact on human dignity and human rights. Special reference to immigrants. Deusto Journal of Human Rights, (12), 257-284. https://doi.org/10.18543/djhr.2910 

Lancho, C. (2022). Tecnocolonización: Algoritmos para la vigilancia masiva de personas migrantes [Technocolonization: Algorithms for the mass surveillance of migrants]. AlgoRace. https://www.algorace.org/2022/10/28/tecnocolonizacion-algoritmos-para-la-vigilancia-masiva-de-personas-migrantes/

Molnar, P. (2025). AI, surveillance and the privatisation of migration management. Mixed Migration Centre. https://mixedmigration.org/publications/mmr/2025/ai-surveillance-privatised-migration-management/  

Pfeifer, M. (2020). Intelligent borders? Securitizing smartphones in the European border regime. Culture Machine, 20, 1-22. https://culturemachine.net/vol-20-machine-intelligences/intelligent-borders-securitizing-smartphones-in-the-european-border-regime-michelle-pfeifer/   

Tello, A. (2023). Sobre el colonialismo digital: Datos, algoritmos y colonialidad tecnológica del poder en el sur global [On digital colonialism: Data, algorithms, and the technological coloniality of power in the Global South]. InMediaciones de la Comunicación, 18(2), 89–110. https://doi.org/10.18861/ic.2023.18.2.3523

Weiner, A. B. (1992). Inalienable possessions: The paradox of keeping-while-giving. University of California Press. https://doi.org/10.1525/california/9780520076037.001.0001
