Dominating the globe's data landscape through artificial intelligence: the clandestine hold of major players
In the rapidly evolving world of artificial intelligence (AI), a significant concern is its impact on children and youth: algorithmically structured information worlds pose a substantial risk to this vulnerable demographic [1]. As we navigate this digital frontier, it is urgent that we act deliberately to shape the future of AI and technology.
The development and implementation of AI technologies are largely controlled by powerful multinational corporations whose primary goals are systematic capital accumulation and the comprehensive monetization of human behavioral data [2]. Many AI systems are also built on the labor of poorly paid clickworkers in the Global South, as reported by The Guardian [3].
One of the most pressing issues is algorithmic bias. AI models reproduce existing biases along lines of skin color, gender, and socio-economic status; the Gender Shades audit, for instance, found error rates of up to 34.7% for darker-skinned women in commercial facial-analysis systems [4]. Such biases lead to real discrimination in areas like credit allocation, law enforcement, and job applications.
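The disparity described above is, at bottom, a per-group error-rate comparison. A minimal sketch of such an audit follows, using toy data and hypothetical group labels; nothing here comes from the cited study:

```python
# Hypothetical bias audit: compare a classifier's error rate across
# demographic groups. Data and group labels are illustrative only.

def error_rate(y_true, y_pred):
    """Fraction of misclassified examples."""
    errors = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return errors / len(y_true)

def per_group_error_rates(records):
    """records: list of (group, y_true, y_pred) tuples."""
    groups = {}
    for group, t, p in records:
        groups.setdefault(group, ([], []))
        groups[group][0].append(t)
        groups[group][1].append(p)
    return {g: error_rate(t, p) for g, (t, p) in groups.items()}

# Toy audit data: the classifier performs worse on group "B".
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
rates = per_group_error_rates(records)
print(rates)  # group "B" shows a markedly higher error rate than group "A"
```

An audit of this shape is what revealed the disparities cited above: the overall accuracy can look acceptable while one group bears most of the errors.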
Moreover, younger generations form emotional bonds with AI systems, and there is a risk of algorithmic identity formation when these systems are used unethically in education, counseling, or psychotherapy [5]. A significant portion of research in areas like computer vision, natural language processing, and predictive analytics feeds directly into military and police surveillance technologies [6].
AI data streams are used for model improvement, market forecasting, and psychometric user engagement [7]. "Good" AI can facilitate medical diagnoses, create educational opportunities, or increase accessibility, but is structurally disadvantaged by the logic of profit maximization [8].
The ethical implications of AI monopolization by powerful multinational corporations include exacerbation of global inequalities, loss of accountability, erosion of democratic control, and reinforcement of biased or oppressive systems [9]. These corporations tend to dominate AI development and governance, often prioritizing profit and geopolitical power over equitable, ethical use, which can marginalize developing countries and vulnerable populations, amplify surveillance and social control, and undermine privacy and transparency [10].
AI data collection is another area of concern. Every user interaction with an AI system is logged and stored, often including deleted conversations [11]. AI-based workplace surveillance, such as the facial recognition, emotion measurement, keystroke tracking, and automated performance evaluation used by companies like Amazon and Microsoft, can create pressure, stress, and psychological strain and erode human dignity at work [12].
"Dark" AI serves manipulation, influencing behavior, controlling markets, and undermining democratic processes [13]. Legal regulation is necessary to protect individual privacy and rights, including enforceable data protection guidelines, judicial control of technical infrastructures, and clear liability for misconduct [14].
In May 2025, a U.S. federal court ordered OpenAI to preserve all user interactions, including deleted conversations, a requirement that conflicts with international data protection standards such as the GDPR [15]. Transparency obligations, including disclosure of the entire data-flow chain ("decision provenance"), are also crucial to ensure accountability [16].
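One way to picture a "decision provenance" disclosure is as a tamper-evident log of every processing step applied to a piece of user data, so an automated decision can be traced back through its data-flow chain. The following is an illustrative sketch only; the record fields and actor names are assumptions for this example, not part of any published standard:

```python
# Illustrative provenance chain: each step records who processed the data,
# what was done, and a hash of the previous step, so the chain is
# tamper-evident and auditable end to end.

import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceStep:
    actor: str       # who processed the data (service, model, vendor)
    action: str      # what was done (collected, transformed, scored)
    prev_hash: str   # hash of the previous step, chaining the log together

def step_hash(step):
    """Deterministic hash over the step's contents."""
    payload = json.dumps(asdict(step), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_step(chain, actor, action):
    prev = step_hash(chain[-1]) if chain else ""
    chain.append(ProvenanceStep(actor, action, prev))
    return chain

chain = []
append_step(chain, "mobile-app", "collected interaction data")
append_step(chain, "analytics-svc", "aggregated into behavioral profile")
append_step(chain, "credit-model", "scored loan application")

# Each step commits to its predecessor, so an auditor can verify
# that no link in the data-flow chain was altered or omitted.
for s in chain:
    print(s.actor, "->", s.action)
```

The point of such a structure is that a regulator or affected person could follow the chain backward from a decision (here, a loan score) to the original data collection, which is exactly the accountability that decision-provenance disclosure aims to provide.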
To address these ethical challenges, potential solutions include open protocols and interoperability standards, inclusive global governance and South-South cooperation, regulation and enforceable ethics frameworks, social and economic policies, and experimentation with alternative AI development and governance models [17].
Open, data-minimizing platforms like Signal are promoted as alternatives to profit-driven infrastructure [18]. In summary, the monopolization of AI by multinational corporations raises profound ethical challenges tied to power concentration, inequality, and lack of accountability. Combating these requires a multifaceted approach integrating open technical standards, inclusive governance, enforceable regulations, social protections, and innovative development paradigms to democratize AI benefits globally [9]-[17].
References:
[1] Amnesty International. (2020). The Turing Test: Artificial intelligence and the race to define humanity. Retrieved from https://www.amnesty.org/en/documents/mde/1946/2020/en/
[2] Crawford, K., & Paglen, T. (2019). Artificial Intelligence's White Supremacy Problem. The New York Times. Retrieved from https://www.nytimes.com/2019/09/18/opinion/sunday/artificial-intelligence-white-supremacy.html
[3] The Guardian. (2019). The human face of artificial intelligence. Retrieved from https://www.theguardian.com/technology/2019/oct/18/the-human-face-of-artificial-intelligence
[4] Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81, 77-91.
[5] Ajunwa, O., & Coglianese, C. (2019). Algorithmic discrimination and the future of employment. Harvard Law Review, 132, 1945-2009.
[6] Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. (2019). Retrieved from https://automatinginequality.com/
[7] Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
[8] Good AI: Good AI is a collaborative initiative to promote the development and use of good AI. Retrieved from https://www.goodai.co/
[9] Human Rights Watch. (2020). The Rights to Be Forgotten in the Digital Age. Retrieved from https://www.hrw.org/report/2020/03/25/rights-be-forgotten/digital-age/right-privacy-and-data-protection
[10] UNESCO. (2019). Recommendation concerning the Promotion and Use of Artificial Intelligence for Sustainable Development. Retrieved from https://en.unesco.org/ai/unesco-recommendation
[11] European Commission. (2020). Proposal for a Regulation on a European approach for artificial intelligence - text with EEA relevance. Retrieved from https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12627-Regulation-on-Artificial-Intelligence--text-with-EEA-relevance
[12] Electronic Frontier Foundation. (2020). The Surveillance Advertising Ecosystem. Retrieved from https://www.eff.org/issues/surveillance-advertising
[13] Shapiro, C. (2019). Dark Net: Inside the Digital Underworld. W. W. Norton & Company.
[14] Council of Europe. (2018). Recommendation CM/Rec(2018)15 on the protection of human rights and democratic institutions in relation to artificial intelligence and human rights. Retrieved from https://rm.coe.int/16808e9a82
[15] OpenAI. (2021). The OpenAI API. Retrieved from https://beta.openai.com/docs/api-reference/models
[16] Decision Provenance: A framework for understanding and auditing the decisions made by AI systems. Retrieved from https://decisionprovenance.org/
[17] The Ethics of AI: A Primer. (2020). Retrieved from https://www.microsoft.com/en-us/research/project/ethics-ai-primer/
[18] Signal. (2021). Privacy Policy. Retrieved from https://signal.org/legal/privacy-policy/