From Palestine to Syria: How AI shapes environmental and social inequities in MENA
16 September, 2025
While AI promises climate and energy solutions, MENA communities bear hidden environmental and social costs, calling for context-aware, community-led approaches

Artificial intelligence (AI) is hailed as a potential tool to solve the world’s most urgent environmental challenges — from predicting climate disasters to optimising energy use and managing water scarcity.

Yet beneath the optimistic headlines lies a growing problem: the environmental cost of AI itself.

Training large-scale AI models consumes enormous amounts of electricity and water, generates significant carbon emissions, and requires critical minerals often mined under exploitative and ecologically destructive conditions.

These costs are rarely felt in the tech labs or corporate boardrooms where AI is developed. Instead, they are outsourced to the Global South — where infrastructure is fragile, ecosystems are stressed, and communities are least responsible for global emissions.

Countries like Palestine and Syria are not building AI labs or training massive models. They lack the resources, infrastructure, and political stability to participate fully in the AI boom. Yet they are still entangled in its consequences — as sites of resource extraction, as climate frontlines, and as places where AI tools are deployed without local input or benefit. The technology may be global, but its burdens are deeply place-based.

Before we ask whether AI can save the planet, we must ask: What kind of intelligence are we building — and for whom?

AI is hailed as a potential tool to solve the world’s most urgent environmental challenges — from predicting climate disasters to optimising energy use and managing water scarcity [Getty]

The hidden cost of AI

Despite being branded as "smart," today’s AI systems are anything but rooted in local wisdom or ecological awareness.

Behind every chatbot, climate model, or smart grid recommendation lies an infrastructure shaped by the logic of global extraction: energy-hungry data centres, high-emission supply chains, and vast volumes of data harvested from often unconsenting communities.

These technologies may appear immaterial, but their impacts are intensely material — and deeply unequal.

AI’s environmental footprint is felt in the mining zones of the Democratic Republic of Congo, in the water-stressed lithium basins of Chile, and in regions like Iraq, where climate stress already exceeds infrastructure capacity.

Meanwhile, the benefits — predictive analytics, smart infrastructure, or climate optimisation — are concentrated in the Global North.

According to a University of Massachusetts Amherst study widely reported by MIT Technology Review, training a single large language model can emit as much carbon as five average cars over their lifetimes.

These models also require millions of litres of water for cooling, straining freshwater supplies — an increasingly urgent issue in arid regions like Syria.

As demand for AI computing grows, so does its appetite for critical minerals like cobalt and lithium, extracted under dangerous and ecologically damaging conditions.

Climate colonialism and the illusion of 'AI for good'

AI is increasingly marketed as a tool to mitigate the effects of climate breakdown: optimising crop yields, predicting droughts, or supporting disaster response. These projects often fall under the banner of "AI for good."

Yet, the narrative obscures a deeper pattern: the expansion of technological systems that prioritise scalability over sovereignty, and efficiency over equity.

Much of this technology is developed in the Global North and deployed in the Global South with little local input or control.

Agricultural AI tools are frequently trained on datasets lacking regional specificity, making them unreliable in communities with different soils, weather, or farming practices.

Worse, climate-vulnerable areas are often chosen as pilot sites not because AI fits local needs, but because these communities have the least power to object.

This is climate colonialism: exporting the risks of innovation to communities already facing environmental degradation, while extracting data, labour, and legitimacy.

In Palestine, for instance, AI is not solving the climate crisis — it is part of the surveillance infrastructure used to monitor and control a population under military occupation.

AI becomes another layer in a system that erodes sovereignty while justifying itself through a language of humanitarianism and innovation.

Palestine: Surveillance in the name of 'security'

Nowhere is the disconnect between AI’s promise and its impact clearer than in Palestine.

Palestinians live under a regime of military occupation and ecological degradation, with restricted access to water, land, and energy, compounded by the growing threats of climate change.

Yet AI is not being used to alleviate these conditions; it is being used to enforce them.

Israeli authorities have deployed a network of AI-driven surveillance systems in the occupied territories. These include facial recognition, biometric databases, and predictive analytics technologies tested in East Jerusalem and Hebron.

The system turns Palestine into a testing ground that enables real-time tracking and control of Palestinian civilians, reducing lives to data points managed by unseen algorithms.

Syria: Experimentation without sovereignty

In Syria, where war has decimated infrastructure and governance, AI is not developed locally but deployed by third-party actors.

International agencies and humanitarian organisations use AI-powered systems like Hala Systems’ Sentry to detect aircraft and warn civilians, or platforms like SKAI to assess damage from satellite imagery.

These tools are often life-saving, but they also raise deeper concerns about technological dependency and consent. Designed and managed abroad, these systems typically lack Syrian oversight or long-term integration.

In Syria, as elsewhere, this means that decisions about AI should not rest with distant actors, but with the communities directly affected by its use.

As AI expands across borders, its benefits and burdens are distributed along familiar lines: power, extraction, and inequality. Palestine and Syria may not be shaping the future of AI — but its consequences already shape them.

Anisa Abeytia is a researcher and policy analyst working at the intersection of AI governance, red-teaming, and social justice. Her work focuses on building ethical, community-centred technology that supports inclusive innovation and ecological sustainability. Her background spans digital inclusion, humanitarian diplomacy, refugee advocacy, and US-Syria policy