By: Manish Thakre
The world is becoming increasingly urbanized, with two-thirds of global population growth expected to occur in cities between now and 2050. Cities are therefore set to become among the largest deployers of, and markets for, artificial intelligence (AI) technologies, including those used to improve public services. In the United States and other advanced economies, urban local bodies are investing in AI to improve traffic management, public safety, energy and waste management, customer service, predictive maintenance, urban planning, cybersecurity, fraud detection, and data-driven decision-making. Cities in the developing world are also experimenting with practical AI applications. In India, for example, cities such as Chandigarh and Pimpri Chinchwad are deploying AI-enabled traffic systems to detect violations and issue automated e-challans (digital traffic tickets). In Chile, Puerto Varas is piloting an AI-assisted tool to improve access to medical appointments and coordination within public health services.
While AI can enhance planning, service delivery, and citizen engagement in cities and local communities, its benefits depend on strong governance frameworks, ethical safeguards, and informed procurement. Cities increasingly rely on private vendors for AI systems, often without adequate regulation or oversight. A 2023 study of 170 local governments found that only 16% had published AI policies, despite widespread use of AI technologies. As reliance on private vendors grows, building institutional capacity — particularly among officials responsible for procurement — is essential to ensure these technologies serve the public interest. Decisions about which AI systems to buy, and under what conditions, determine whether these systems benefit citizens or produce harmful outcomes.
When deployed without adequate oversight, AI systems can introduce risks related to bias, discrimination, and privacy violations. In 2017, Rotterdam introduced an AI system to detect welfare fraud that disproportionately flagged women, young parents, and residents with limited Dutch proficiency as high-risk. Public scrutiny and an ethics review led to the system’s suspension in 2021, highlighting how algorithmic tools can exacerbate discrimination when deployed without transparency or safeguards. Similarly, in San Diego, the city council passed an ordinance in 2025 banning the use of AI-driven rent-pricing software that leverages private competitor data, citing evidence that such systems contributed to algorithmic price-fixing during a housing affordability crisis. As these and many more examples demonstrate, local governments frequently lack the internal expertise needed to assess complex AI products. Vendors often provide limited disclosure about how models function, what data they use, or how they evolve over time. Because AI systems evolve, contracts must account for updates and emerging risks.
Some cities are taking the lead in adopting and enforcing AI policies. Under its AI policy, San José maintains a public, actively updated inventory and algorithm register for transparency. The city's framework includes vendor fact sheets and review processes to assess effectiveness, equity, privacy, and accountability, while also emphasizing staff empowerment through education and training. The city is also a co-founder of the GovAI Coalition, which brings together local, state, and federal agencies to share templates, governance tools, and best practices for responsible AI adoption. In Europe, similarly, Barcelona's municipal strategy integrates AI into city services through a public algorithm register for transparency and rights-respecting procurement clauses.
These examples carry relevance for India and other countries in the developing world. In its national AI strategy, #AIforAll, NITI Aayog, the policy think tank of the Government of India, highlights priority sectors — including smart cities and mobility — where AI could address major societal challenges. However, the strategy also identifies key barriers to adoption, including limited AI expertise, fragmented data ecosystems, high resource costs, and gaps in privacy and data governance frameworks. As positive and negative examples from around the world demonstrate, local governments must prioritize training for procurement and oversight teams, strengthen cross-departmental review processes, and engage independent experts to evaluate AI systems before deployment. They can also learn from the experiences of cities that have already adopted similar technologies. Such steps are essential to maintaining public trust and legitimacy.
AI can help cities improve efficiency, allocate resources more effectively, and respond to complex urban challenges. Cities must therefore treat AI governance as a public responsibility, not merely a technical upgrade. It is imperative for cities to carefully scrutinize proposed AI systems, assess institutional readiness, consult relevant stakeholders, and establish mechanisms for monitoring and accountability before deployment. When governed responsibly, AI can support equitable service delivery and improve the well-being of those who are most often excluded from decision-making.
Manish Thakre is an independent consultant and collaborator with Equitable Cities Collaborative.