
Adopting a Human-centred Approach to the Development and Use of AI

Distinguished Guests,
Ladies and Gentlemen,

First, I would like to thank the organizer for inviting the International Committee of the Red Cross (the ICRC) to this forum. It is my great honor to have this opportunity to share with you a humanitarian perspective on the development and use of AI.

The ICRC is an impartial, neutral and independent humanitarian organization. Since 1863, the ICRC’s sole objective has been to ensure humanitarian protection and assistance for people affected by armed conflict and other situations of violence.

But why is the ICRC, a humanitarian organization, talking about AI?

Simply put, the rapid development of AI is creating significant opportunities, as well as new risks, for the humanitarian sector. The ICRC is committed to exploring if, how, when and where advances in AI can help us achieve our mission. At the same time, we feel compelled to understand and mitigate the risks that AI poses to the lives and dignity of people living amid conflict or other situations of violence, and to promote the responsible use of AI by all parties concerned.

AI also has many potential benefits for humanitarian organizations. For example, the ICRC has developed its own chatbot to facilitate staff access to information and support report drafting. AI-based visual recognition technologies are now used to identify missing civilians and combatants by analysing photographs of documents, military identification tags, and written reports recovered from war zones. AI is also being tested to optimise aid delivery routes, improve resource allocation across distributed networks, and support scenario planning and simulations.

At the same time, AI systems trained on incomplete, outdated, erroneous, or biased data can produce faulty predictions and poor decisions. This may lead to the over-representation of certain populations and bias against particular races, nationalities, genders, or age groups, hindering people's access to aid or even exposing them to greater risk. For communities already living through conflict and crisis, such risks are unacceptable.

The ICRC is also evaluating the risks posed by belligerents' development and use of AI, which shapes the environment we work in. For example, generative AI and other digital technologies have greatly accelerated the spread of harmful information, fuelling social polarisation, undermining the trust, acceptance and safety of humanitarian workers, and increasing the risk of civilian harm.

In 2024, the ICRC established its institutional AI Policy to guide the exploration of AI in supporting its humanitarian mission. The policy is designed to ensure that all use of AI across the organization remains responsible, safe, coherent, and most importantly, human-centred.

By adopting this human-centred approach, the ICRC upholds a simple but powerful principle: technology must serve humanity. No matter how advanced AI systems become, humans must always remain in control of decisions that affect people’s lives, rights, and dignity. While AI can assist in making humanitarian work more effective, it cannot replace human judgement, empathy, and responsibility.

For a humanitarian organization, maintaining and increasing physical and emotional proximity to affected people is crucial to building the relationships of trust that enable it to respond to an evolving palette of needs. AI cannot replace human-to-human interaction. The ICRC therefore pays special attention to ensuring that the use and development of AI solutions does not jeopardize its ability to demonstrate humanity and empathy through direct, in-person human engagement.
