WHO releases first report on ethics guidelines in health AI | Health

This week, the World Health Organization (WHO) published a report with guidelines on the ethical use of artificial intelligence in healthcare. The document is the result of two years of work by 20 specialists. According to WHO, the adoption of AI in health-related processes must be safe, transparent and accountable.

Flag of the World Health Organization (Image: Pierre Virot/WHO)

According to the report, there are six principles that should serve as a basis for governments, businesses and regulators. They are:

  • Protection of human autonomy: health decisions must be made by humans, not entirely by machines. In addition, AI should not be used to guide someone’s medical care without their consent, and patient data must be protected.
  • Safety promotion: AI developers must continuously monitor all tools, ensuring they work as intended and do not cause harm.
  • Transparency: developers should publish information about the design of artificial intelligence tools. Processes must be fully auditable and understandable by users and regulators.
  • Responsibility: mechanisms that determine who is responsible for any problems with AI tools are necessary.
  • Equity guarantee: the tools must be available in multiple languages, and they need to be trained on diverse datasets to avoid cases like the racially biased algorithms seen in recent years.
  • Sustainability: tools must be maintainable even in resource-poor healthcare systems, and developers must be able to provide frequent updates so the technology remains effective.

These principles should guide professionals across the many possible uses of artificial intelligence in healthcare, such as the development of applications, data processing, and the construction of tools for the prevention, treatment or diagnosis of diseases.

For WHO, technology should not be treated as a quick fix for health challenges — especially in low- and middle-income countries. The organization also warned about the risks of deploying AI models built in developed countries in economically different regions. In extreme cases, the careless use of artificial intelligence can endanger human lives.

Pandemic raised warnings about AI precautions

During the COVID-19 pandemic, there were clear examples of how the misuse of technology and AI can harm the well-being of a society. The Singapore government admitted that data collected for health purposes had been repurposed for criminal investigations.

In addition, several AI models built around the world, including in the United States and Europe, were trained on poorly substantiated data and improperly put into practice to detect SARS-CoV-2 infection — proving completely useless later on.

An impasse with technology companies

The WHO report also addresses big tech companies — such as Google and Apple — that have expanded their presence in the healthcare sector through artificial intelligence tools. The organization expressed concern regarding the ethical guidelines of these companies.

“Although these companies may offer innovative approaches, there is concern that they may eventually exert too much power in relation to governments, providers and patients,” the document states.

As a solution, WHO proposes that governments establish proper regulation and oversight mechanisms to hold the private sector accountable and responsive, ensuring transparent decisions and operations.

With information: The Verge, WHO.