Ceará law requires artificial intelligence to be under human supervision

A new state law approved by the Legislative Assembly of Ceará and sanctioned by Governor Camilo Santana (PT) requires systems based on artificial intelligence to be supervised by humans.


Glasses on a table in front of a screen showing code (Image: Kevin K./Unsplash)

Law 17.611/2021, authored by Deputy Queiroz Filho (PDT), establishes responsibilities and guidelines for the technology and applies both to companies based in Ceará and to those whose systems are in use or operation in the state. It was sanctioned on August 11 of this year.

The text is short and contains only four articles. Supervision is addressed in Article 2, item IV, which sets out the following guideline: “ensure that the systems are always managed by, and subordinate to, humans, with human autonomy and supervision maintained”.

In an interview with TV Assembleia, the deputy says that the law is not a regulation, but rather establishes principles and guidelines on respect for human beings, non-discrimination, and accountability.

Software industry association points to legal uncertainty

The bill was criticized by the Brazilian Association of Software Companies (Abes), which claims that the text creates legal uncertainty and inhibits innovation.

“The risk of judicialization will make the state less attractive for investment, especially in the promising startup market,” says Rodolfo Fücher, president of the group, in a statement. Loren Spíndola, coordinator of the AI working group at Abes, argues that regulation of the subject should be centralized at the federal level.

The statement recalls that two bills already deal with the matter: 21/2020, in the Chamber of Deputies, and 872/2021, in the Senate. Both have already been the subject of public hearings to debate the topic. Abes also mentions the Brazilian Strategy for Artificial Intelligence (EBIA), launched by the federal government in April of this year.

LGPD provided for human review, but the provision was vetoed

The General Data Protection Law (LGPD) even provided for a right to human review of automated decisions made in the processing of personal data. However, the provision establishing this was vetoed by President Jair Bolsonaro in 2019, when he sanctioned the law creating the National Data Protection Authority (ANPD).

At the time, he argued that the measure ran counter to the public interest, as it would make the business models of many companies, especially startups, unfeasible and could hinder the offering of credit.

Artificial intelligence can reproduce bias

Attempts to regulate artificial intelligence have their reasons, even if they don’t always find the best way to do it. After all, there are many examples of how the technology can produce unexpected effects, even contrary to what was intended.

One of the best-known cases involves an algorithm used by health plans in the US: when determining who should have access to special care programs, it ended up prioritizing healthier white patients over black patients with more complications. This happened because the system was trained to estimate health risk from healthcare costs, without taking into account that the country’s black population is poorer than the white population and therefore cannot spend as much on treatment.
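To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (using numpy and scikit-learn, with entirely invented numbers, not the real system): two groups have the same average health need, but one spends less per unit of need, so a model trained to predict cost ranks that group as lower risk and under-enrolls it in the care program.

```python
# Hypothetical illustration of the proxy problem described above: a model is
# trained to predict healthcare *spending*, but spending understates the
# health *need* of a group that faces barriers to care. All data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical distributions of true health need.
group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # underlying health need

# Group B spends less per unit of need (barriers to access), so its costs
# systematically understate its need.
spend_rate = np.where(group == 1, 0.6, 1.0)
prior_cost = need * spend_rate + rng.normal(0.0, 0.1, n)    # feature
future_cost = need * spend_rate + rng.normal(0.0, 0.1, n)   # training label (proxy)

# The model never sees group membership, only spending-based features.
model = LinearRegression().fit(prior_cost.reshape(-1, 1), future_cost)
risk_score = model.predict(prior_cost.reshape(-1, 1))

# Enroll the top 10% of "risk scores" in the special care program.
enrolled = risk_score >= np.quantile(risk_score, 0.90)

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(f"{name}: mean need = {need[mask].mean():.2f}, "
          f"enrolled = {enrolled[mask].mean():.1%}")
# Both groups have the same average need, but group B is enrolled far less
# often, because the objective the model optimizes (cost) encodes the gap.
```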

Recently, Twitter was embroiled in controversy when users noticed that the network’s image-cropping tool kept highlighting white faces. The company says its tests found no bias, but it has tried to address the problem by enlarging the preview images in the interface.
