EU Rights Agency Warns of Risks of Using AI

The European Union’s rights watchdog has cautioned against using artificial intelligence in predictive policing, medical diagnoses and targeted advertising as the bloc weighs rules for next year to address the challenges the technology may bring.

Although AI is widely used by law enforcement agencies, human rights groups claim that authoritarian regimes abuse it for discriminatory surveillance. They also allege that it violates people’s fundamental human rights, as well as data privacy rules.

The Vienna-based EU Agency for Fundamental Rights (FRA) has urged policymakers to clarify how existing rules apply to AI and to ensure that future AI laws protect fundamental rights.

‘AI is not infallible, it is made by people and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions’, said FRA Director Michael O’Flaherty in a statement.

The FRA report, issued on Monday, comes as the European Commission, the EU’s executive arm, mulls laws to govern high-risk sectors such as healthcare, energy, transport and parts of the public sector.

The agency insisted that rules on AI must respect all fundamental rights, with safeguards in place to ensure this. They should also guarantee that people can challenge decisions taken by AI, and that companies can explain how their systems reach automated decisions.

The agency also said more research should be done on the potentially discriminatory effects of AI so that Europe can guard against them, and that it must be clarified how data protection rules apply to the technology.

FRA’s report draws on more than 100 interviews with public and private organizations using AI. The analysis is based on the use of AI in Spain, Finland, the Netherlands, France and Estonia.

By Marvellous Iwendi.