How Europe is planning copyright for artificial intelligence


Artificial intelligence is causing a real headache for legislators: the handling of personal data, training on works protected by intellectual property, and the way these tools deliver their answers, wrapped in polished prose that invites you to believe they are correct when sometimes they are wrong; some, like ChatGPT, also offer no sources against which to verify them. Europe wants to take the lead in legislating, and the legal framework to demand more of these tools is already well advanced. One of the most striking changes is that AIs like ChatGPT will have to reveal their sources.

Artificial intelligence has many open fronts. The amount of personal information collected, and the difficulty of accessing or revoking it, has already earned ChatGPT a ban in Italy, while other European countries watched closely in light of EU privacy law. Then there is how these models train and learn to reproduce content: if they use works protected by intellectual property to later "invent" new ones, somewhere in their infinite range of outputs they can produce a work so similar to the original that it amounts, in short, to plagiarism. Or they may draw directly on open-source projects, something over which OpenAI and Microsoft are already involved in litigation. There is also the danger of answers that seem correct but are not, with no way to tell at a glance, because ChatGPT cites no sources to back up what the chatbot tells you. As a consequence, some are calling for a halt to major artificial intelligence experiments until the technology is more reliable and there is a better understanding of how it works and the risks involved in using it.

ChatGPT is not the first, but it is by far the most mainstream artificial intelligence in existence; in fact, it is the fastest-growing platform in the history of the Internet. That is why Italy's ban, and the later lifting of the veto after OpenAI made changes to its chatbot, are being watched carefully. In Europe, a working group dedicated to ChatGPT has already been created, and the EU as a whole is preparing an Artificial Intelligence Regulation, of which there is already a new draft, as reported by Euractiv, that gives an idea of the progress of the negotiations.

An advanced regulation with important changes for AI

Important consensus has been reached in the European Parliament. If the new regulation goes ahead, two requirements imply a fundamental change for artificial intelligence: development teams will have to respect copyrighted works, and it will be mandatory to reveal the origin of the data used to train these language models. Thus, developers will have to document with a "sufficiently detailed summary" what content protected by intellectual property has been used to train their language models, according to The Wall Street Journal.

What will happen if an artificial intelligence creates content from protected content? According to The Wall Street Journal, the artist would then be in a position to claim a share of the profits derived from that inspiration, with their original work as the source. As the co-director of the agency's project in the area of artificial intelligence has stated, the aim is to "increase accountability, transparency and scrutiny of these models".


And that is only the tip of the iceberg. The two previous requirements are important, but the list of measures is long and exhaustive. In general terms, the regulation will include principles of human oversight, technical safety, security, transparency, social welfare, diversity, non-discrimination and fairness. General-purpose models will also be subject to obligations such as being developed in accordance with EU legislation and fundamental rights (including freedom of expression). Practices that involve "unacceptable" risks, such as intentional manipulation, emotion recognition or predictive policing, will also be prohibited.

Service providers will have to meet requirements on risk, transparency and data management, with special emphasis on systems used to manage critical infrastructure such as energy or water, given their environmental impact. As for the processing of sensitive data such as sexual orientation or religion, this data must be anonymized or encrypted to avoid bias, processed in a controlled environment and deleted afterwards, and the reason for processing it must be justified.
