The Pope in a gangster outfit, Trump annexing territories from Russia, Pashinyan speaking Turkish: these and similar scandalous materials often cause a stir in the information domain, and their circulation and online discussion indicate that a significant part of the audience believed what they saw and heard.

All the abovementioned examples are deepfakes.
But what is a deepfake?
It is already well known that artificial intelligence can use the voice, appearance and other data of people, especially famous ones (celebrities, politicians, etc.), to create a video, photo or audio recording that often looks very realistic.
In reality, however, such material is either manipulated, i.e. edited, or completely fabricated. In essence, a deepfake is AI-generated content that spreads false information about politics or other areas of public life while appearing credible to the audience.
Deepfakes also create great opportunities for those engaged in criminal activities, e.g. financial fraud, in which deepfake recordings or videos are used to withdraw money from banks or companies.
Pornographic deepfakes are also widespread: people’s faces are placed on pornographic videos without their consent or even knowledge, violating their personal rights, among others.
In such cases, the creators of such content mainly extort money from the victim by threatening to spread the deepfake.
Despite the spread and widespread use of deepfake technology in the Armenian information domain and the problems associated with it, there is still no relevant legislation in our country, and it is not yet clear whether the problem can be addressed within the framework of existing legal regulations.
Information security expert Samvel Martirosyan has addressed such deepfakes on his Facebook page and given advice on how to respond to them.
The Fact Investigation Platform has repeatedly checked deepfake videos featuring politicians and celebrities, as well as ones related to the financial sector, and has presented methods and online tools for identifying and verifying them.
Not only fact-checkers, but also governments and international organizations have already begun to fight against deepfakes by updating their legislation on information security and adopting new conventions.
FIP.am has studied the best international legal practices for combating deepfakes.
USA: no comprehensive regulation so far
There is no comprehensive federal law to regulate this issue in the United States yet. According to American sources, a relevant bill is underway and aims to protect national security from the threats posed by deepfake technology.
Nevertheless, some laws addressing the issue have been adopted, including at the state level.
In particular, the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act stipulates that the Director of the National Science Foundation shall support research and standards relating to generative adversarial networks, the materials they produce and similar technologies that may emerge in the future, as well as their use.
The Deepfakes Accountability Act proposes to make it mandatory to label AI-generated content and to warn the audience when material was created with AI, especially if it could be misleading or harmful.
In the state of California, for example, there are regulations covering both the political sphere and privacy. It is prohibited, for instance, to distribute deceptive deepfakes of candidates within 60 days before an election, in which case the injured party can go to court.
As for non-consensual intimate content, the state’s Civil Code allows a person to file a lawsuit if someone uses their face without their consent to create a pornographic deepfake.
In some states, such as Texas, the creation and distribution of deepfakes with the intent to cause harm is a criminal offense.
The creation and distribution of deepfakes is also criminalized in the United Kingdom and South Korea if they contain pornographic content or violate a person’s privacy and a number of other rights.
Council of Europe Convention on Artificial Intelligence
In 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, to which Armenia later acceded.
It is, in effect, the first international instrument aimed at ensuring the protection of fundamental human rights in the field of artificial intelligence (AI).
The main objectives of the Convention include:
Protection of human rights: to ensure that AI systems do not violate fundamental human rights.
Integrity of democratic processes and respect for the rule of law: to prevent the use of AI that could undermine democratic institutions or the rule of law.
Non-discrimination: to prohibit discrimination by AI systems on the grounds of race, religion, sex or any other grounds.
Privacy and personal data protection: to prevent the misuse of personal data through AI.
China: AI-generated content labeling requirement
A law adopted in China in 2023 requires clear labeling of AI-generated content, including through watermarks and metadata, in order to combat disinformation and ensure transparency.
Distributing AI-generated content in violation of these requirements is punishable by fines.
Thus, although AI technologies are improving day by day and enabling the creation of increasingly realistic deepfakes, there is still no comprehensive regulation anywhere in the world.
At the same time, the legislative regulations in a number of countries suggest that the trends in the fight against deepfakes include:
Transparency requirements: mandatory, proper notification of the audience about AI-generated content.
Criminalization: prosecuting the creation and distribution of deepfake content without the consent of the person depicted.
Electoral safeguards: laws preventing the use of deepfakes during electoral processes.
Platform accountability: identifying and blocking platforms that create deepfakes with the intent of causing harm.
Nane Manasyan