
TAMING NEURAL NETWORKS

Why will ‘playing’ with artificial intelligence in Azerbaijan now carry a fine?


01.04.2026

We live in a reality where a video or a photo is no longer ironclad proof, but often the product of complex algorithms. The decision by the Milli Majlis of Azerbaijan to introduce fines (from ₼80 to ₼150) for publishing content created by artificial intelligence (AI) without a special label is a timely attempt to protect us from the advancing ‘era of lies’. When voices and faces are perfectly forged on a computer, the law must intervene to prevent chaos in the information space.

 

AI versus human

Science confirms that we are practically defenceless against a high-quality forgery. Researchers at the Massachusetts Institute of Technology have found that in seven cases out of ten the human brain takes a skilful deepfake at face value. The English term ‘deepfake’ is a blend of two concepts: deep learning and fake. It denotes a technology for creating realistic photos, videos and audio in which one person’s face or voice is replaced with another’s using neural networks. Simply put, a deepfake is a ‘digital mask’ that AI places over real footage. Without the mandatory ‘Created by AI’ label, an ordinary person will hardly be able to tell a high-quality fake from the original, and the credibility of any news item or photograph is increasingly called into question.

This is where C2PA technology comes to the rescue: an international standard created by giants such as Microsoft and Adobe, which experts call a ‘digital passport’ for content. It makes it possible to know exactly who created what appears on the screen, when, and with what tools.

 

How does it work?

For the average person, interacting with the ‘digital passport’ will be extremely simple and will require no programming skills. A special icon will appear on a photograph or in a video’s description: the letter ‘i’ in a circle or the ‘Cr’ (Content Credentials) badge. By clicking on this symbol directly in a browser or social network, the user will see the file’s detailed provenance record. The system will show whether the photo was taken with a real smartphone or camera, whether AI was used in processing, and whether the image has undergone any serious edits that alter its original meaning. If the file is authentic, the system will confirm its integrity; if it has been faked or the label has been removed, it will issue a warning.
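For readers curious about what such a check involves under the hood, below is a minimal, illustrative sketch in Python. It is not the official C2PA verification library: it merely scans a JPEG for the APP11 marker segments in which Content Credentials are embedded and looks for the ‘c2pa’ manifest label. The file name is hypothetical, and a real verifier would also validate the cryptographic signatures inside the manifest.

```python
import sys

def has_c2pa_credentials(path: str) -> bool:
    """Rough heuristic: does this JPEG carry embedded C2PA metadata?

    C2PA manifests are stored in JPEG APP11 (0xFFEB) marker segments
    as JUMBF boxes labelled "c2pa". We only look for that footprint;
    a real verifier would also check the cryptographic signature chain.
    """
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):          # not a JPEG at all
        return False

    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                        # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:                         # start of scan: image data begins
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 segment with a C2PA label
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    photo = sys.argv[1] if len(sys.argv) > 1 else "photo.jpg"
    if has_c2pa_credentials(photo):
        print("Content Credentials found: the 'digital passport' is present.")
    else:
        print("No Content Credentials: the origin cannot be confirmed.")
```

In everyday life, of course, the browser or the social network performs this kind of check for you and simply shows the ‘Cr’ icon.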

Today, this standard is already being actively adopted by the world’s largest platforms, turning them into a kind of digital detector. TikTok was among the first to start automatically recognising C2PA labels and marking videos with an ‘AI-generated’ tag. Instagram and Facebook have joined the movement, integrating the recognition system into their feeds, as have LinkedIn and Pinterest, where authenticity verification has become an important tool for protecting authorship and reputation.

Google is also a member of the coalition. YouTube uses its own internal labels to flag realistic AI content, but full synchronisation with portable C2PA ‘passports’ is still under way; in 2026 the service is actively rolling out tools for verifying the authenticity of videos. As for X (formerly Twitter), although the company stood at the origins of the initiative, its support for C2PA remains selective for now: authenticity checks there are more often carried out through Community Notes than through automatic metadata.

Why is this important to know? For the average user, it means that familiar applications will soon become ‘digital detectors’. If a post on Instagram or TikTok carries the special ‘Cr’ icon, you can be sure that the platform has already checked the file’s ‘digital passport’ and is telling the truth about its origin.

However, it is worth remembering that not all platforms work equally with this standard. Sometimes, when sending a photo via messengers (such as WhatsApp or Telegram), this data can be ‘stripped’ if the application does not support preserving such metadata. That is why Azerbaijan’s law on labelling AI content is so important: it obliges authors to preserve these ‘passports’ so that the chain of trust is not broken.
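To see why the ‘passport’ can vanish in transit, here is a small, purely illustrative experiment. It assumes the Pillow imaging library is installed and that ‘original.jpg’ is a hypothetical photo carrying Content Credentials: re-encoding the picture, as many apps do when compressing an upload, writes a fresh file, and the segments holding the credentials are simply not copied across.

```python
# Illustrative experiment (assumes Pillow is installed and that
# "original.jpg" is a hypothetical photo carrying Content Credentials).
from PIL import Image

def carries_credentials(path: str) -> bool:
    """Crude check: does the raw file still contain the 'c2pa' label?"""
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

print("before re-encoding:", carries_credentials("original.jpg"))

# Re-encode the image the way many apps do when compressing an upload.
# Pillow writes a new JPEG and does not copy the APP11 segments in
# which the C2PA manifest was stored, so the 'passport' is lost.
Image.open("original.jpg").save("resent.jpg", quality=85)

print("after re-encoding: ", carries_credentials("resent.jpg"))
```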

 

What will control look like?

It is likely that state supervision of compliance with the new rules will proceed along two main lines. The first is automatic monitoring of the online space: special detector programs will scan Azerbaijani internet resources, instantly identifying materials whose digital signatures are missing or have been deliberately erased. The second concerns the responsibility of the platforms themselves: large websites and media outlets will be required to integrate C2PA verification tools into their editorial systems. This will force authors and bloggers to be honest with their audiences.
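What might such automatic monitoring look like in practice? Purely as an illustration, and reusing the crude heuristic from the sketch above, a detector could walk through a folder of collected media and flag every file published without any provenance data at all. The folder name and the report format are invented for this example; a real system would verify signatures and consult platform labels rather than just look for the ‘c2pa’ string.

```python
from pathlib import Path

def has_c2pa_footprint(path: Path) -> bool:
    """Crude heuristic reused from the earlier sketch: look for the
    'c2pa' label anywhere in the raw bytes of the file."""
    return b"c2pa" in path.read_bytes()

def scan_folder(folder: str) -> None:
    """Sort images into two piles: with and without provenance data.

    A real monitoring system would verify the signature chain and the
    platform's own AI labels; this only reports which files carry no
    Content Credentials at all.
    """
    for file in sorted(Path(folder).glob("*.jpg")):
        status = "credentials present" if has_c2pa_footprint(file) else "NO LABEL - needs review"
        print(f"{file.name:30} {status}")

if __name__ == "__main__":
    # Hypothetical folder of media collected from monitored resources.
    scan_folder("downloaded_media")
```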

In the long term, such digital labelling should become as familiar and mandatory a ‘label’ as the list of ingredients on products in a shop. And fines in this matter are a necessary tool for shaping a new culture of media literacy. In a world where a robot can easily pretend to be a neighbour, a politician or a news anchor, the right to verified and authentic information becomes a basic condition for the safety of every citizen.


