AI CENSORSHIP DUE?

Ensuring children's right to develop their intellectual faculties requires adopting an age of consent to use Artificial Intelligence

01.02.2026

In recent years, the architecture of European education has faced a challenge comparable in force to the advent of the internet, yet far more dangerous in its impact on human capital: the integration of artificial intelligence (AI) into children's daily lives. While technology giants compete in algorithmic perfection, the governments of France, Spain, and Greece are beginning to realise that, for the unformed child's mind, AI is becoming not an assistant but an 'intellectual crutch' that can lead to the atrophy of basic cognitive skills. The European Parliament has taken initiatives to restrict access to neural networks for individuals under sixteen, and in some countries strict bans have been introduced for children under ten. These measures are driven not by fear of progress, but by the necessity to protect a child's fundamental right to develop their own thinking.

To date, UNESCO has formally asked governments to set the minimum age for the use of AI in schools at thirteen. The organisation has emphasised that earlier exposure to these technologies carries potential risks for emotional and cognitive development. A number of European states, however, are going further, considering national barriers to shield primary-school-age children from the potential impact of algorithms.

At the end of 2025, the European Parliament supported an initiative to introduce a unified 'digital minimum' age of sixteen for access to social media and AI companions without parental consent. France, the driving force behind these changes, plans to completely restrict access to platforms using recommendation algorithms and generative AI for those under fifteen by September 2026. Similar measures are being implemented in Australia, where one of the world's strictest laws has recently come into effect, restricting access to social networks and a range of AI services for individuals under sixteen. Meanwhile, Denmark and Greece are actively discussing lowering this threshold to thirteen or fifteen, depending on the type of technology. Even Türkiye is considering a state-level ban on AI applications for children under fifteen, which points to a powerful regional trend towards protecting the educational space.

 

The mechanism of cognition

The primary risk AI poses for a primary school pupil is the delivery of a ready-made result without the process of cognition. As psychological experts point out, when a ten-year-old relies on an algorithm for search, text analysis, or mathematical problem-solving, they miss crucial stages in the development of neural connections. Education is chiefly about surmounting challenges, through which memory, perseverance, and the ability to perceive reality critically are cultivated. The apparent ease with which AI provides answers can create an illusion of knowledge even in the absence of true understanding. For a child who has not yet built up a foundation of facts and skills, such assistance can be detrimental: it eliminates the motivation to question, to evaluate sources and, ultimately, to think for oneself.

In Azerbaijan, which is actively working towards technological leadership in the region and implementing AI strategies at the state level, the question of the timeliness of such a ban is especially acute. Digital transformation is a key strategic objective for the country, yet the quality of tomorrow's human capital will depend on the competence and independence of today's schoolchildren. Introducing an age limit of 10–12 for the use of AI in Azerbaijani schools therefore seems not merely advisable, but strategically necessary.

This age is not chosen by chance; it is by the end of primary school that a child completes the formation of basic logical operations and information-processing skills.

The correct implementation of such a ban should not amount to punitive measures or a 'digital curtain'.

Psychocorrection specialist Afag Babazade emphasises the importance of integrating restrictions into educational standards and schools' ethical codes. Rather than imposing a complete technological ban, it is essential to create an environment where, for a designated period, priority is given to 'analogue' development: handwriting, reading paper books, and project work requiring physical searches in libraries and archives. In her view, state regulation could take the form of mandatory age verification for developers of educational platforms, while at the level of the Ministry of Science and Education it could mean methodological guides for parents explaining how early access to ChatGPT can affect their children's intellectual development.

Banning AI for children under ten is not a fight against the future, but an investment in it. For tomorrow's graduates to manage AI effectively, they must first learn to manage themselves. "It is vital to provide children with the opportunity to develop a well-rounded understanding and the ability to think critically. Otherwise, they risk becoming mere extensions of a flawless yet emotionless algorithm in the future. Azerbaijan, located at the intersection of tradition and innovation, has a unique opportunity to implement a balanced approach: to safeguard childhood against cognitive simplification while preparing a generation capable of a creative, not parasitic, symbiosis with technologies," says the expert.

 

The necessary transition

The shift from protective measures to the deliberate integration of technology into the learning process is a nuanced pedagogical transition. It requires a change of focus, for parents and educators alike, from control to mentorship. As psychologists note, if by the age of ten children have acquired a foundation of basic knowledge and manual skills, the subsequent period should become a time of 'controlled immersion'. The seamless integration of AI into the lives of adolescents in Azerbaijan could then serve as a model of how digitalisation can enhance a person's well-being without compromising their individuality.

Once the age limit is lifted, the first step should be to teach the child the architecture of a query, otherwise known as prompt engineering. The objective is not to obtain an answer, but to test hypotheses. It is important for parents to explain that a neural network is not the ultimate truth but a statistical model capable of errors and 'hallucinations'. During this period, it is useful to practise joint sessions in which the child first formulates their own opinion on a question, then compares it with the AI's answer, identifying inaccuracies and algorithmic bias. This turns the use of technology from passive consumption into active, critical research.

The most important ethical consideration in this process is the concept of authorship. It is imperative that adolescents have a clear understanding of the distinction between utilising AI as a tool for structuring ideas and engaging in plagiarism.

In both home and school environments, it is advisable to introduce a rule of 'transparency': if part of the work was done with the help of algorithms, the pupil must be able to justify why they resorted to this help and what personal contribution they made to the final result. This fosters responsibility and prevents intellectual laziness, preserving the pupil's role as the lead architect of meaning.

Ultimately, the key element of safety is maintaining a balance between digital and physical reality. It is important that access to complex algorithms does not crowd out social interaction, sporting activities and reading fiction, even after the age of ten. In Azerbaijan, where family communication traditions are strong, parents can rely on joint discussions of what has been read or seen, without gadgets. By adulthood, such children will approach AI not merely as experienced users of neural networks, but as mature individuals whose thinking is developed to such an extent that no AI can replace it. AI will remain only a powerful tool in the hands of a master.

 

To play and to learn

To turn these theoretical foundations into practical skills, the transitional period after the age of ten should be used to develop 'intellectual fencing' with algorithms. In Azerbaijani families and schools, such practices could serve as a bridge from prohibition to conscious mastery.

Psychologists identify the 'Fact Detective' method as one of the most effective exercises. The teenager asks a neural network to compile a biography of a well-known historical figure or to describe a scientific phenomenon, and is then tasked with finding at least three factual errors in the generated text, using traditional encyclopedias and verified sources. This exercise drives home that artificial intelligence is merely a probabilistic machine, not a repository of absolute truth.

Another key element in cultivating a thinker is the 'comparative style analysis' exercise. The teenager first writes a short essay on a topic of their choosing, then requests a text on the same topic from an AI. In a joint discussion with a parent or teacher, they analyse the depth of emotion, the originality of metaphors, and the personal experience present in the human text but wholly absent from the algorithmic one. This helps the child appreciate the value of their own perspective and recognise that the uniqueness of the human view of the world cannot be imitated, even by the most powerful computing capabilities.

To develop skills in task setting and logical structuring, the practice of 'reverse engineering' is recommended. The child is given a complex answer generated by a neural network, and their task is to reconstruct the chain of clarifying questions that could have led to such a result. This exercise teaches them to see the structure of information and to understand that the quality of artificial intelligence's output depends directly on the depth and accuracy of human thought. It turns an adolescent from a passive recipient of information into an active task-setter who understands the mechanics of the process without being held captive by it.

The final stage in establishing digital maturity could be project activity, where AI is used exclusively as a technical assistant for routine operations, such as formatting a bibliography or translating terms, while the hypothesis, methodology, and conclusions remain with the student.

Introducing such practices into Azerbaijan's educational environment would allow not just formal compliance with an age limit, but the cultivation of a generation that perceives technology as a powerful amplifier of its own intellect rather than a replacement for it. The transition from strict restriction in early childhood to virtuoso mastery of tools in youth can thus be viewed as a natural cycle of maturation for a free and independent mind.

Arguments in favour of age restrictions are also reflected on the world's political map. The global discussion of 'digital adulthood' has moved from the realm of pedagogical theory into the domain of strict legislative acts.

 

'Safety Zone'

Azerbaijan is in a unique position, having adopted its Artificial Intelligence Development Strategy for 2025–2028. While state priorities focus on training qualified personnel and implementing AI in the economy, ethical filters are already being built into the education system. Global experience shows that the most successful countries are those that do not simply ban a technology, but establish 'safety zones' for the development of natural intelligence in early childhood. International research has found that in schools where the use of gadgets and AI assistants is limited, academic performance improves by an average of 6–10%, with the greatest progress shown by pupils who previously experienced learning difficulties. This suggests that a ban on AI until the age of ten in Azerbaijan might not hinder progress but could instead be a prerequisite for the country's future citizens to possess a flexible, independent, and powerful intellect capable of competing with any algorithm. Such a limit is a deliberate choice in favour of quality thinking, one that will allow Azerbaijan not only to use global technologies, but also to set its own rules on the intellectual agenda of the future.

