Education cannot run on ambiguity
Artificial intelligence (AI) is no longer a marginal tool in education. It is already being used to draft lecture notes, generate exercises, prepare presentations, write reports, translate material, produce images and even assist with software development. Yet in many cases, students, teachers, researchers and citizens still cannot tell whether what they are reading or using was written by a human, generated by a machine, or produced through a hybrid process of human editing and AI assistance. That uncertainty is not a minor procedural issue. It is a direct challenge to academic integrity, institutional trust and democratic accountability.
The debate in education is too often framed as if the only question were whether generative AI should be embraced or resisted. That is no longer the decisive point. The real issue is whether institutions can use these systems without making their use explicit. They cannot. If a student submits work shaped by a large language model, that use must be declared. If a lecturer prepares course material with generative AI, that use must also be declared. If a public body publishes educational or informational material assisted by AI, the public has a right to know.
This is not an anti-technology position. It is the minimum condition for legitimate use. Without clear labeling, assessment becomes unreliable, authorship becomes blurred, and the educational process is reduced to managing opaque outputs. The answer is not prohibition, but disclosure.
A practical model already exists
A useful example comes from ETH Zurich, whose teaching and learning guidelines take a proactive approach to generative AI in education. Rather than treating AI as either a forbidden shortcut or an unquestioned productivity tool, ETH places responsibility, transparency and fairness at the center of institutional practice. It makes clear that students remain responsible for the content they submit, that AI use should be declared clearly, and that lecturers are responsible for quality control, for checking possible bias, and for communicating explicitly when AI use is or is not permitted.
This approach matters because it moves the discussion from abstract principle to operational policy. It shows that responsible AI use is not achieved through vague encouragement or isolated warnings. It requires written rules, clear expectations, and a culture in which disclosure is treated as a normal part of academic and administrative practice.
What an immediate disclosure policy should contain
Every school, university and public organization that produces educational, administrative or public-interest content should adopt a mandatory AI disclosure policy now. At a minimum, such a policy should require clear labeling of texts as human-written, AI-generated, or AI-assisted and human-edited. It should require visible marking of AI-generated or AI-modified images, audio and video. It should require source-code documentation when AI tools have been used in software production. It should also define rules for coursework, examinations, teaching materials, feedback, research support and public communication.
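To show how such labeling could be made concrete and machine-readable, here is a minimal sketch of a disclosure record an institution might attach to a document, media file or code repository. The `AiUse` categories, the field names and the `Disclosure` class are illustrative assumptions for this article, not an existing standard or any institution's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class AiUse(Enum):
    """Three-way labeling scheme from the policy sketch above (hypothetical)."""
    HUMAN_WRITTEN = "human-written"
    AI_GENERATED = "ai-generated"
    AI_ASSISTED = "ai-assisted-human-edited"


@dataclass
class Disclosure:
    """A minimal, hypothetical disclosure record; field names are assumptions."""
    label: AiUse
    tools: list[str] = field(default_factory=list)  # declared tools or models
    scope: str = ""               # which parts of the work AI touched
    accountable_person: str = ""  # disclosure does not remove responsibility


# Example: a lecture handout drafted with an LLM and revised by the lecturer.
handout = Disclosure(
    label=AiUse.AI_ASSISTED,
    tools=["generative language model (named in the actual declaration)"],
    scope="first draft of sections 2-4; all exercises written by hand",
    accountable_person="course lecturer",
)
print(f"AI use: {handout.label.value}; tools: {', '.join(handout.tools)}")
```

A record like this could travel with the work as front matter, image metadata or a repository file. The point is that the three categories named above become explicit, auditable fields rather than informal footnotes.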
Just as importantly, institutions should make clear that disclosure alone does not remove responsibility. Students remain accountable for what they submit. Lecturers remain accountable for the correctness, quality and fairness of what they distribute. Public bodies remain accountable for the reliability of the information they publish. A disclosure policy is therefore not merely a labeling exercise. It is a framework for responsibility.
AI literacy and labeling must go together
Disclosure policies will only work if they are paired with structured AI literacy. Institutions must ensure that the people working with these systems understand how they operate, what opportunities they offer, and what risks and harms they can cause. Simply allowing access to AI tools without training is not responsible innovation. In education, the risks are immediate and concrete. Generative AI can fabricate references, reproduce bias, introduce factual errors, mishandle confidential information and create a false appearance of competence.
That is why schools, universities and public bodies need more than permissive guidance. They need training for students, lecturers and administrative staff. They need context-specific rules for acceptable use. They need safeguards for privacy and copyright. And they need procedures that distinguish between support for learning and substitution of human effort.
A call for immediate institutional action
Schools, universities and public bodies should stop waiting for confusion, misconduct cases or regulatory pressure to force action. The policy response is already clear. Every use of AI in educational and public-interest content should be explicitly disclosed. Every institution should adopt written rules on acceptable use, declaration requirements, assessment integrity, privacy protection and copyright compliance. Every educator and staff member should receive practical guidance. And every student should know that AI can support learning, but cannot replace personal responsibility, independent judgment or academic honesty.
If Europe wants trustworthy AI in education, disclosure must become the norm now. Not later, not selectively, and not only after harm has already occurred. The first rule of responsible AI use in education is simple: if AI was used, say so.
Sources for this article:
- ETH Zurich, Generative AI in Teaching and Learning Guidelines (December 2024). A strong institutional model for higher education, built around responsibility, transparency and fairness, with explicit expectations for both students and lecturers on disclosure, quality control and compliance. https://ethz.ch
- European Commission, AI Literacy: Questions and Answers. Clarifies that Article 4 of the AI Act applies from 2 February 2025 and requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among staff and others acting on their behalf. https://digital-strategy.ec.europa.eu
- European Commission, AI Act: regulatory framework for artificial intelligence. Summarizes the Act’s transparency obligations, including the requirement that AI-generated content be identifiable and that certain synthetic public-interest content be clearly and visibly labeled. https://digital-strategy.ec.europa.eu