AI Agents

Benefits and risks for the public and private sectors AI agents are not just better chatbots. They are systems that combine language models with tools, memory, retrieval, and the ability to execute multi-step actions across software environments. That combination gives them real productive potential for both the Greek public sector and private firms, but … Read more

When AI Lies on Purpose: What Research Reveals

Beyond hallucination: a qualitative shift Public discussion about the shortcomings of large language models has long focused on so-called “hallucinations,” the generation of plausible but factually incorrect outputs resulting from statistical misprediction. However, a study published in September 2025 by OpenAI in collaboration with Apollo Research has documented something qualitatively different: models such as o3 … Read more

Low-Cost, Open-Source Artificial Intelligence Models

The Green and Sovereign Choice for Greece and Europe Europe’s artificial intelligence strategy stands at a structural inflection point. Dependence on hyperscale cloud infrastructures located outside the European Union increases systemic vendor lock-in, geopolitical exposure, and regulatory vulnerability. Simultaneously, the accelerating energy consumption of large AI infrastructures threatens Europe’s sustainability commitments and long-term competitiveness. The … Read more

This Is Not the AI We Were Promised

Scientific reasons why uncritical LLM adoption in government is unsafe Michael Wooldridge’s Royal Society lecture makes a crucial point for public policy: today’s large language models are not “reasoning minds” but probabilistic next-token predictors. They generate fluent text without an internal notion of truth, accountability, or epistemic humility. This design reality matters most in the … Read more

Artificial Intelligence and the Public Interest

Scientific Arguments Against Uncritical Deployment in the Public Sector Artificial Intelligence is frequently presented as a neutral instrument of modernization within public administration. Claims of efficiency and cost reduction dominate policy discourse. Yet a growing body of scientific research demonstrates that uncritical deployment of AI systems in public institutions poses structural risks to democratic governance, … Read more

Open Source AI Beyond Scale

European low-cost, open-source local LLMs as a strategic alternative The global AI narrative remains focused on ever larger language models, demanding massive computational resources and reinforcing dependence on a handful of providers. As critics such as Gary Marcus have argued, this path leads to diminishing returns without resolving fundamental issues of reasoning and … Read more

From German Commons to Greek Commons

A policy case for Greek as a national and European language data infrastructure Large language models depend on vast amounts of text, but scale without legal clarity produces fragile systems. Datasets built on opaque web crawling cannot guarantee lawful reuse, redistribution, or long-term sustainability. The German Commons provides a clear alternative: 154.56 billion tokens of … Read more

Epistemological fault lines between human and artificial intelligence

When linguistic plausibility replaces judgment and why this is a governance issue Large language models are widely described as artificial intelligence because their outputs resemble human reasoning. This resemblance, however, is largely superficial. As argued by Quattrociocchi, Capraro, and Perc, LLMs do not form beliefs about the world. They are stochastic pattern completion systems that … Read more

Language corpora and the Text Encoding Initiative (TEI)

Open standards for documented linguistic knowledge Language corpora have become a foundational infrastructure for linguistics, natural language processing, and contemporary artificial intelligence. The term corpus does not merely denote a collection of texts but implies deliberate selection, structuring, and documentation according to explicit design criteria. Within this context, the Text Encoding Initiative Guidelines provide a … Read more

Synthetic Data, Real Risks: Why AI Must Be Trained on High-Quality Open Data

A seductive solution with hidden dangers Synthetic data is often presented as a clever fix for three persistent challenges in machine learning: data scarcity, unfair training distributions, and privacy restrictions. At the same time, some argue it could democratise AI development by reducing dependence on large proprietary datasets held by a few dominant companies. But … Read more