AI changes the economics of attack
Cybersecurity has entered a new phase. The main shift is not that computers suddenly became vulnerable. Software has always contained bugs, and some of those bugs have always been exploitable. What has changed is the economics of attack. Artificial intelligence lowers the cost of finding weaknesses, understanding unfamiliar code, writing convincing social engineering messages, testing exploit paths and adapting tactics when the first attempt fails.
In the past, serious vulnerability research required rare expertise, time and specialised tooling. Today, frontier AI systems can read complex codebases, reason through execution paths, explain suspicious patterns and help a human operator move from a vague idea to a practical attack plan. This does not turn every amateur into an elite hacker. But it does make mediocre attackers faster, organised attackers more productive and state-backed actors more scalable.
That is why the current debate around AI-enabled hacking matters. The problem is not only malicious code generation. The deeper issue is speed. If attackers can discover and exploit vulnerabilities faster than maintainers can identify, validate, patch and deploy fixes, the balance of power shifts. The critical question becomes whether defenders can use the same technology earlier, more responsibly and at greater scale.
The threat is broader than “AI writes malware”
AI affects almost every layer of cybersecurity. It can generate highly plausible phishing messages in local languages, adapted to a specific institution, job role or public event. It can analyse public information about an organisation and suggest likely weak points. It can help attackers understand error messages, chain tools together, rewrite scripts and test alternative approaches. In the hands of capable operators, it becomes an acceleration layer.
At the same time, AI systems create new attack surfaces. Applications built on large language models can be vulnerable to prompt injection, insecure tool use, data leakage, poisoned training or retrieval data, excessive autonomy and weak plugin design. When a model is connected to email, databases, code repositories, ticketing systems or production infrastructure, a failure is no longer just a wrong answer. It can become an operational incident.
This is why AI security cannot be treated as a narrow technical issue. It is a governance issue, a procurement issue, a software supply chain issue and, for public institutions, a democratic accountability issue. If critical services depend on opaque systems that cannot be inspected, reproduced or independently assessed, society is being asked to trust black boxes at the very moment when verification matters most.
Why open source helps defenders
Open source is not automatically secure. Public code can be poorly maintained, underfunded or vulnerable. But open source has structural advantages that matter in the AI era: transparency, auditability, reproducibility, collective repair and reduced dependency on a single vendor. These are not abstract values. They are operational capabilities.
Transparency allows researchers, universities, companies and public bodies to inspect what a system actually does. Auditability allows security teams to verify assumptions instead of relying only on vendor claims. Reproducibility helps defenders rebuild, test and validate software artefacts. Collective repair allows a wider community to identify and fix flaws. Reduced lock-in gives organisations the freedom to change support providers, build internal competence and avoid being trapped in a closed security stack.
For public administrations, hospitals, schools, municipalities and small businesses, this is crucial. Cybersecurity cannot be reserved for wealthy organisations that can afford expensive proprietary platforms. A resilient digital society needs tools that can be studied, adapted, localised, supported by local companies and improved by communities.
Concrete examples of open defence
A modern defensive stack can start with software transparency. Every public digital service should maintain a software bill of materials using open formats such as SPDX or CycloneDX. When a serious vulnerability is disclosed, the organisation should immediately know which applications, containers and libraries are affected. Tools such as Trivy or Grype can scan containers and dependencies. Gitleaks can detect secrets accidentally committed to repositories. Semgrep can enforce secure coding rules before code reaches production.
The software supply chain must also become verifiable. OpenSSF Scorecard can help assess open source projects for risky development practices. SLSA provides a framework for stronger build integrity and provenance. Sigstore and Cosign can be used to sign and verify containers and software artefacts. These practices matter even more when AI-generated code enters development pipelines, because speed without provenance creates new risk.
For monitoring and incident response, open tools can provide a strong baseline. Wazuh can support endpoint monitoring and security information management. Zeek and Suricata can analyse network traffic. Falco can detect suspicious runtime behaviour in cloud-native environments. MISP and OpenCTI can help institutions share threat intelligence in a structured way. AI can then be added as an assistant for triage, correlation and report drafting, but not as an unchecked autonomous decision-maker.
For AI-specific systems, open methods are equally important. Prompts, retrieval pipelines, model access rules, tool permissions and logging policies should be documented and testable. A public body using an AI assistant should know which data the system can access, which actions it can perform, what is logged, how outputs are validated and who is accountable when something goes wrong.
The policy choice: open, verifiable, maintainable
The answer to AI-enabled cyber risk is not to hide more infrastructure behind closed products. It is to build systems that can be inspected, tested, patched and governed. That means open standards in procurement, mandatory SBOMs for public software, reusable public code, independent audits, support for open source maintainers and national cybersecurity capacity that includes universities, research centres, SMEs and civic technology communities.
AI will make attacks faster. It can also make defence faster. But defenders will only benefit if they have access to the tools, the code, the data structures and the institutional capacity required to act. In the AI era, open source is not just a development model. It is part of the infrastructure of digital sovereignty.

