Ethics and privacy in using AI with Drupal

October 1, 2025

luna-everly

Today, as artificial intelligence (AI) technologies are developing at an incredible speed, more and more questions arise. How ethical are they? Who controls the data? How to avoid losing users’ trust? For the Drupal community and the whole world, these are not abstract topics, but real challenges that need to be addressed right now.

Ethics and privacy in using AI with Drupal

Let's uncover the above topics.

AI and risks worth thinking about

When you integrate AI into your website or CMS-based project, it’s important to understand that it’s not just about plugging in text generation and calling it a day. There are a few types of risks that are often ignored:

Inferential privacy

Even if user data is not stored directly, the model can “recover” parts of the information you feed it. This is especially true for generative models that work with text and images.

Vendor chain

When AI is integrated through third-party SaaS services, the organization loses complete control over its data, which goes against the principle of data sovereignty, critical for both GDPR and the open-source philosophy.

Copyright and licensing of findings

Automatically generated content may incorporate fragments from copyrighted sources or be created using unethical training datasets. This means potential legal risks.

Shadow training 

This is hidden additional training of models on user data without permission. For example, a user sends a request to an AI service and thinks that it is simply responding, but the service secretly uses this request and user data to improve the model.

Prompt injection

Hidden malicious instructions for AI can be added to regular user text, such as a product card or a comment on a website. If the system uses this text in a request to the AI to generate a response, the model may not recognize the instructions as foreign and execute them. This can lead to data leakage or manipulation of AI behavior.

Ecology

Large language models (LLMs) require a huge amount of computing power. Their training and regular use consume energy, often comparable to the energy consumption of entire offices or data centers. It is worth choosing those that are effective not only in terms of results, but also in terms of carbon footprint.

Transparency and fairness

Many users may assume that transparency as a criterion for ethical design means open source. Open-source chatbot apps do not guarantee transparency. An example is the Chinese company DeepSeek, which released its product and made its source code open. At first glance, this is all about transparency. But it is important to consider the fact that users have found bias and a reflection of Chinese government narratives when engaging with the chatbot on politically sensitive topics. This is contrary to the principle of fairness.

How Drupal proposes to avoid these risks?

The community’s answer is the Drupal AI Strategy 2025, based on the concept of “Trust Infrastructure”. Drupal aims to make working with AI accessible, ethical, and useful for everyone. Instead of closed solutions and “black boxes”, it offers an open system where AI helps, but does not manage everything itself. The main thing is that a person remains in the center and controls how the AI works. The goal is to enable even small teams to create smart and adaptive digital solutions without sacrificing quality, security, or ethics.

Instead of relying on one specific model or external service, you choose which AI to use: whether it’s an open-source model installed on your servers or an API with transparent terms.

Key elements of the approach:

  • Bring-your-own-LLM: You choose the model and control where and how it runs.
  • Prompt management and logs: All actions are recorded - what prompt was used, what result was issued, what model was used.
  • AI review workflow: Any generated content requires human review.
  • Flexible access rights: Granular permissions enable you to limit who can view AI logs or utilize generation.
  • Built-in telemetry: Detailed tracking of token usage, response times, prompt versions, and policy changes, giving you deeper insight and more precise control.

For some, Drupal is just another convenient tool for creating websites. But there are those who consciously choose it because of the open-source philosophy and the community behind it. This is what distinguishes Drupal from other platforms: from the very beginning, it was built with the principles of transparency, freedom, and shared control over the quality of solutions.

Most importantly, Drupal’s approach to AI is not an external add-on, but part of the ecosystem. It already provides tools for transparency, control, logging, and ethical interaction with AI.

What’s great about this approach is that it makes life easier for your team. You don’t have to bolt on extra checks or worry about how to explain your AI setup to legal or management — it’s already built with transparency and control in mind. That means fewer surprises down the line, less stress when something goes wrong, and a setup you can stand behind when talking to clients or users.

Why is this important?

In an era when decisions are made ever faster and users are becoming more demanding, trust is a key asset. You can be forgiven for a bug or an error, but not for losing control over data or ethically questionable actions.

Although technology provides powerful tools, technology alone is not the answer. It is important to know who maintains these modules, how they are updated, and who is responsible for ensuring their compliance with laws. This goes beyond the code  and becomes part of the open-source culture and community governance.

How Attico can be beneficial?

AI is not just a trend, but a turning point in the development of digital products. And how we implement it today determines how trustworthy and resilient our systems will be tomorrow. Ethics, transparency, control over data — all this should not be "additional features", but part of the architecture.

Drupal already has a foundation for this. However, it is not only important to have the tools, but also to understand why and how to use them.

At Attico, they don’t treat AI like a magic switch. They view it as part of a broader shift — toward more responsible and transparent digital systems. Drupal gives us the right foundation for that: open, modular, and community-driven. Their job is to help you build on top of it in a way that works for your team, your users, and your long-term goals. Curious how this works in practice? Check out their in-depth guide to Drupal AI integration. If you’re thinking “How do I even start with AI here?”, just drop them a line.

Special thanks to Denis Sukharevich, AI expert and project manager at Attico for providing valuable insights.

FAQs

How can Drupal make AI use more ethically?

Drupal lets you choose how AI works on your site. You can pick the AI tool you trust, see what it’s doing, and always have a human check the results before they go live.

What privacy risks come with using AI in Drupal?

Some AI tools can guess personal details from what you give them, store your data with outside companies, or use it to train their systems without telling you. Drupal’s setup gives you more control, so your data stays safe.

Can I use AI in Drupal and still follow GDPR rules?

Yes, if you set it up right. Drupal lets you decide where your AI runs, how it handles information, and who can see the records, helping you stay in line with GDPR’s privacy rules.

How does Drupal stop AI from being tricked by bad inputs?

Sometimes, hidden instructions can be added to normal text to make AI misbehave. Drupal helps block this by keeping records of all prompts, letting you review AI content, and controlling who can change settings.