AI: key issue in digital sovereignty and private cloud
In the first part of this blog series, we discussed how digital sovereignty is evolving from a fringe technical issue into a strategic boardroom topic. Organizations' infrastructure choices, such as private cloud, hybrid models, and data location, form the foundation for that sovereignty. But it is the deployment of AI that truly makes the issue of control and autonomy urgent.
The research results (which we will publish in full in February) show that organizations in 2026 are balancing two forces. On one side is the promise of AI innovation; on the other, the need to maintain control over data, security, and compliance. In this second blog post, we analyze how organizations are approaching AI, which concerns dominate, and how this is accelerating the shift to sovereign AI solutions.
AI use: fragmented, ad hoc and without policy
Many organizations are still in a transition phase when it comes to AI. In our survey, 62% indicate that AI is still underutilized within their own organization and that they need to catch up. At the same time, employees often use AI tools on their own initiative: 56% use AI without clear guidelines or support from their organization.
This means that AI adoption within many organizations is primarily bottom-up. Individuals are experimenting with generative AI tools, while governance, policy, and infrastructure are lagging behind. Roughly half (52%) of organizations have not yet established a formal policy for employee AI use.
This ad-hoc adoption carries risks. Without central governance, sensitive data can unintentionally end up in public AI models. This not only creates privacy and compliance risks but also increases the risk of data breaches and intellectual property loss, all the more reason to address this issue in a GDPR-compliant way.
The biggest concerns: data security, bias and lack of control
The figures show that concerns about AI are widespread within organizations, particularly regarding security and control. The main concerns cited are:
- Risk of data leaks or insufficient data security, by far the most frequently mentioned concern (83% see this as a minor or major concern)
- Risk of unintended biases or errors in AI outcomes (bias), underscoring the need for transparency and control over model training (79% see this as a concern)
- Lack of control over where data is stored and processed, directly linked to the sovereignty issue from blog 1 (77% see this as a concern)
In addition, uncertainty about legislation and regulations plays a role (79% see this as a concern), an issue the EU AI Act further sharpens.
These concerns make it clear that for many organizations, AI is not just an innovation issue, but primarily a governance and security issue.
Privacy and security are more important than speed
A striking finding from the research is that security and privacy risks are decisive in organizations' AI choices: 75% indicate that these risks are crucial in their AI decisions. Many organizations (39%) indicate that they first want full control over data usage and infrastructure before scaling up AI further. Only 21% choose to benefit from innovation as quickly as possible, even if that means sacrificing control over data and infrastructure.
This confirms a fundamental shift in how organizations view AI. AI is no longer primarily a means of outpacing competitors; it must be embedded within existing frameworks of compliance, security, and data sovereignty. For organizations in sectors such as healthcare and government, this is not a luxury but a necessary condition.
The awareness of public versus private AI is growing
A major turning point is that organizations are becoming increasingly aware of the difference between public and private AI solutions. Public AI tools are powerful and accessible, but they run on shared infrastructure in external data centers. This carries the risk of corporate information or confidential data leaking outside a controlled environment.
Private AI, on the other hand, operates within a secure and controlled environment, offering organizations complete control over where data is stored, how it is processed, and who has access to it. The research results show that 69% of organizations are aware of the differences between private and public AI solutions, and 66% consciously choose AI solutions in a private or controlled environment.
At the same time, a challenge remains. Many organizations lack clarity about how data is processed and whether it is used to train AI models. This lack of transparency makes it difficult to make informed choices between public and private AI.
AI legislation: fragmented preparations
The EU AI Act, which has been in effect since 2024 and will be rolled out further towards 2026, places new demands on organizations using AI. The research results show that organizations are in various stages of preparation:
Approximately 24-29% have already taken steps, such as adapting IT or data policies with specific AI guidelines, investigating whether legislation applies, and auditing existing AI systems for risk or bias. A significantly larger proportion (38-46%) are currently making these preparations or plan to implement them by the end of 2026. A substantial proportion (25-35%) do not yet have concrete plans.
This demonstrates that AI governance is still maturing at many organizations, even as regulations become stricter. Organizations that fail to act now risk non-compliance, fines, and reputational damage. NIS2 and the Data Act further tighten these obligations.
From public hype to private value
The shift from public to private AI isn't a trend, but a logical consequence of organizations' demands for control, security, and compliance. Private AI offers the same functionality as public AI tools, but with full control, integrated into the existing IT environment, and in compliance with Dutch and European laws and regulations such as GDPR, NIS2, and the AI Act.
For organizations working with confidential or regulated data, such as hospitals, municipalities, financial institutions, and IT service providers, private AI is no longer a nice-to-have but a strategic necessity. It enables organizations to realize AI innovation without compromising data sovereignty.
At Uniserver we see this reflected in solutions such as Fuse AI, which offers private AI capabilities within your own sovereign infrastructure.
Outlook: AI as part of the sovereignty strategy
The research results make it clear that today's infrastructure choices directly determine whether AI can be deployed safely, compliantly, and sovereignly tomorrow. Organizations that choose sovereign cloud solutions and private AI models now lay the foundation for sustainable innovation within clear security and legal frameworks.
In the next blog in this series, the focus shifts to the role of European legislation and geopolitics: how regulations such as the AI Act, NIS2, and the Data Act force organizations to fundamentally rethink their AI and data strategies, and why digital autonomy is increasingly becoming the norm.