When language models are adopted on corporate data, responsibility is shared among the vendor, the system integrator, and the client. Sergio Ajani of Innovaway shares his perspective.
When large language models (LLMs) are deployed on corporate data, responsibility for security is shared. Sergio Ajani frames the issue with remarkable clarity: the frantic adoption of Artificial Intelligence—similar to what happened with cloud computing during the pandemic—brings with it systemic vulnerabilities. In the era of LLMs, absolute certainty is a luxury, but intellectual honesty requires acknowledging this in order to provide companies with innovation that is truly secure and governable.
The dataset as an attack surface
The effectiveness of an LLM depends on the data used during the training phase. As Sergio Ajani points out, ‘the dataset itself is an extremely critical attack surface.’ From this perspective, Innovaway identifies three main vulnerabilities:
Three levels of responsibility
The issue of responsibility (shared responsibility) is crucial but still not clearly defined in the AI landscape. Sergio Ajani identifies three levels of intervention:
RAG, access controls, and the problem of complacency
For enterprise use, the most concrete model is a Retrieval-Augmented Generation (RAG) architecture fine-tuned on corporate data. This approach inevitably expands the attack surface, making data protection—an essential evolution of Data Loss Prevention, for instance to safeguard industrial secrets and ‘Made in Italy’—and strict access control absolutely critical. Language models also suffer from the well-known ‘compliance problem’: if a malicious actor queries the system by impersonating a senior role, the model may comply. In addition, RAG architectures introduce a temporal governance challenge, and direct prompt access exposes the risk of exfiltrating confidential documents. The pragmatic solution adopted internally by Innovaway is a purpose-driven AI, more filtered: less spectacular, but more secure, with verified RAG pipelines and closed prompts that prevent arbitrary file uploads.
A look at the future
To avoid slowing down innovation, Innovaway is looking ahead. In collaboration with several Italian universities, it is exploring neuro-symbolic architectures, a paradigm that combines neural networks with deterministic reasoning. The goal is to create a ‘supervisory agent’ capable of verifying prompt and output compliance in a logical, not only statistical, way. This is not about replacing people, but about using augmented intelligence for supervision, enabling more reliable outcomes than the artificial intelligence we know today. This is one of the building blocks of the company’s 2026–2028 industrial plan, a growth path based on organic development and targeted acquisitions, strengthened by technological integration stemming from partnerships with market-leading vendors such as HCLSoftware. The crucial shift today, however, remains cultural: a broad awareness effort is needed—starting from top management—to fully understand the real potential and risks of this technological transformation.
Click here to read the full interview with Sergio Ajani