Building Responsibly: How Marino Software aligns with the EU AI Act

Marino Software CEO Keith Davey on the company’s regulation-compliant approach to using AI across the design and development cycle.

Image credit: Yutong Liu & Digit / Better Images of AI / CC BY 4.0

The EU AI Act marks a defining moment for our industry. It introduces clear rules, risk-based classifications, and accountability measures for AI development and deployment across Europe. For us at Marino Software, this legislation doesn’t arrive as a surprise or a burden. It aligns closely with how we already approach our work: with a focus on responsibility, compliance, and security.

As a software company delivering solutions in highly regulated verticals such as financial services, healthcare, and the public sector, we’ve long operated on the principle that good software must be verifiable, testable, secure, and human-led. The EU AI Act is, in many ways, formalising standards we already uphold.

Principled Approaches to AI in Software Development

AI is transforming how software is designed, built, and maintained. But at Marino, we’re not chasing hype. We’re choosing our use cases deliberately, based on governance, traceability, and long-term maintainability.

We are not using AI to originate production code or to make unsupervised architectural decisions. Instead, our AI usage is centred on accelerating development through prototyping, validation, and testing. Our goal is to make teams more productive, not to hand off responsibility.

This aligns with both the letter and spirit of the EU AI Act: the use of AI must be transparent, auditable, and proportional to its context.

Where We Are Using AI and Why

Prototyping and Exploration

AI is proving especially useful in early-stage prototyping, where speed and experimentation are crucial. We use AI tools to draft UI flows, simulate service architectures, and generate sample data. These outputs are always treated as disposable artifacts, used to inform human decisions, not to automate them.

Validation and QA Assistance

We’re also seeing value in AI-assisted validation and test automation — especially in identifying edge cases or speeding up routine testing tasks. These uses help us increase test coverage, reduce risk, and improve consistency, without compromising on human review and control.

Guardrails in Practice

We maintain strict controls over AI tooling across all projects. Our development teams use only approved toolsets. All AI-generated output is subject to the same rigorous review processes as any other code or design artifact.

Our biggest concern, and our top priority, remains the long-term integrity and transferability of our codebases. We build systems that must last, scale, and evolve over time. AI is not currently a replacement for that craftsmanship; it’s a support mechanism. That’s also why our evaluations of productivity gains are showing mixed results: the value is real, but the risk is real too.

Looking Ahead: AI Within Boundaries

As the EU AI Act comes into force, we believe it will serve as a useful framework to reinforce good habits, not restrict innovation. Our clients expect us to help them move fast, but not at the expense of security, accountability, or ethical standards. That’s why we continue to evaluate each AI use case through the lens of appropriate application, oversight, and responsibility.

We see AI as a tool, not a shortcut. And we welcome any regulation that helps the industry separate sustainable value from short-term hype.

