The EU AI Act is poised to become the new GDPR, creating enforceable global standards for artificial intelligence.
Dave Berg, CPO at Constellation Network, argues that this regulation is not a burden but a necessary “liability shield” that most businesses welcome for risk mitigation.
Berg warns that without a unified standard like the EU’s, companies would face a “patchwork nightmare” of conflicting rules that would stifle innovation.
With key obligations of the European Union’s AI Act having taken effect on August 2, 2025, the global technology industry is facing a moment of regulatory déjà vu. For many veterans of the enterprise software world, the Act’s arrival feels like a direct sequel to the rollout of GDPR—a sweeping, enforceable standard with the power to reshape global norms, whether corporations are ready or not.
The central tension is no longer whether AI should be regulated, but how. While the US has favored a lighter touch of nonbinding best-practice suggestions, the EU is establishing a framework with real consequences, forcing a new era of transparency and accountability onto a technology that is already making life-altering decisions in the dark.
Dave Berg, the Chief Product Officer at Constellation Network, saw a familiar script playing out. For Berg, the EU AI Act isn’t just another regulation; it’s the first meaningful attempt to instill professional discipline into an industry that has thrived in a state of unstructured innovation. Drawing on over two decades of experience leading product strategy at firms specializing in AI, cybersecurity, and automation, he argued that the Act’s true power lies in its enforcement, a lesson the world learned from its predecessor.
“I see the EU AI Act following the exact same pattern as GDPR. Everybody underestimated it at first, and then GDPR actually had teeth. As opposed to the US, where we tend to write things about best practices and suggestions, the EU is onto something, laying out best-practice suggestions and penalties for those who don’t comply.”
The push for accountability is a direct response to a growing danger. In the race to innovate, AI models have become black boxes, making critical decisions about people’s lives without any clear record of their training data or inherent biases. Berg warned that this isn’t just a technical problem; it’s a moral one.
Dangerous and reckless: “If you don’t have clear visibility into what you trained it with and how you adjusted and tuned the model, you don’t know what you’re getting. And if you’re using these models to do things like creditworthiness or hiring, you’re affecting people’s lives with unknown technology. That’s dangerous; that’s reckless,” Berg stated. He revealed that the “black box” is often a convenient shield for breaking promises. “‘Oh yeah, Mr. Customer, your data is safe with us. We won’t use your data to train the model for anybody else. We promise.’ And they break that promise almost every time because they think, ‘our model will be better.’ But you’re not playing by the rules.”
Without a unified standard, businesses face a chaotic future. Berg warned that a proliferation of conflicting local and regional AI rules would create an unmanageable compliance burden, ultimately stifling the very innovation those rules seek to govern.
A patchwork nightmare: “The challenge there is if it turns into a cottage industry, then it’s going to be a nightmare to support from a commercial perspective. I’m going to be dealing with, ‘Okay, what are the laws in New York? What are the laws in California? What are the laws in Europe?’”
While tech behemoths like Meta are pushing back against the EU’s voluntary AI Pact, Berg argued they are the exception, not the rule. He maintained that the vast majority of the business world—especially smaller companies without armies of lawyers—welcomes the clarity that the binding EU AI Act provides. For them, regulation isn’t a burden; it’s a vital tool for risk mitigation.
The liability shield: “The big guys are pushing back on it, but the little guys actually want this,” Berg said. “I want to know what my software is made of, where it came from, what I have rights to, and what I don’t have rights to. That’s part of running the business.”
For companies staring down the compliance deadline, the path forward may seem daunting. But Berg’s advice is pragmatic. The goal shouldn’t be to perfectly document the past—an impossible task he called a “fool’s errand”—but to commit to building responsibly from this day forward. “Don’t try and dig up all those old skeletons if you don’t have to, because the rate of change is so fast with this. Start with what you’re working on now and build for the future.”
Ultimately, Berg suggested the Act’s greatest long-term impact may be in creating a safe environment for the next wave of innovation. As AI tools become more accessible to citizen developers and startups, the need for clear guardrails becomes a matter of public safety.
“You’re getting a lot of people with not a lot of domain knowledge who can whip something up pretty fast. And if there’s no structure for them, they can be dangerous,” he concluded. “Provide some structure for them so they don’t go off the rails.”