Accelerating toward a circular economy requires transparency about supply chains, products and materials. Through the ESPR, the Batteries Regulation and the CPR, for example, Digital Product Passports (DPPs) are becoming mandatory for a variety of product groups. AI systems promise to help process all that data. But who ultimately makes the decisions? And who is responsible when things go wrong?
DPPs are not a technological issue. They confront organizations with fundamental questions about human decision-making, governance and accountability. Because no matter how advanced AI becomes, humans remain responsible.
What DPPs ask of organizations
A DPP requires detailed information about the entire product life cycle. That means data from suppliers, manufacturers, distributors, customers and waste handlers. These include material composition, source of raw materials, manufacturing processes, repair capabilities and recycling instructions.
DPPs can be publicly accessible, but not everyone sees the same information. There are different levels of access for different roles. A consumer sees different data than a recycler or a regulator. So transparency does not mean full disclosure for everyone. It is about the right information for the right party at the right time.
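As a sketch of what such tiered access can look like in practice (the field names, roles and policy below are purely illustrative, not taken from any DPP standard):

```python
# Illustrative sketch of role-based views on one DPP record.
# Field names and roles are hypothetical, not from any DPP regulation.
DPP_RECORD = {
    "material_composition": "60% recycled aluminium, 40% virgin",
    "repair_instructions": "See manual section 4",
    "supplier_ids": ["SUP-001", "SUP-017"],
    "hazardous_substances": "none above threshold",
}

# Which fields each role may see is a governance decision, not a technical one.
ACCESS_POLICY = {
    "consumer": {"material_composition", "repair_instructions"},
    "recycler": {"material_composition", "hazardous_substances"},
    "regulator": set(DPP_RECORD),  # full access
}

def view_for(role: str) -> dict:
    """Return only the DPP fields the given role is allowed to see."""
    allowed = ACCESS_POLICY.get(role, set())
    return {k: v for k, v in DPP_RECORD.items() if k in allowed}
```

The technical filter is trivial; the real work is deciding and maintaining the policy behind it.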
Where AI helps
In practice, product data is often incomplete, scattered across systems and of varying quality. Precisely because this quantity and complexity of data is no longer manageable for humans alone, AI quickly becomes part of the solution: for example, to link sources from existing data systems, fill in missing fields based on assumptions, and identify inconsistencies. AI can make suggestions; whether they are correct remains a human judgment.
Three situations show how that works.
Situation 1: Data is missing or fragmented
A manufacturer wants to put together a DPP, but the required data is scattered across ERP systems, supplier spreadsheets and legacy databases. AI can help by linking these sources, filling in missing fields based on similar products and spotting inconsistencies.
But then the human questions begin. Is the assumption the model makes about the origin of a resource correct? Who is responsible if that assumption later proves incorrect? Which internal experts should you involve before publishing data? And how do you document which values are based on real measurements and which are based on estimates? This requires human judgment, not automatic logic.
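One practical consequence of that last question is that every value needs its provenance recorded, so a reviewer can see what is measured and what is AI-estimated before anything is published. A minimal sketch, with illustrative field names and source labels:

```python
from dataclasses import dataclass

@dataclass
class FieldValue:
    value: object
    source: str  # e.g. "measured", "supplier_declared", "ai_estimated" (illustrative labels)

# Hypothetical DPP draft in which an AI pipeline has filled some gaps.
draft = {
    "weight_kg": FieldValue(2.4, "measured"),
    "recycled_content_pct": FieldValue(35, "ai_estimated"),
    "country_of_origin": FieldValue("DE", "supplier_declared"),
}

def needs_human_review(record: dict) -> list:
    """List the fields whose values are AI estimates and must be
    checked by an internal expert before publication."""
    return [name for name, fv in record.items() if fv.source == "ai_estimated"]
```

The review step itself cannot be automated away: someone still has to decide whether the estimate is acceptable and sign off on it.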
Situation 2: DPP makes data visible to supply chain partners
A DPP can make information accessible to customers, suppliers and regulators. AI can help analyze public data from competitors and create benchmarks. Interesting, but also sensitive.
After all, who decides which party gets access to which information? Which suppliers do you trust as a source for upstream data? How do you verify that supplied information is correct? And what if a competitor uses the same AI tools to analyze your data? These are not technical questions, but strategic and ethical considerations that people have to make.
Situation 3: DPP supports procurement and waste management
For buyers and waste processors, DPPs provide valuable information about what they are buying or processing. AI can link this data to procurement criteria, recycling protocols or circular targets. Useful when selecting suppliers or optimizing waste streams.
Here again, AI can present options, but humans determine what “best” means. Do carbon reductions outweigh costs? How do you translate organizational values into concrete procurement decisions? And who bears final responsibility if a choice later turns out to be wrong? Beyond legal requirements, standards and values play a role that cannot be captured in an algorithm. This is not a technical choice, but a governance decision.
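The point that “best” is a human definition can be made concrete: in a weighted supplier score, the weights encode organizational values and are set by people, not produced by the AI. A hypothetical sketch with made-up criteria and numbers:

```python
# The weights encode organizational values (how heavily carbon reduction
# counts against cost). Setting them is a governance decision, not an AI
# output. All criteria names and numbers here are illustrative.
WEIGHTS = {"co2_per_unit": -0.5, "cost_per_unit": -0.3, "recyclability": 0.2}

suppliers = {
    "A": {"co2_per_unit": 1.0, "cost_per_unit": 10.0, "recyclability": 0.9},
    "B": {"co2_per_unit": 0.6, "cost_per_unit": 14.0, "recyclability": 0.7},
}

def score(criteria: dict) -> float:
    """Weighted sum of DPP-derived criteria; higher is better."""
    return sum(WEIGHTS[k] * v for k, v in criteria.items())

# Rank suppliers by the chosen value trade-off, best first.
ranked = sorted(suppliers, key=lambda s: score(suppliers[s]), reverse=True)
```

With these weights supplier A wins despite its higher emissions; shift the weights toward carbon and the ranking can flip. The algorithm only executes the trade-off that people have already decided.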
More legislation than you think
In addition to the DPP laws themselves, there is a layer of regulations that govern how organizations may handle data and AI. These laws are often overlooked but are crucial to the proper functioning of DPP systems.
For example, the Data Act regulates who may use and share product data, and the Data Governance Act creates a framework for trusted data sharing and intermediaries. The AI Act sets requirements for how AI systems may be deployed. The Cyber Resilience Act focuses on the security of digital infrastructure. And eIDAS, together with the EU Digital Identity Wallet, determines who may create, validate, access and sign digital information.
All these laws together define the playing field within which DPP transparency must be organized. They provide frameworks, but not answers. Organizations must do that themselves.

People remain central
No law determines what constitutes “good enough” data. No AI system is legally responsible for decisions. That responsibility lies with people, within organizations. They set up governance, assign roles and are accountable when things go wrong.
It is tempting to approach DPPs as an IT project or a compliance checklist. But that misses the point: DPPs are an organizational and governance issue, and above all an issue of human decision-making.
AI can help scale up transparency. But only humans can drive that transparency. That requires clear governance, new skills and close collaboration along the chain.
Want to know more?
Together with Positive Impact, Empact is hosting an interactive session on this topic in March 2026. During this session, we will explore how AI and human intelligence work together within DPP processes, and what trade-offs are involved. Sign up at positiveimpact.nu.
Do you have questions about DPPs, data governance or how to prepare your organization for these developments? Get in touch with Empact. We are happy to think along with you.