digna 2026.01 Expands Enterprise Data Validation Inside the Database
New release adds multi-column uniqueness and referential integrity checks, enabling enterprise data validation directly inside source databases.
The new release introduces advanced validation features including multi-column uniqueness checks and referential integrity validation, enabling organizations to enforce complex structural and relational data quality rules directly where data is stored.
As enterprise data environments scale across warehouses, lakes, and operational systems, organizations increasingly struggle to keep data consistent and trustworthy without adding processing layers or moving large volumes of data outside their infrastructure.
The latest update to digna addresses this challenge by expanding its in-database validation architecture, allowing validation logic to run within the source database through SQL-based inspections rather than exporting datasets into external data quality engines.
According to the company, this approach reduces operational complexity while helping organizations maintain stronger control over sensitive enterprise data.
“Many traditional data quality tools require exporting large datasets before checks can be performed,” said Marcin Chudeusz, CEO of digna. “Our approach moves the validation logic to where the data already lives. This allows teams to enforce sophisticated data quality rules without introducing additional data movement.”
Expanded Validation Capabilities
Release 2026.01 introduces several enhancements to the digna Data Validation module, expanding the types of data integrity rules organizations can enforce across their environments.
One of the key additions is multi-column uniqueness validation, which allows teams to verify compound business keys across datasets. Many real-world business entities rely on combinations of attributes—such as account identifiers, product codes, or timestamps—to define uniqueness. Traditional single-column checks cannot detect duplicates within these compound relationships.
The new functionality enables validation of configurable column sets, helping identify duplicate business entities that may otherwise remain undetected in complex analytical systems.
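A compound-key uniqueness check of this kind can be expressed as a single SQL aggregation that runs inside the database. The sketch below is illustrative only: the table, column names, and SQLite backend are assumptions for demonstration, not digna's actual schema or configuration. Only the duplicate count leaves the database.

```python
import sqlite3

# Minimal sketch of an in-database multi-column uniqueness check over a
# hypothetical "orders" table keyed by (account_id, product_code, order_ts).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (account_id TEXT, product_code TEXT, order_ts TEXT);
    INSERT INTO orders VALUES
        ('A1', 'P100', '2026-01-01'),
        ('A1', 'P100', '2026-01-01'),  -- duplicate compound key
        ('A1', 'P200', '2026-01-01'),
        ('A2', 'P100', '2026-01-02');
""")

# Count compound-key combinations that occur more than once; the grouping
# and counting happen entirely inside the database engine.
duplicate_groups = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT account_id, product_code, order_ts
        FROM orders
        GROUP BY account_id, product_code, order_ts
        HAVING COUNT(*) > 1
    )
""").fetchone()[0]

print(duplicate_groups)  # 1 duplicated combination
```

A single-column check on any one of these columns would report no duplicates, which is exactly the gap the compound check closes.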
The release also introduces referential integrity checks designed to validate relationships between datasets. These checks ensure that foreign key values in one datasource exist within a referenced datasource, helping detect orphaned records and broken relationships that can undermine downstream analytics and reporting.
The integrity checks can validate relationships that span different schemas, tables, views, or database connections within the same project.
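A referential integrity check of this form reduces to an anti-join that counts foreign-key values with no match in the referenced datasource. The example below is a sketch under assumed names (an "orders" table referencing "customers"), not digna's actual API, again using SQLite purely for illustration.

```python
import sqlite3

# Sketch of a SQL-based referential integrity check: count rows in "orders"
# whose customer_id has no matching row in "customers" (orphaned records).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY);
    INSERT INTO customers VALUES (1), (2);
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER);
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 3);  -- customer 3 is missing
""")

# LEFT JOIN + IS NULL finds foreign-key values absent from the parent table.
orphaned = conn.execute("""
    SELECT COUNT(*)
    FROM orders o
    LEFT JOIN customers c ON o.customer_id = c.id
    WHERE c.id IS NULL
""").fetchone()[0]

print(orphaned)  # 1 orphaned order
```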
These validation mechanisms are intended to support common enterprise scenarios such as:
maintaining data warehouse integrity
validating master data relationships
supporting regulatory reporting
improving reliability of downstream analytics and BI systems
Validation Without Data Movement
A distinguishing aspect of the platform’s architecture is that validation runs directly within the source database.
Instead of extracting data into external processing environments, digna executes SQL-based inspections through database interfaces and evaluates the resulting metrics externally. This design allows organizations to monitor data quality without copying datasets or creating additional storage layers.
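The pattern described here can be sketched as "metrics out, not data out": the database computes a summary statistic via SQL, and only that scalar crosses the database boundary for external evaluation. The table, metric, and threshold below are hypothetical examples, not details of digna's implementation.

```python
import sqlite3

# Illustrative "in-database inspection" pattern: the engine computes a
# null-ratio metric over a hypothetical "events" table; the rows themselves
# never leave the database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (id INTEGER, payload TEXT);
    INSERT INTO events VALUES (1, 'a'), (2, NULL), (3, 'c'), (4, 'd');
""")

# Only the aggregate counts cross the database boundary.
total, nulls = conn.execute(
    "SELECT COUNT(*), SUM(CASE WHEN payload IS NULL THEN 1 ELSE 0 END) FROM events"
).fetchone()
null_ratio = nulls / total

# The pass/fail decision is evaluated externally against a configured threshold.
check_passed = null_ratio <= 0.10
print(null_ratio, check_passed)  # 0.25 False
```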
The company states that this approach is particularly relevant for enterprises operating in regulated sectors where data residency, governance, and operational control are critical considerations.
“Enterprises increasingly want data quality capabilities that integrate with their existing platforms rather than requiring additional infrastructure,” said Danijel Kivaranovic, PhD, CTO of digna. “Running validation directly in the source database helps maintain governance and reduces unnecessary system complexity.”
Supporting Complex Enterprise Data Environments
In addition to the expanded validation coverage, Release 2026.01 also introduces improvements to datasource modeling and connection management designed to support heterogeneous enterprise data landscapes.
The update includes global database connections, logical datasources, and the ability for projects to reference multiple source connections. These enhancements are intended to simplify configuration across environments where data resides in multiple warehouses or databases.
Together, the new features aim to make data quality operations easier to maintain as enterprise data architectures evolve.
About digna
digna is a European software company headquartered in Vienna, Austria, developing a modular platform for data quality and data observability. The platform enables organizations to monitor data behavior, detect anomalies, and enforce validation rules across databases, data warehouses, and analytical systems while operating within customer environments.
Mayowa Ajakaiye
digna GmbH
+4312260056