The Data Compatibility Blog

Referential Data Integrity Promotes Data Trust

March 24, 2024

Maxxphase’s Data Compatibility methods ensure comprehensive referential data integrity both within and between datasets, fostering trust in the data. Modern technologies such as AI/ML require trustworthy, directly interoperable datasets, and only our data compatibility methods can produce them efficiently.
 

The Disparate Data Foundation – Data You Can’t Trust

 
The dirty little secret about traditional data architectures is that no referential data integrity is enforced between datasets. Each dataset is disparate, offering no direct dataset interoperability; datasets that lack referential data integrity between them are often described as siloed. Because they lack this integrity, you should never trust data combined from multiple disparate datasets!
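To make the problem concrete, here is a minimal sketch in Python with pandas, using hypothetical CRM and billing datasets of our own invention. Each silo uses its own key domain, nothing enforces referential integrity between them, and a naive combination silently produces garbage.

```python
import pandas as pd

# Two hypothetical siloed datasets: each uses its own customer identifiers,
# and nothing enforces referential data integrity between them.
crm = pd.DataFrame({
    "cust_id": ["C-001", "C-002", "C-003"],
    "region":  ["NE", "SW", "NE"],
})
billing = pd.DataFrame({
    "customer": ["1001", "1002", "1004"],  # a different key domain entirely
    "balance":  [250.0, 80.0, 420.0],
})

# Combining them means guessing how the keys relate. This join matches
# nothing, and no constraint ever flags the mismatch.
combined = crm.merge(billing, left_on="cust_id", right_on="customer", how="left")
print(combined["balance"].isna().all())  # True -- silently untrustworthy output
```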

Beyond the lack of referential data integrity, disparate datasets are disjointed, with conflicting pairings of metadata and data content. When you integrate disparate datasets, the data from each source dataset is removed from its native data context and transformed into a new ‘integrated’ data context. Is the source dataset now equal to its slice of the integrated dataset? No, it is not. At best, the source dataset has been recast; at worst, corrupted. When someone familiar with the source dataset reviews its slice in the integrated dataset, they notice the differences in the data and no longer trust that integrated dataset. Why should anyone trust it when the recast data is unfamiliar to them?
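The effect is easy to demonstrate. Below is a minimal sketch, again in Python with pandas and again with an invented source dataset: a typical integration transform remaps local status codes and rescales amounts into the warehouse’s conventions, and a simple equality check confirms that the resulting slice no longer matches its source.

```python
import pandas as pd

# Hypothetical source dataset in its native data context: local status
# codes and amounts recorded in whole dollars.
source = pd.DataFrame({
    "order_id": [1, 2, 3],
    "status":   ["A", "A", "X"],   # local codes: A = approved, X = cancelled
    "amount":   [100, 250, 75],    # whole dollars
})

# A typical integration transform recasts the data into the warehouse's context.
integrated_slice = source.assign(
    status=source["status"].map({"A": "APPROVED", "X": "CANCELLED"}),
    amount=source["amount"] * 100,  # warehouse stores cents
)

# The slice is no longer equal to the source it came from.
print(integrated_slice.equals(source))  # False -- the data has been recast
```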
 

The Compatible Data Foundation – Data You Can Trust

Something very interesting happens when you enforce comprehensive referential data integrity among datasets: they are no longer disparate, and they become directly interoperable. As a result, you can seamlessly blend these compatible datasets on demand. Tedious and expensive data integration projects are no longer needed, because data integration is replaced by direct dataset interoperability. The data transformations that integration projects use to recast and consolidate source datasets are no longer required, or wanted.
 
By using Maxxphase’s Data Compatibility Standards as the foundation for your datasets, you ensure that the proper referential data integrity is enforced between them. Later, when compatible datasets are blended and materialized into a consolidated compatible dataset, each slice of the result is equal to its compatible source dataset. Because the datasets are never transformed, no data is recast and no differences are found. You can therefore trust data combined from compatible datasets, since comprehensive referential data integrity is enforced between them.
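As a rough illustration, and not a depiction of Maxxphase’s actual standards, here is a sketch of two hypothetical compatible datasets that share a key domain and column conventions. Blending is a plain join, nothing is recast, and each slice of the blended result still equals its source.

```python
import pandas as pd

# Hypothetical compatible datasets: both are built on the same shared key
# domain and column conventions, so no transformation is needed to blend them.
customers = pd.DataFrame({
    "cust_id": ["C-001", "C-002", "C-003"],
    "region":  ["NE", "SW", "NE"],
}).set_index("cust_id")

balances = pd.DataFrame({
    "cust_id": ["C-001", "C-002", "C-003"],
    "balance": [250.0, 80.0, 420.0],
}).set_index("cust_id")

# Blend on demand: a straight join, with no recasting of either side.
blended = customers.join(balances)

# Each slice of the blended dataset is still equal to its source.
print(blended[["region"]].equals(customers))   # True
print(blended[["balance"]].equals(balances))   # True
```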
 

Data Integrity Across Your Data Architecture

One very important step in creating trusted data is to enforce comprehensive data integrity throughout the data architecture. Each properly designed dataset has internal referential data integrity, and perhaps its own data quality assurance software; direct dataset interoperability then ensures external referential data integrity between compatible datasets. Therefore, every dataset in your data architecture should be a compatible dataset. These compatible datasets seamlessly unify your data assets into a modular data fabric, and this fabric of trustworthy, analytics-ready data is the perfect data architecture for your business’s modern technology initiatives such as AI/ML.
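One way such an architecture-wide integrity sweep might look, offered as a hedged sketch rather than Maxxphase’s actual tooling, is a small validator that reports any child rows whose keys have no match in a parent dataset:

```python
import pandas as pd

def check_referential_integrity(child: pd.DataFrame, child_key: str,
                                parent: pd.DataFrame, parent_key: str) -> pd.DataFrame:
    """Return the child rows whose key has no match in the parent dataset."""
    return child[~child[child_key].isin(parent[parent_key])]

# Hypothetical datasets from the architecture.
customers = pd.DataFrame({"cust_id": ["C-001", "C-002"]})
orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "cust_id":  ["C-001", "C-002", "C-999"],
})

# A routine sweep surfaces the dangling reference before anyone blends
# or analyzes the data.
print(check_referential_integrity(orders, "cust_id", customers, "cust_id"))
# order 3 references C-999, which does not exist in the customers dataset
```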
 
We feel the slogan ‘garbage in, garbage out’ is still relevant. If your data is a mess, if its volume is too great or too costly to manage, or if you need a directly interoperable data foundation of high-quality, accurate data to support advanced technologies, the compatible data foundation is your only logical solution.

Contact us
