07/01/2021

How to build a data quality framework 

We recently blogged about the importance of data quality, outlining steps that financial services firms can take to ensure that data quality is measured, maintained and improved over time. 

 

A crucial part of this process is devising a data quality framework. This enables key stakeholders to address fundamental questions relating to the quality of data that flows into their organisation. Questions like: “Who owns the data?”, “Who takes responsibility?” and “What steps must be taken to ensure there is a robust, firm-wide approach to data quality?” 

 

Clearly, this requires a lot of thought and collaboration. A single person, team or department cannot define a framework that promotes data quality across the wider firm. To do that, focus and sponsorship from senior management are critical. 

 

Here at Mudano, we’ve spent a lot of time working with financial services organisations to help them conceptualise and implement data frameworks, and we want to share our practical experiences with you to provide a sense of how the process works. 

 

Intense but rewarding 

 

We recently worked with a large UK bank which understands that data is driving transformational capabilities and whose vision is to use data as a strategic asset to create superlative customer experiences. The client knew that, to achieve this aim, data consumers across the organisation must be able to trust the quality of its data. Data quality was fundamental to the firm’s wider data strategy and data management policy, not to mention its regulatory obligations. 

 

Management realised the bank could benefit from some specialist data advice, so they called us in to help out. The project involved six workshop sessions over the course of two weeks, educating key stakeholders on the theory and practice of assuring data quality. The experience was intense but incredibly rewarding – for us and the client. 

 

The importance of rules and controls 

 

The objective was to help the bank to build a firm-wide data quality process model, providing the group with a common approach to improving and managing data quality. To make this a reality, we had to take a step back and explain the significance of data lineage. 

 

Data quality assurance relies upon the quality of data being evidenced across its end-to-end journey, also known as its data lineage. This is ultimately achieved by measuring data using rules and improving the quality of data by implementing controls across dimensions. 

 

These terms might sound a little complicated and esoteric, but when broken down, they make complete sense: 

 

  • Rules are statements that specify how data should be processed in an organisation and are used to assess the quality of data. 

 

  • Controls are actions that, when deployed, reduce or mitigate risks associated with the quality of the data being managed, produced, maintained and used. 

 

  • Dimensions are used by data management professionals to describe a feature of data quality (for example, completeness, accuracy or timeliness), to assess data against defined standards in order to determine its quality, and to evidence and communicate that quality. 

 

Data quality rules and controls can share the same underlying logic but operate independently of each other. In other words, they complement one another without contradicting or counteracting each other. Deploying a robust set of rules and controls as part of a wider framework helps deliver data quality assurance. 
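To make this concrete, here is a minimal sketch in Python of what a single rule and its matching control might look like for a Completeness check. The records, field names and thresholds are purely illustrative rather than drawn from any client engagement.

```python
# A minimal sketch of a data quality rule and a matching control.
# The records and field names below are hypothetical.

records = [
    {"account_id": "A001", "sort_code": "12-34-56", "balance": 1500.0},
    {"account_id": "A002", "sort_code": None,       "balance": 250.0},
    {"account_id": "A003", "sort_code": "65-43-21", "balance": None},
]

def completeness_rule(record, field):
    """Rule: the field must be populated (Completeness dimension)."""
    return record.get(field) is not None

def completeness_control(records, field):
    """Control: quarantine records that fail the rule so they cannot
    flow further down the lineage until they are remediated."""
    passed, quarantined = [], []
    for record in records:
        (passed if completeness_rule(record, field) else quarantined).append(record)
    return passed, quarantined

# Measure with the rule...
failures = [r for r in records if not completeness_rule(r, "sort_code")]
print(f"Completeness of sort_code: {1 - len(failures) / len(records):.0%}")

# ...and mitigate with the control, reusing the same underlying logic.
clean, held_back = completeness_control(records, "sort_code")
print(f"{len(held_back)} record(s) quarantined for remediation")
```

Note how the rule measures and reports, while the control acts on the same logic to reduce the risk – the two work side by side without interfering with one another.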

 

Putting theory into practice 

 

Putting data quality rules and controls into practice can be problematic. Assuring the quality of all data in all places as often as possible can be costly, resource intensive (both in terms of people and systems), and complex to implement and maintain. The alternative – assuring only select data in one place at periodic intervals – might be low cost and simple to execute, but it yields suboptimal results. 

 

No two data contexts are the same, which is why we focus on getting to grips with the unique needs, capabilities and constraints of the clients we serve. Having said this, some imperatives remain constant, no matter who we are advising and helping. Data lineage is one of them. 

 

Data lineages are often very complicated, with a significant number of data transfers, multiple and/or recursive paths and many complex data transformations. Each of these events creates a point at which data quality risk arises. Data quality rules and controls must therefore be deployed at appropriate locations to address the risk of inadequate data quality, and we work with clients to ensure this happens. 
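As a simple illustration, the sketch below treats a lineage as a sequence of stages and attaches rules only at the points where the relevant risk is introduced. The stage names and check names are invented for the example; real lineages are far richer.

```python
# A hedged sketch of attaching quality checks to points in a data lineage.
# Stage names and checks are hypothetical placeholders.

lineage = ["source_extract", "cleansing", "aggregation", "reporting"]

# Rules are deployed where the corresponding risk is introduced,
# rather than everywhere, to balance assurance against cost.
checks_by_stage = {
    "source_extract": ["sort_code_populated"],       # completeness risk on ingest
    "aggregation":    ["balances_reconcile_to_gl"],  # accuracy risk after transformation
}

for stage in lineage:
    for check in checks_by_stage.get(stage, []):
        print(f"{stage}: run rule '{check}' and report the result")
```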

 

The right approach 

 

Organisations face very real challenges in identifying, understanding and prioritising their data quality issues. Ultimately, data quality assurance is achieved by publishing the results of data quality rules and the outputs of data quality controls across the data quality dimensions. To improve and maintain data quality, we must first measure it – rules enable this. Controls must then be deployed to reduce or mitigate the risks associated with data being managed, produced, maintained and used. 
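The sketch below shows one possible way of rolling rule results up into a per-dimension scorecard for publishing. The rule names, dimensions and pass counts are invented for illustration only.

```python
# A simple sketch of publishing rule results as a per-dimension scorecard.
# All figures below are invented for illustration.

from collections import defaultdict

rule_results = [
    {"rule": "sort_code_populated",      "dimension": "Completeness", "passed": 940, "failed": 60},
    {"rule": "balance_matches_ledger",   "dimension": "Accuracy",     "passed": 990, "failed": 10},
    {"rule": "statement_issued_on_time", "dimension": "Timeliness",   "passed": 870, "failed": 130},
]

# Aggregate individual rule results up to their data quality dimension.
scorecard = defaultdict(lambda: {"passed": 0, "failed": 0})
for result in rule_results:
    scorecard[result["dimension"]]["passed"] += result["passed"]
    scorecard[result["dimension"]]["failed"] += result["failed"]

for dimension, counts in scorecard.items():
    total = counts["passed"] + counts["failed"]
    print(f"{dimension}: {counts['passed'] / total:.1%} of records passed")
```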

 

Ensuring data quality is about designing the most effective controls possible and deploying them – alongside well-defined rules – at the most appropriate points in the data lineage. By adopting this highly structured and well-documented approach, financial services organisations can reap the commercial rewards of working with quality data. 
