The Queen’s Speech and Data Governance

The Queen of England made an historic and welcome visit to Ireland in 2011.  She delivered a memorable speech at the Irish State banquet, in which she said “With the benefit of historical hindsight, we can all see things which we wish had been done differently, or not at all”.

In real life, we cannot change the past.  The same does not apply to data created in the past.  Regulators now expect financial institutions to:

  • Identify data quality mistakes made in the past
  • Correct material mistakes
  • Implement data governance controls to prevent recurrences

I quote from the UK financial regulator’s requirement that all deposit-holding financial institutions deliver a single customer view (SCV) of deposit holders:  “There may be a number of reasons why SCV data is not 100% accurate. This might be due to defects in the systems used to compile the SCV, but we would expect such defects to be picked up and rectified during the course of the systems’ development.”

Dodd-Frank, Solvency II, FATCA, Basel III and many more regulations have similar requirements. Use this checklist to check whether your organisation suffers from any common Enterprise-Wide Data Governance Issues.

What data quality mistakes have you uncovered from the past, and how have you corrected them? I’d love to hear about them.

Do you know what’s in the data you’re consuming?

Standard facts are provided about the food we buy

These days, food packaging includes ingredients and a standard set of nutrition facts.  This is required by law in many countries.

Food consumers have grown accustomed to seeing this information, and now expect it. It enables them to make informed decisions about the food they buy, based on a standard set of facts.

Remarkable as it may seem, data consumers are seldom provided with facts about the data feeding their critical business processes.

Most data consumers assume the data input to their business processes is “right”, or “OK”.  They often assume it is the job of the IT function to ensure the data is “right”.  But only the data consumer knows the intended purpose for which they require the data.  Only the data consumer can decide whether the data available satisfies their specific needs and their specific acceptance criteria. To make an informed choice, data consumers need to be provided with facts about the data content available.

Data Consumers have the right to make informed decisions based on standard data content facts

The IT function, or a data quality function, can, and should, provide standard “data content facts” about all critical data, such as the facts shown in the example.

In the sample shown, a Marketing Manager wishing to mailshot customers in the 40-59 age range might find that the data content facts satisfy his/her data quality acceptance criteria.

The same data might not satisfy the acceptance criteria for a manager in the Anti Money Laundering (AML) area requesting an ETL process to populate a new AML system.
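
To make this concrete, here is a minimal sketch of how standard data content facts might be generated automatically. It uses Python and pandas; the table, the column names and the values are hypothetical, and real acceptance criteria would be agreed with each data consumer rather than assumed.

```python
from datetime import date

import pandas as pd


def data_content_facts(df: pd.DataFrame) -> pd.DataFrame:
    """Produce simple 'data content facts' for each column:
    row count, populated count, completeness % and distinct values."""
    facts = []
    for col in df.columns:
        populated = int(df[col].notna().sum())
        facts.append({
            "field": col,
            "rows": len(df),
            "populated": populated,
            "completeness_pct": round(100.0 * populated / len(df), 1),
            "distinct_values": int(df[col].nunique(dropna=True)),
        })
    return pd.DataFrame(facts)


# Hypothetical customer extract (column names are illustrative only)
customers = pd.DataFrame({
    "customer_id": [1001, 1002, 1003, 1004],
    "date_of_birth": [date(1970, 5, 1), None, date(1958, 3, 9), date(1982, 11, 23)],
    "postcode": ["D02 X285", "BT1 1AA", None, None],
})

print(data_content_facts(customers))
```

With facts like these in hand, the Marketing Manager can judge whether date_of_birth is complete enough for an age-banded mailshot, while the AML manager might insist on near-100% completeness of identity fields before signing off the ETL.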

Increasing regulation means that organisations must be able to demonstrate the quality and trace the origin of the data they use in critical business processes.

In Europe, Solvency II requires insurance and reinsurance undertakings to demonstrate that the data they use for solvency calculations is as complete, appropriate and accurate as required for the intended purpose. Other regulatory requirements, such as Dodd-Frank in the USA, Basel III and BCBS 239, also seek increasing transparency regarding the quality of the data underpinning our financial system.

While regulation may be a strong driving force for providing standard data content facts, an even stronger one is the business benefit to be gained from being informed.  Some time ago, Gartner research showed that approximately 70% of CRM projects failed.  I wonder whether the business owners of those proposed CRM systems were ever shown data content facts about the data available to populate them.

In years to come, we will look back on those crazy days when data consumers were not shown data content facts about the data they were consuming.

Incomplete loan data puts €8.2 billion at risk

Ireland’s leading business newspaper, The Sunday Business Post, reported on 13 November 2011 that incomplete loan documentation data could complicate banks’ ability to take security on €8.2 billion worth of loans in the event of a default.

Data quality measurement can detect incomplete data

Central bank researchers discovered incomplete data in 78,000 of 688,000 loans surveyed. The researchers were producing a paper for a conference on the Irish mortgage market on October 13 2011. They found 10,094 loans lacked a property identifier, 35,044 had no initial valuation, 15,413 had no valuation date, and 18,628 specified no geographic data.

Similar issues with bad loan data led to greater haircuts for the banks when the National Asset Management Agency (NAMA) transferred billions in assets in 2009 and 2010.  In the US, banks have been stopped from pursuing delinquent borrowers where loan data was incomplete or missing.

How could such a situation arise?  How can similar problems be prevented?

Front-line staff are often under pressure to complete a sale and “sort out the details later” (as discussed in a previous post).  Hence even the most robust and rigorous data validation processes often provide a “bypass” facility. This is normal business practice, and perfectly acceptable.  In many instances, critical documentation for a loan (or other product) may not be available at the time of data entry. Problems only arise if no one goes back to “sort out the details later”.  One or two loans with incomplete data may not pose a major risk, but incomplete data in more than 10% of a loan book spells serious trouble.

Common-sense data quality management steps can prevent similar problems arising in your organisation. Data validation alone is insufficient; data quality measurement and ongoing data quality monitoring are also required.

In the case study reported above, central bank researchers used data quality measurement to detect the incomplete loan data.  Similar data quality measurement can and should be incorporated into all business-critical systems.  Regular monitoring could generate an alert when the percentage of loans with incomplete data exceeds a threshold, say 2%.  Alternatively, monitoring could generate an alert when the time limit for “sorting the details out later” has been exceeded.
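
A minimal sketch of such a measurement and monitoring check is shown below, in Python with pandas. The required fields mirror those reported in the article, but the column names, the 2% threshold and the alerting mechanism are illustrative assumptions, not the central bank’s actual method.

```python
import pandas as pd

# Illustrative required fields, mirroring those reported in the article
REQUIRED_FIELDS = ["property_id", "initial_valuation", "valuation_date", "geographic_area"]
ALERT_THRESHOLD = 0.02  # alert when more than 2% of loans are incomplete


def monitor_loan_completeness(loans: pd.DataFrame) -> bool:
    """Measure completeness of the loan book and flag a breach of the threshold."""
    # A loan is 'incomplete' if any required field is missing
    incomplete = loans[REQUIRED_FIELDS].isna().any(axis=1)
    rate = incomplete.mean()

    print(f"Loans surveyed: {len(loans)}")
    for field in REQUIRED_FIELDS:
        print(f"  missing {field}: {int(loans[field].isna().sum())}")
    print(f"Incomplete loans: {int(incomplete.sum())} ({rate:.1%})")

    breach = rate > ALERT_THRESHOLD
    if breach:
        print("ALERT: incomplete-data rate exceeds threshold; escalate to data governance.")
    return breach
```

Run on a schedule, say monthly, a check like this would surface a 10%-plus incompleteness rate long before it appears in a researcher’s conference paper.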

This case study highlights the difference between data validation and data quality measurement.  I will deal with this topic in my next post.

Feedback, as always, most welcome.

What is your undertaking-wide common understanding of data quality?

Do you have an undertaking-wide common understanding of data quality?  If not – I suggest you read on…

When a serious “data” problem arises in your organisation, how is it discussed? (By “serious”, I mean a data problem that has cost, or could cost, so much money that it has come to the attention of the board.)

What Data Quality KPIs does your board request, or receive, to enable board members to understand the problem with the quality of the data? What data quality controls does your board expect to be in place to ensure that critical data is complete, appropriate and accurate?

If your board has delegated authority to a data governance committee, what is the data governance committee’s understanding of “Data Quality”?  Is it shared across your organisation?  Do you all speak the same language, and use the same terminology when discussing “Data Quality”?  In brief – are you all singing from the same “Data Quality Hymn Sheet”?

Why do I ask?

Solvency II – What is your undertaking-wide common understanding of Data Quality?

For the first time, a regulator has stated that organisations must have an “undertaking-wide common understanding of data quality”.

Solvency II requires insurance organisations to demonstrate that the data underpinning their solvency calculations is as complete, appropriate and accurate as required for its intended purpose.  The guidance from the regulator goes further than that.

CP 56, paragraph 5.178 states:  “Based on the criteria of “accuracy”, “completeness” and “appropriateness”… the undertaking shall further specify its own concept of data quality.  Provided that undertaking-wide there is a common understanding of data quality, the undertaking shall also define the abstract concept of data quality in relation to the various types of data in use… The undertaking shall eventually assign to the different data sets specific qualitative and/or quantitative criteria which, if satisfied, qualify them for use in the internal model.”

Business Requirements should be clear, measurable and testable. Unfortunately, the SII regulator uses complex language that makes the SII Data Quality Management and Governance requirements woolly, ambiguous and open to interpretation.  My interpretation of the guidance is that the regulator will expect you to demonstrate your “undertaking-wide common understanding of data quality”.

What might a common understanding of data quality look like?

Within the Data Quality industry, commonly used dimensions of data quality include the following (a sketch of simple checks follows the list):

  • Completeness
    Is the data populated?
  • Validity
    Is the data within the permitted range of values?
  • Accuracy
    Does the data represent reality or a verifiable source?
  • Consistency
    Is the same data consistent across different files/tables?
  • Timeliness
    Is the data available when needed?
  • Accessibility
    Is the data easily accessible, understandable and usable?
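
For illustration only, here is how a few of these dimensions might be expressed as simple, repeatable checks in Python with pandas. The policy data and the permitted values are hypothetical; accuracy, timeliness and accessibility usually need reference data or process metrics rather than a one-line check.

```python
import pandas as pd


def completeness(series: pd.Series) -> float:
    """Completeness: share of rows that are populated."""
    return float(series.notna().mean())


def validity(series: pd.Series, allowed: set) -> float:
    """Validity: share of populated rows within the permitted range of values."""
    populated = series.dropna()
    return float(populated.isin(list(allowed)).mean()) if len(populated) else 1.0


def consistency(left: pd.Series, right: pd.Series) -> float:
    """Consistency: share of rows where two sources hold the same value."""
    return float((left == right).mean())


# Hypothetical policy status held in two systems
policies = pd.DataFrame({
    "status_in_admin_system": ["ACTIVE", "LAPSED", None, "ACTV"],
    "status_in_warehouse": ["ACTIVE", "LAPSED", "ACTIVE", "ACTIVE"],
})

print("Completeness:", completeness(policies["status_in_admin_system"]))
print("Validity:    ", validity(policies["status_in_admin_system"], {"ACTIVE", "LAPSED", "CANCELLED"}))
print("Consistency: ", consistency(policies["status_in_admin_system"], policies["status_in_warehouse"]))
```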

Little did I know at the time I wrote the above blog post that a regulator would soon require organisations to demonstrate their understanding of data quality, and demonstrate that it is shared “undertaking-wide”.

How might you demonstrate that your understanding of data quality is “undertaking-wide” and “common”?

You could demonstrate that multiple “data dependent” processes have a shared understanding of data quality (processes such as CRM, Anti Money Laundering, Anti Fraud, Single View of Customer etc.)

In the UK, the Pensions Regulator (tPR) has issued record-keeping requirements which require pension companies to measure and manage the quality of their scheme data.  I believe the Solvency II “independent third party” will at least expect to see a common understanding of data quality shared between Solvency II and tPR programmes.

What do you think? Please share…

Data Governance – Did you drop something?

Welcome to part 5 of Solvency II Standards for Data Quality – common sense standards for all businesses.

Solvency II Data Quality – Is your data complete?

I suspect C-level management worldwide believe their organisation has controls in place to ensure the data on which they base their critical decisions is “complete”. It’s “applied common sense”.

Therefore, C-level management would be quite happy with the Solvency II data quality requirement that states: “No relevant data available is excluded from consideration without justification (completeness)” (Ref: CP 56 paragraph 5.181).

So… what could go wrong?

In this post, I discuss one process at high risk of inadvertently excluding relevant data – the “Data Extraction” process.

“Data Extraction” is part of the most common business process in the world, the “Extract, Transform, Load process”, or ETL for short. Data required by one business area (e.g. Regulatory reporting) is present in different (source) systems. The source systems are often operational systems. Data is commonly “extracted” from “operational systems” and fed into “informational systems” (which I refer to as “End of Food Chain Systems”).

If the data extraction can be demonstrated to be a complete copy – there is no risk of inadvertently omitting relevant data. In my experience, few data extractions are complete copies.

In most instances, data extractions are “selective”.  In the insurance industry for example, the selection may be done based on product type, or perhaps policy status.  This is perfectly acceptable – so long as any “excluded data” is justified.

Over time, new products may be added to the operational system(s). There is a risk that the data extraction process is not updated, so the new products are inadvertently excluded and never make it to the “end of food chain” informational system (CRM, BI, Solvency II, Anti-Money Laundering, etc.).

So… what can be done to manage this risk?

I propose a “Universal Data Governance Principle”, namely: “Within the data extraction process, the decision to EXCLUDE data is as important as the decision to INCLUDE data.”

To implement the principle, all data extractions (regardless of industry) should include the following controls (a sketch in code follows the list).

  1. Record the total population (of source data)
  2. Profile the source data based on the selection field (e.g. product type)
  3. Maintain an inclusion selection list (e.g. product types to be included)
  4. Maintain an exclusion selection list (e.g. product types to be excluded), with documented justification
  5. Generate an alert when a value is found in the selection field that is NOT in either list (e.g. a new product type)
  6. Monitor the control regularly to verify it is working
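
To make the controls concrete, here is a minimal sketch in Python with pandas. The product types, the column name and the alerting approach are illustrative assumptions; in practice the lists and their justifications would live in governed reference data, not in code.

```python
import pandas as pd

# Illustrative inclusion/exclusion lists keyed on the selection field (product_type)
INCLUDED = {"TERM_LIFE", "WHOLE_LIFE"}
EXCLUDED = {"LEGACY_PENSION"}  # exclusion justification documented elsewhere


def extract_with_controls(source: pd.DataFrame) -> pd.DataFrame:
    """Selective extraction with the six controls applied."""
    # 1. total population of source data
    print(f"Source rows: {len(source)}")

    # 2. profile of source data based on the selection field
    print(source["product_type"].value_counts(dropna=False))

    # 5. alert when a value appears that is in neither list (e.g. a new product type)
    unknown = set(source["product_type"].dropna().unique()) - (INCLUDED | EXCLUDED)
    if unknown:
        print(f"ALERT: unclassified product types found: {unknown}; "
              "include them, or exclude them with documented justification.")

    # 3. and 4. apply the inclusion list; exclusions are deliberate, not accidental
    return source[source["product_type"].isin(list(INCLUDED))]


# 6. monitor the control regularly, e.g. run this check on every scheduled extraction
```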

So ask yourself: can you demonstrate that your “data extractions” don’t overlook anything? Can you demonstrate that “No relevant data available is excluded from consideration without justification (completeness)”?

Feedback welcome, as always.

FSA SII progress review findings – More Data Governance required

In February 2011, the UK Financial Services Authority published the findings of its Solvency II Internal Model Approval Process (IMAP) thematic review.

Worrying, but not surprising, is the finding that data management, data quality and data governance are the areas requiring most attention. I include the specific paragraphs below:

3.2 Data management appeared to be one area where firms still have comparatively more to do to achieve the likely Solvency II requirements.

3.15 Data quality: Few firms provided sufficient evidence to show that data used in their internal model was accurate, complete and appropriate.

6.10 We witnessed little challenge or discussion on data quality at board level. We expect issues and reporting on data governance to find a regular place within board and committee discussions. Firms need to ensure that adequate and up-to-date quality management information is produced. It is important that the board has the necessary skills to ask probing questions.

See the full report at:

http://www.fsa.gov.uk/pubs/international/imap_final.pdf

Know your data

You must know your data.

Do you know what’s in your data box of chocolates?

You must know where it is, what it should contain and what it actually contains.

When your data does not contain what it should, you must have a process for correcting it.

CEOs, CFOs and CROs often take the above as “given”.  They make business-critical decisions using information derived from data within their organisation.  After all, it’s applied common sense.

For the insurance industry, Solvency II requires evidence that you are applying common sense.

If you operate in the EU market or process the personal data of EU data subjects, you must comply with the EU General Data Protection Regulation (GDPR) or face severe fines. To comply, you must “know your (personal) data” and how you manage it.

In my experience, data is like a box of chocolates: “You never know what you’re gonna get.”

Do you know your data?