Data aggregation and reporting principles – applied common sense

Principles for effective risk data aggregation and risk reporting

Basel Consultative Document
Data aggregation and reporting principles (BCBS 239)

Those of you familiar with my blog will know that I am a fan of common sense.

I believe that data quality management requires you to apply common sense principles and processes to your data.  I believe that the same common sense principles apply regardless of the industry you are in.

Your data will be unique, but the common sense questions you must ask yourself will be the same.  They include:

  • What MI reports do we need to run our business?
  • What critical data do we need in our MI reports?
  • Who owns and is responsible for gathering the critical data we need in our MI reports?
  • What should our critical data contain?
  • What metrics do we have to verify our critical data contains what it should?
  • etc…

The Basel Committee on Banking Supervision (BCBS) published what I regard as “common sense” data aggregation and reporting principles in a consultative document on 26th June 2012. The principles are commonly known as BCBS 239. The committee invited comments from interested parties, which are available at http://www.bis.org/publ/bcbs222/comments.htm. I co-operated with a group of fellow independent data professionals to comment, and you can see our comments at http://www.bis.org/publ/bcbs222/idpg.pdf. The final version is available at http://www.bis.org/publ/bcbs239.pdf. The largest banks in the world (known as Global Systemically Important Banks, or G-SIBs) must comply by January 2016. Other banks, designated Domestic Systemically Important Banks (D-SIBs), must reach compliance three years after the date on which they were so designated, which varies by bank; many received their designation during 2014.

While the document is targeted at risk management within the banking industry, the principles apply to all industries. The document explicitly refers to “risk data aggregation and risk reporting” – I suggest you ignore the word “risk” and read it as “data aggregation and reporting principles”.

Over the next while I plan to explore some of the principles proposed in the document, and the practical challenges that arise when one seeks to implement common sense data quality management principles. I welcome your input.  If you have a specific question, let me know and I will do my best to answer it.


Do you know what’s in the data you’re consuming?

Standard facts are provided about the food we buy

These days, food packaging includes ingredients and a standard set of nutrition facts.  This is required by law in many countries.

Food consumers have grown accustomed to seeing this information, and now expect it. It enables them to make informed decisions about the food they buy, based on a standard set of facts.

Remarkable as it may seem, data consumers are seldom provided with facts about the data feeding their critical business processes.

Most data consumers assume the data input to their business processes is “right”, or “OK”.  They often assume it is the job of the IT function to ensure the data is “right”.  But only the data consumer knows the intended purpose for which they require the data.  Only the data consumer can decide whether the data available satisfies their specific needs and their specific acceptance criteria. To make an informed choice, data consumers need to be provided with facts about the data content available.

Data Consumers have the right to make informed decisions based on standard data content facts

The IT function, or a data quality function, can, and should, provide standard “data content facts” about all critical data, such as the facts shown in the example.

In the sample shown, a Marketing Manager wishing to mailshot customers in the 40-59 age range might find that the data content facts satisfy his/her data quality acceptance criteria.

The same data might not satisfy the acceptance criteria for a manager in the Anti Money Laundering (AML) area requesting an ETL process to populate a new AML system.
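
To make this concrete, here is a minimal sketch (in Python, using pandas) of how an IT or data quality function might generate a standard set of “data content facts” for a customer extract. The column names (customer_id, date_of_birth, country_code) and the facts chosen are illustrative assumptions, not a reference to any real system.

```python
# Minimal sketch: generate standard "data content facts" for a customer extract.
# Column names and the facts chosen are illustrative assumptions.
import pandas as pd

def data_content_facts(df: pd.DataFrame) -> dict:
    """Produce a simple, standard set of facts about the data content."""
    age = (pd.Timestamp.today() - pd.to_datetime(df["date_of_birth"], errors="coerce")).dt.days // 365
    return {
        "total_records": len(df),
        "date_of_birth_populated_pct": round(df["date_of_birth"].notna().mean() * 100, 1),
        "customers_aged_40_to_59": int(age.between(40, 59).sum()),
        "duplicate_customer_ids": int(df["customer_id"].duplicated().sum()),
        "distinct_country_codes": int(df["country_code"].nunique(dropna=True)),
    }

sample = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "date_of_birth": ["1975-03-01", None, "1980-07-15", "2001-11-30"],
    "country_code": ["IE", "GB", "GB", None],
})
print(data_content_facts(sample))
```

The Marketing Manager, or the AML manager, can then compare facts like these against their own acceptance criteria before deciding whether the data is fit for their purpose.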

Increasing regulation means that organisations must be able to demonstrate the quality and trace the origin of the data they use in critical business processes.

In Europe, Solvency II requires insurance and re-insurance undertakings to demonstrate that the data they use for solvency calculations is as complete, appropriate and accurate as required for the intended purpose. Other regulatory requirements, such as Dodd-Frank in the USA, Basel III and BCBS 239, also seek increasing transparency regarding the quality of the data underpinning our financial system.

While regulation may be a strong driving force for providing standard data content facts, an even stronger one is the business benefit to be gained from being informed.  Some time ago, Gartner research showed that approximately 70% of CRM projects failed.  I wonder whether the business owners of those proposed CRM systems were shown data content facts about the data available to populate them.

In years to come, we will look back on those crazy days when data consumers were not shown data content facts about the data they were consuming.

Data Governance – Did you drop something?

Welcome to part 5 of Solvency II Standards for Data Quality – common sense standards for all businesses.

Solvency II Data Quality – Is your data complete?

I suspect C-level management worldwide believe their organisation has controls in place to ensure the data on which they base their critical decisions is “complete”. It’s “applied common sense”.

Therefore, C-level management would be quite happy with the Solvency II data quality requirement that states: “No relevant data available is excluded from consideration without justification (completeness)” (Ref: CP 56 paragraph 5.181).

So… what could go wrong?

In this post, I discuss one process at high risk of inadvertently excluding relevant data – the “Data Extraction” process.

“Data Extraction” is part of the most common business process in the world, the “Extract, Transform, Load process”, or ETL for short. Data required by one business area (e.g. Regulatory reporting) is present in different (source) systems. The source systems are often operational systems. Data is commonly “extracted” from “operational systems” and fed into “informational systems” (which I refer to as “End of Food Chain Systems”).

If the data extraction can be demonstrated to be a complete copy – there is no risk of inadvertently omitting relevant data. In my experience, few data extractions are complete copies.

In most instances, data extractions are “selective”.  In the insurance industry for example, the selection may be done based on product type, or perhaps policy status.  This is perfectly acceptable – so long as any “excluded data” is justified.

Over time, new products may be added to the operational system(s). There is a risk that the data extraction process is not updated, so the new products are inadvertently excluded and never make it to the “end of food chain” informational system (CRM, BI, Solvency II, Anti-Money Laundering, etc.).

So… what can be done to manage this risk?

I propose a “Universal Data Governance Principle” – namely: “Within the data extraction process, the decision to EXCLUDE data is just as important as the decision to INCLUDE data.”

To implement the principle, all data extractions (regardless of industry) should include the following control (a rough code sketch follows the list).

  1. Total population (of source data)
  2. Profile of the source data based on the selection field (e.g. product type)
  3. Inclusion selection list (e.g. product types to be included)
  4. Exclusion selection list (e.g. product types to be excluded), with documented justification
  5. An alert generated whenever a value is found in the “selection field” that is NOT in either list (e.g. a new product type)
  6. Regular monitoring of the control to verify it is working
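
By way of illustration, here is a minimal Python sketch of such a control, assuming the selection field is product type. The include and exclude lists, and the alert mechanism (a simple print), are assumptions for the example; point 6, regular monitoring, is an operational task rather than code.

```python
# Minimal sketch of the extraction control, assuming the selection field is "product_type".
from collections import Counter

INCLUDE = {"MOTOR", "HOME"}       # 3. product types to be included
EXCLUDE = {"LEGACY_PENSION"}      # 4. excluded, with justification documented elsewhere

def extract_with_control(source_rows):
    """Extract included rows and alert on any product type not in either list."""
    profile = Counter(row["product_type"] for row in source_rows)
    print(f"1. Total source population: {sum(profile.values())}")
    print(f"2. Profile by product_type: {dict(profile)}")

    unknown = set(profile) - INCLUDE - EXCLUDE    # 5. values found in neither list
    if unknown:
        print(f"ALERT: product types not in either list (risk of silent exclusion): {unknown}")

    return [row for row in source_rows if row["product_type"] in INCLUDE]

rows = [
    {"policy_id": 101, "product_type": "MOTOR"},
    {"policy_id": 102, "product_type": "LEGACY_PENSION"},
    {"policy_id": 103, "product_type": "TRAVEL"},   # new product, not yet in either list
]
extracted = extract_with_control(rows)              # prints an ALERT for TRAVEL
```
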
So – ask yourself – can you demonstrate that your “data extractions” don’t overlook anything? Can you demonstrate that “No relevant data available is excluded from consideration without justification (completeness)”?
Feedback welcome – as always.

Know your data

You must know your data.

Do you know what’s in your data box of chocolates?

You must know where it is, what it should contain and what it actually contains.

When your data does not contain what it should, you must have a process for correcting it.

CEOs, CFOs and CROs often take the above as “given”.  They make business-critical decisions using information derived from data within their organisation.  After all, it’s applied common sense.

For the insurance industry, Solvency II requires evidence that you are applying common sense.

If you operate in the EU market or process the personal data of EU data subjects, you must comply with the EU General Data Protection Regulation (GDPR) or face severe fines. To comply, you must “know your (personal) data” and how you manage it.

In my experience, data is like a box of chocolates: “You never know what you’re gonna get.”

Do you know your data?

Charter of Data Consumer rights and responsibilities

Time for a charter of Data Consumer rights and responsibilities

There are many rights enshrined in law that benefit all of us. One example is the UN’s Universal Declaration of Human Rights.  Another example is the “Consumer Rights” protection most countries enforce to guarantee us, the buying public, the right to expect goods and services that are of good quality and “fit for purpose”.  As buyers of goods and services, we also have responsibilities.  If you or I buy a “Rolex watch” for $10 from a casual street vendor, we cannot claim consumer protection rights if the watch stops working within a week. “Let the buyer beware”, or “Caveat Emptor”, is the common sense responsibility that we, as consumers, must observe.

I have previously written about business users’ right to expect good data plumbing. Business users (of data) have responsibilities also.  I believe it’s time to agree a charter of rights and responsibilities for them.  Business users of data are “Data Consumers” – people who use data to perform their work, whatever work that may be.  Data Consumers make decisions based on the data or information available to them. Examples range from a doctor prescribing medication based on the information in a patient’s health records, to a multi-national chief executive deciding to buy a business based on the performance figures available, to an actuary developing an internal model to determine Solvency II Capital Requirements.

What rights and responsibilities should data consumers have?

Here’s my starter set:

  • The right to expect data that is “fit for purpose”, data that is complete, appropriate and accurate.
  • The responsibility to define what “fit for purpose” data means to them.
  • The right to expect guidance and assistance in defining what constitutes complete, appropriate and accurate data for them.
  • The responsibility to explain the impact that “sub-standard” data would have on the work they do.
  • The right to be informed of the actual quality of the data they use.
  • The right to expect controls in place that verify the quality of the data they use meets the standard they require.

What do you think? Please feed back your suggestions.

The Ryanair Data Entry Model

I was prompted to write about the “Ryanair Data Entry Model” by an excellent post by Winston Chen on “How to measure Data Accuracy”.

Winston highlights the data quality challenge posed by incorrect data captured at point of entry.  He illustrates one cause as the use of default drop down selection options. He cites an example of a Canadian law enforcement agency that saw a disproportionately high occurrence of “pick pocketing” within crime statistics.  Further investigation revealed that “pick pocketing” was the first option in a drop down selection of crime types.

Winston provides excellent suggestions on how to identify and prevent this source of data quality problems.  Dylan Jones of Dataqualitypro.com and others have added further great tips in the comments.

I believe you need to make Data Quality “matter” to the person entering the data – hence I recommend the use of what I call the “Ryanair Data Entry Model”.   This is the data entry model now used by most low cost airlines. As passengers, we are required to enter our own data. We take care to ensure that each piece of information we enter is correct – because it matters to us.  The same applies when we make any online purchase.

With Ryanair, it is impossible to enter an invalid date (e.g. 30 Feb), but it is easy to enter the “wrong date” for our needs. For example, we may wish to fly on a Sunday, but by mistake we could enter the date for the Monday.

We ensure that we select the correct number of bags, since each one costs us money. We try to avoid having to pay for insurance, despite Ryanair’s best efforts to force it on us.

It may not be easy to have data entry “matter” to the people performing it in your organisation – but this is what you must do if you wish to “stop the rot” and prevent data quality problems “at source”. To succeed, you must measure data quality at the point of entry and provide immediate feedback to the data entry person (helping them to get it right first time). Where possible, you should include data entry quality in a person’s performance review – reward for good data quality, and lack of reward for poor data quality.
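
As a rough sketch of “measure at the point of entry and feed back immediately”, here is a minimal Python example. The field names and validation rules are illustrative assumptions; a real form would display these messages to the person as they type.

```python
# Minimal sketch: validate an entry at the point of capture and feed the result
# straight back to the person entering it. Field names and rules are assumptions.
from datetime import date

def validate_entry(entry: dict) -> list[str]:
    """Return feedback messages; an empty list means the entry passed."""
    feedback = []
    try:
        travel_date = date.fromisoformat(entry.get("travel_date", ""))
        if travel_date < date.today():
            feedback.append("Travel date is in the past. Please check it.")
    except ValueError:
        feedback.append("Travel date is not a valid calendar date (30 Feb is impossible).")
    if not entry.get("passenger_name", "").strip():
        feedback.append("Passenger name is required.")
    return feedback

for message in validate_entry({"travel_date": "2030-02-30", "passenger_name": "Ada Lovelace"}):
    print(message)   # immediate feedback helps them get it right first time
```

Note that, as with the Ryanair example, such checks can only catch invalid entries; only the person entering the data can catch a date that is valid but wrong for their needs.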

Poor quality data entered at source is a common Data Governance issue, which I discuss further elsewhere on this blog.

Have you encountered examples of poor data quality entered at source?  Have you succeeded in identifying and preventing this problem? Please share your success (and horror !) stories.

What does complete, appropriate and accurate mean?

Welcome to part 2 of Solvency II Standards for Data Quality – common sense standards for all businesses.

The Solvency II Standards for Data Quality run to 22 pages and provide an excellent substitute for counting sheep if you suffer from insomnia. They were published by the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS), now renamed EIOPA.

Solvency II Data Quality Standards – not as page-turning as a Dan Brown novel

I accept that Data Quality Standards cannot aspire to be as page-turning as a Dan Brown novel – but plainer English would help.

Anyway – enough  complaining.  As mentioned in part 1, the standards require insurance companies to provide evidence that their Solvency II submissions are based on data that is “as complete, appropriate, and accurate as possible”.  In this post, I will explore what the regulator means by “complete”, “appropriate” and “accurate”.  I will look at the terms in the context of data quality for Solvency II, and will highlight how the same common sense standards apply to all organisations.

APPROPRIATE: “Data is considered appropriate if it is suitable for the intended purpose” (page 19, paragraph 3.62).

Insurance companies must ensure they can provide for insurance claims. Hence, to be “appropriate”, the data must relate to the risks covered and to the value of the capital held to cover potential claims.  Insurance industry knowledge is required to identify the “appropriate” data, just as auto industry knowledge is required to identify data “appropriate” to the auto industry, and so on.

COMPLETE: (This one is pretty heavy, but I will include it verbatim, and then seek to simplify – all comments, contributions and dissenting opinions welcome) (page 19, paragraph 3.64)

“Data is considered to be complete if:

  • it allows for the recognition of all the main homogeneous risk groups within the liability portfolio;
  • it has sufficient granularity to allow for the identification of trends and to the full understanding of the behaviour of the underlying risks; and
  • if sufficient historical information is available.”

As I see it, there must be enough data, at a low enough level of detail, to provide a realistic picture of the main types of risks covered. Enough historical data is also required, since the history of past claims provides a basis for estimating the scale of future claims.

As with the term “appropriate”, I believe that insurance industry knowledge is required to identify the data needed to ensure that the data set is “complete”.
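
Purely as an illustration of this reading of “complete”, the sketch below flags homogeneous risk groups that have too few claims records or too few years of history. The column names (claim_id, claim_date, risk_group) and the thresholds are my own assumptions, not values from the standard.

```python
# Illustrative sketch: flag risk groups with insufficient granularity or history.
# Thresholds and column names are assumptions, not regulatory values.
import pandas as pd

MIN_RECORDS_PER_GROUP = 100   # assumed proxy for "sufficient granularity"
MIN_YEARS_OF_HISTORY = 5      # assumed proxy for "sufficient historical information"

def completeness_flags(claims: pd.DataFrame) -> pd.DataFrame:
    """Summarise record counts and years of history per homogeneous risk group."""
    claims = claims.assign(claim_year=pd.to_datetime(claims["claim_date"]).dt.year)
    summary = claims.groupby("risk_group").agg(
        records=("claim_id", "count"),
        years_of_history=("claim_year", "nunique"),
    )
    summary["sufficient"] = (summary["records"] >= MIN_RECORDS_PER_GROUP) & (
        summary["years_of_history"] >= MIN_YEARS_OF_HISTORY
    )
    return summary
```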

ACCURATE: I believe this one is “pure common sense”, and applies to all organisations, across all industries. (page 19, paragraph 3.66)

Data is considered accurate if:

  • it is free from material mistakes, errors and omissions;
  • the recording of information is adequate, performed in a timely manner and is kept consistent across time;
  • a high level of confidence is placed on the data; and
  • the undertaking must be able to demonstrate that it recognises the data set as credible by using it throughout the undertaking’s operations and decision-making processes.

Update – In October 2013, following an 18-month consultative process, DAMA UK published a white paper explaining six primary data quality dimensions:

1. Completeness
2. Uniqueness
3. Timeliness
4. Validity
5. Accuracy
6. Consistency
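
As a rough sketch of how some of these dimensions might be measured on a simple customer table (the column names and the email validity rule are illustrative assumptions):

```python
# Illustrative measures for three of the six dimensions on a simple customer table.
import pandas as pd

def dimension_metrics(df: pd.DataFrame) -> dict:
    """Completeness, uniqueness and validity examples; the other dimensions
    usually need timestamps or a trusted reference source to compare against."""
    email_valid = df["email"].fillna("").str.match(r"[^@\s]+@[^@\s]+\.[^@\s]+")
    return {
        "completeness_email_pct": round(df["email"].notna().mean() * 100, 1),
        "uniqueness_customer_id_pct": round(df["customer_id"].nunique() / len(df) * 100, 1),
        "validity_email_pct": round(email_valid.mean() * 100, 1),
    }

print(dimension_metrics(pd.DataFrame({
    "customer_id": [1, 2, 3, 3],
    "email": ["a@example.com", "not-an-email", None, "c@example.com"],
})))
```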

For more details, see my blog post, Major step forward in Data Quality Measurement.