Opportunity to apply lessons learnt in my new job

This week I started a new job as Head of Customer Information at Bank of Ireland in Dublin. I am excited at the prospect of applying the lessons I have learnt for the benefit of our customers.

I would like to take this opportunity to thank my fellow data management professionals worldwide for generously sharing their experience with me. I started to write this blog in 2009. My objective was to “Share my experience and seek to learn from the experience of others”. I have certainly learnt from the experience of others, and I hope to continue to do so.

The opinions I express on this blog will continue to be my own. I look forward to continuing to hear yours.

The dog and the frisbee and data quality management

The Wall Street Journal reported it as the “Speech of the year”.

In a speech with the intriguing title “The dog and the frisbee”, Andrew Haldane, the Bank of England Director of Financial Stability, has questioned whether the Emperor (in the form of ever-increasing, ever more complex regulations such as Solvency II, Basel III and Dodd-Frank) is naked. He points out that the Basel regulations, which have grown from 30 pages to over 600, completely failed to identify banks that were at risk of collapse, while a simple measure of a bank’s leverage ratio did identify them.

He also points out that “Dodd-Frank makes Glass-Steagall look like throat-clearing.” The Glass-Steagall Act of 1933, which separated commercial and investment banking, ran to a mere 37 pages; the Dodd-Frank Act of 2010 ran to 848 pages, and may spawn a further 30,000 pages of detailed rule-making by various agencies.

I recommend you read the speech yourself – his arguments, together with his wit, are superb. I include a brief extract below:

‘In the UK, regulatory reporting was introduced in 1974. Returns could have around 150 entries. In the Bank of England archives is a memo to George Blunden, who was to become Deputy Governor, on these proposed regulatory returns. Blunden’s handwritten comment reads: “I confess that I fear we are in danger of becoming excessively complicated and that if so we may miss the wood from the trees”.

Today, UK banks are required to fill in more than 7,500 separate cells of data – a fifty-fold rise. Forthcoming European legislation will cause a further multiplication. Banks across Europe could in future be required to fill in 30–50,000 data cells spread across 60 different regulatory forms. There will be less risk of regulators missing the wood from the trees, but only because most will have needed to be chopped down.’

Brilliant!

Andrew Haldane is calling for simpler, more basic rules. I agree with him.

I have worked in data management for over 30 years. The challenges I see today are the same challenges that arise time and time again. They are not specific to Solvency II, Basel or Dodd-Frank. They are universal. They apply to all critical data within all businesses.

The fundamental truth is: “The data is unique, but the data management principles are universal.”

It is time to stop writing specific data management and data quality management requirements into specific legislation. Regulators should co-operate with the data management profession, via independent organisations such as DAMA International, to develop a common-sense universal standard, and put the effort into improving that standard.

What do you think? I welcome your comments.

Data Governance – Did you drop something?

Welcome to part 5 of Solvency II Standards for Data Quality – common sense standards for all businesses.

Solvency II Data Quality – Is your data complete?

I suspect C-level management worldwide believe their organisation has controls in place to ensure the data on which they base their critical decisions is “complete”. It’s “applied common sense”.

Therefore, C-level management would be quite happy with the Solvency II data quality requirement that states: “No relevant data available is excluded from consideration without justification (completeness)” (Ref: CP 56 paragraph 5.181).

So… what could go wrong?

In this post, I discuss one process at high risk of inadvertently excluding relevant data – the “Data Extraction” process.

“Data Extraction” is one of the most common business processes in the world.  Data is commonly “extracted” from “operational systems” and fed into “informational systems” (which I refer to as “End of Food Chain Systems”).  Data Extraction is usually followed by a “Data Transform” step to prepare the data for loading into the target system. I will discuss “Data Transformation” risks in a later post.

If the data extraction can be demonstrated to be a complete copy, there is no risk of inadvertently omitting relevant data. Few data extractions are complete copies.

In most instances, data extractions are “selective”. In the insurance industry, for example, the selection may be based on product type, or perhaps policy status. This is perfectly acceptable – so long as any “excluded data” is justified.

Over time, new products may be added to the operational system(s). There is a risk that the data extraction process is not updated, so the new products are inadvertently excluded and never make it to the “end of food chain” informational system (CRM, BI, Solvency II, Anti-Money Laundering, etc.).

So… what can be done to manage this risk?

I propose a “Universal Data Governance Principle”, namely: “Within the data extraction process, the decision to EXCLUDE data is as important as the decision to INCLUDE data.”

To implement the principle, all data extractions (regardless of industry) should include the following controls (a sketch in code follows the list).

  1. Record the total population (count) of the source data.
  2. Profile the source data on the selection field (e.g. product type).
  3. Maintain an inclusion list (e.g. product types to be included).
  4. Maintain an exclusion list (e.g. product types to be excluded), with documented justification.
  5. Generate an alert when a value is found in the selection field that is in NEITHER list (e.g. a new product type).
  6. Monitor the control regularly to verify it is working.
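
By way of illustration, here is a minimal sketch of these controls in Python. It assumes each source record is a simple dictionary and that “product_type” is the selection field; the field name, the lists and their justifications are all hypothetical, so adapt them to your own systems.

```python
# A minimal sketch of the six extraction controls, in plain Python.
# The product types and justification text below are hypothetical.

INCLUDE = {"MOTOR", "HOME"}                                    # 3. inclusion list
EXCLUDE = {"LEGACY_PENSION": "Closed book - justification documented"}  # 4. exclusion list

def extract(records, selection_field="product_type"):
    total = len(records)                     # 1. total population of source data
    profile = {}                             # 2. profile on the selection field
    included, alerts = [], []
    for rec in records:
        value = rec.get(selection_field)
        profile[value] = profile.get(value, 0) + 1
        if value in INCLUDE:
            included.append(rec)
        elif value not in EXCLUDE:           # 5. value in NEITHER list -> alert
            alerts.append(value)
    # 6. regular monitoring: reconcile counts and surface any alerts
    print(f"source={total}, included={len(included)}, profile={profile}")
    for value in sorted(set(alerts), key=str):
        print(f"ALERT: unclassified {selection_field} value: {value!r}")
    return included
```

With a control like this in place, a new product type trips the alert instead of silently disappearing from the “end of food chain” system.
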
So, ask yourself: can you demonstrate that your “data extractions” don’t overlook anything? Can you demonstrate that “No relevant data available is excluded from consideration without justification (completeness)”?

Feedback welcome – as always.

Know your data

You must know your data.

Do you know what's in your data box of chocolates?

You must know where it is, what it should contain and what it actually contains.

When your data does not contain what it should, you must have a process for correcting it.

CEOs, CFOs and CROs often take the above as “given”. They make business-critical decisions using information derived from data within their organisation. After all, it’s applied common sense.

For the insurance industry, Solvency II requires evidence that you are applying common sense.

In my experience, data is like a box of chocolates: “You never know what you’re gonna get.”

Do you know your data?

Charter of Data Consumer rights and responsibilities

Time for a charter of Data Consumer rights and responsibilities

There are many rights enshrined in law that benefit all of us. One example is the UN’s Universal Declaration of Human Rights. Another is the “Consumer Rights” protection most countries enforce to guarantee us, the buying public, the right to expect goods and services that are of good quality and “fit for purpose”. As buyers of goods and services, we also have responsibilities. If you or I buy a “Rolex watch” for $10 from a casual street vendor, we cannot claim consumer protection rights when the watch stops working within a week. “Let the buyer beware”, or “Caveat Emptor”, is the common sense responsibility that we, as consumers, must observe.

I have previously written about business users’ right to expect good data plumbing. Business users (of data) have responsibilities also. I believe it’s time to agree a charter of rights and responsibilities for them. Business users of data are “Data Consumers” – people who use data to perform their work, whatever that work may be. Data Consumers make decisions based on the data or information available to them. Examples range from a doctor prescribing medication based on the information in a patient’s health records, to a multinational chief executive deciding to buy a business based on the performance figures available, to an actuary developing an internal model to determine Solvency II Capital Requirements.

What rights and responsibilities should data consumers have?

Here’s my starter set:

  • The right to expect data that is “fit for purpose”, data that is complete, appropriate and accurate.
  • The responsibility to define what “fit for purpose” data means to them.
  • The right to expect guidance and assistance in defining what constitutes complete, appropriate and accurate data for them.
  • The responsibility to explain the impact that “sub-standard” data would have on the work they do.
  • The right to be informed of the actual quality of the data they use.
  • The right to expect controls in place that verify the quality of the data they use meets the standard they require.

What do you think? Please feed back your suggestions.

The Ryanair Data Entry Model

I was prompted to write about the “Ryanair Data Entry Model” by an excellent post by Winston Chen on “How to measure Data Accuracy”.

Winston highlights the data quality challenge posed by incorrect data captured at the point of entry. He illustrates one cause: the use of default options in drop-down selections. He cites the example of a Canadian law enforcement agency that saw a disproportionately high occurrence of “pick-pocketing” in its crime statistics. Further investigation revealed that “pick-pocketing” was the first option in a drop-down selection of crime types.

Winston provides excellent suggestions on how to identify and prevent this source of data quality problems.  Dylan Jones of Dataqualitypro.com and others have added further great tips in the comments.

I believe you need to make Data Quality “matter” to the person entering the data – hence I recommend the use of what I call the “Ryanair Data Entry Model”. This is the data entry model now used by most low-cost airlines. As passengers, we are required to enter our own data. We take care to ensure that each piece of information we enter is correct – because it matters to us. The same applies when we make any online purchase.

With Ryanair, it is impossible to enter an invalid date (e.g. 30 Feb), but it is easy to enter the “wrong date” for our needs. For example, we may wish to fly on a Sunday, but by mistake we could enter the date for the Monday.

We ensure that we select the correct number of bags, since each one costs us money. We try to avoid having to pay for insurance, despite Ryanair’s best efforts to force it on us.

It may not be easy to make data entry “matter” to the people performing it in your organisation – but this is what you must do if you wish to “stop the rot” and prevent data quality problems “at source”. To succeed, you must measure data quality at the point of entry and provide immediate feedback to the data entry person (helping them to get it right first time), as the sketch below illustrates. Where possible, you should include data entry quality in a person’s performance review – reward for good data quality, and lack of reward for poor data quality.
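
As a minimal illustration of “immediate feedback at the point of entry”, here is a sketch in Python. The date format and the validate_travel_date function are my own hypothetical choices, not Ryanair’s actual implementation: it rejects impossible dates outright, and echoes the weekday back so the person entering the data can catch the “wrong date” mistake described above.

```python
from datetime import datetime

def validate_travel_date(text):
    """Validate a travel date at the point of entry and give immediate feedback."""
    try:
        date = datetime.strptime(text, "%d %b %Y")   # "30 Feb 2012" fails here
    except ValueError:
        return None, "Invalid date - please re-enter (e.g. '05 Aug 2012')."
    # A valid date can still be the *wrong* date for the traveller:
    # echo the weekday back so they can confirm it is the day they meant.
    return date, f"You selected {date:%A}, {date:%d %B %Y} - please confirm."

print(validate_travel_date("30 Feb 2012")[1])   # Invalid date - please re-enter ...
print(validate_travel_date("05 Aug 2012")[1])   # You selected Sunday, 05 August 2012 ...
```
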

Poor quality data entered at source is a common Data Governance issue, which I discuss further here.

Have you encountered examples of poor data quality entered at source? Have you succeeded in identifying and preventing this problem? Please share your success (and horror!) stories.

Solvency II mandates Data Governance

Welcome to part 3 of Solvency II Standards for Data Quality – common sense standards for all businesses.

Regardless of the industry you work in, you make critical business decisions based on the information available to you. You would like to believe the information is accurate. I suggest the CEIOPS standards for “Accuracy” apply to your business, and your industry, just as much as they apply to the insurance industry. I would welcome your feedback…

The CEIOPS (now renamed EIOPA) advice makes it clear that Solvency II requires you to have Data Governance in place (which CEIOPS/EIOPA refers to as “internal systems and procedures”). The following sections of the document spell this out:

3.32 In order to ensure on a continuous basis a sufficient quality of the data used in the valuation of technical provisions, the undertaking should have in place internal systems and procedures covering the following areas:

• Data quality management;

• Internal processes on the identification, collection, and processing of data; and

• The role of internal/external auditors and the actuarial function.

3.1.4.1 Data quality management – Internal processes

3.33 Data quality management is a continuous process that should comprise the following steps:

a) Definition of the data;

b) Assessment of the quality of data;

c) Resolution of the material problems identified;

d) Monitoring data quality.
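
To make the four steps concrete, here is a schematic sketch in Python of the cycle as it might be applied. Nothing here is prescribed by CEIOPS/EIOPA – the rules, field names and sample records are hypothetical, chosen only to show how definition, assessment, resolution and monitoring hang together.

```python
# Schematic sketch of the four data quality management steps as one cycle.
# The rules, field names and records below are hypothetical illustrations.

rules = {                                        # a) Definition of the data
    "premium present": lambda r: r.get("premium") is not None,
    "premium non-negative": lambda r: r.get("premium") is None or r["premium"] >= 0,
}

def assess(records):                             # b) Assessment of quality
    return [r for r in records
            if not all(rule(r) for rule in rules.values())]

records = [{"policy": "P1", "premium": 120.0},
           {"policy": "P2", "premium": None}]

failures = assess(records)
for rec in failures:                             # c) Resolution of material problems
    print(f"resolve: policy {rec['policy']} fails a data quality rule")
print(f"monitoring: {len(failures)} of {len(records)} records failing")  # d) Monitoring
```
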

I will explore the above further in my next post.  Meanwhile, what Data Quality Management processes do you have in place?  Do you suffer from common Enterprise-Wide Data Governance Issues?