What do "complete", "appropriate" and "accurate" mean?

Welcome to part 2 of Solvency II Standards for Data Quality – common sense standards for all businesses.

The Solvency II Standards for Data Quality run to 22 pages and provide an excellent substitute for counting sheep if you suffer from insomnia. They are published by the Committee of European Insurance and Occupational Pensions Supervisors (CEIOPS), now renamed EIOPA.

Solvency II Data Quality Standards – not as page-turning as a Dan Brown novel

I accept that Data Quality Standards cannot aspire to be as page-turning as a Dan Brown novel – but plainer English would help.

Anyway – enough  complaining.  As mentioned in part 1, the standards require insurance companies to provide evidence that their Solvency II submissions are based on data that is “as complete, appropriate, and accurate as possible”.  In this post, I will explore what the regulator means by “complete”, “appropriate” and “accurate”.  I will look at the terms in the context of data quality for Solvency II, and will highlight how the same common sense standards apply to all organisations.

APPROPRIATE: “Data is considered appropriate if it is suitable for the intended purpose” (page 19, paragraph 3.62).

Insurance companies must ensure they can provide for insurance claims. Hence, to be "appropriate", the data must relate to the risks covered and to the value of the capital held to cover potential claims.  Insurance industry knowledge is required to identify the "appropriate" data, just as auto industry knowledge is required to identify data "appropriate" to the auto industry, and so on.

COMPLETE: (This one is pretty heavy, but I will include it verbatim, and then seek to simplify – all comments, contributions and dissenting opinions welcome) (page 19, paragraph 3.64)

“Data is considered to be complete if:

  • it allows for the recognition of all the main homogeneous risk groups within the liability portfolio;
  • it has sufficient granularity to allow for the identification of trends and to the full understanding of the behaviour of the underlying risks; and
  • if sufficient historical information is available.”

As I see it, there must be enough data, at a low enough level of detail, to provide a realistic picture of the main types of risks covered. Enough historical data is also required, since the history of past claims provides a basis for estimating the scale of future claims.

As with the term "appropriate", I believe that insurance industry knowledge is required to identify the data needed to ensure that the data is "complete".

ACCURATE: I believe this one is “pure common sense”, and applies to all organisations, across all industries. (page 19, paragraph 3.66)

Data is considered accurate if:

  • it is free from material mistakes, errors and omissions;
  • the recording of information is adequate, performed in a timely manner and is kept consistent across time;
  • a high level of confidence is placed on the data; and
  • the undertaking must be able to demonstrate that it recognises the data set as credible by using it throughout the undertaking's operations and decision-making processes.
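
These criteria are qualitative, but in practice they are usually enforced through automated checks. Purely as an illustration (the standard does not prescribe any implementation, and the record layout below is hypothetical), "free from material mistakes, errors and omissions" and "recording ... performed in a timely manner" might translate into rules like these:

```python
from datetime import date, timedelta

# Hypothetical claim records; the field names are illustrative only.
claims = [
    {"claim_id": "C001", "claim_amount": 12500.0,
     "loss_date": date(2013, 3, 1), "recorded_date": date(2013, 3, 4)},
    {"claim_id": "C002", "claim_amount": None,
     "loss_date": date(2013, 5, 9), "recorded_date": date(2013, 7, 1)},
]

MAX_RECORDING_LAG = timedelta(days=30)  # assumed tolerance for "timely" recording


def accuracy_issues(claim):
    """Return a list of rule violations found in a single claim record."""
    issues = []
    # "Free from material mistakes, errors and omissions"
    if claim["claim_amount"] is None:
        issues.append("claim_amount is missing")
    elif claim["claim_amount"] < 0:
        issues.append("claim_amount is negative")
    # "Recording ... performed in a timely manner"
    lag = claim["recorded_date"] - claim["loss_date"]
    if lag > MAX_RECORDING_LAG:
        issues.append(f"recorded {lag.days} days after the loss date")
    return issues


for claim in claims:
    problems = accuracy_issues(claim)
    if problems:
        print(claim["claim_id"], "->", "; ".join(problems))
```

The value of writing the rules down like this is that the same checks can be re-run every time the data is refreshed, which is exactly the kind of evidence of data quality the regulator asks for.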

Update – In October 2013, following an 18-month consultative process, DAMA UK published a white paper explaining 6 primary data quality dimensions.

1. Completeness
2. Uniqueness
3. Timeliness
4. Validity
5. Accuracy
6. Consistency

For more details, see my blog post, Major step forward in Data Quality Measurement.


Solvency II Standards for Data Quality – common sense standards for all businesses

When you visit your family doctor, you expect him or her to be familiar with your medical history.  You expect the information your doctor keeps about you to be complete, appropriate and accurate. If you’re allergic to penicillin, you expect your family doctor to know about it. Your health and well being depends on it.  Call it applied common sense.

You expect your family doctor to have information about you that is complete, appropriate and accurate

In running your business, you make business critical decisions every day.  You base your decisions on the information available to you, information that you would like to be complete, appropriate, and accurate. Call it more applied common sense.

The Solvency II Standards for Data Quality (EIOPA Consultation Paper 43) apply the same common sense.  They require insurance companies to provide evidence that their Solvency II submissions are based on data that is “complete, appropriate, and accurate”.

This is the first of a series of posts in which I plan to explore the “common sense” Solvency II standards for data quality, and I hope you will join in.

What is Solvency II? When you insure your family home, you would like to think your insurance company will still be around, and will have the funds to compensate you in the event of a fire, break-in or other disaster.  Solvency II seeks to ensure exactly that: it requires insurance companies to prove they have enough capital to prevent them from failing.  Given the backdrop of the ongoing world financial crisis, I believe this is a reasonable objective.

As you may have noticed, I am a fan of "common sense". Think about it… would you knowingly make a business critical decision on the basis of information that you know to be "incomplete, inappropriate or inaccurate"?  I think not.  So, how do you know that your information is "complete, appropriate and accurate"? The Solvency II Standards for Data Quality enable ALL organisations, not just insurance companies, to apply the same common sense standards.

As I add posts, I will link to them from here.

In my second post, I explore what exactly the terms “complete”, “appropriate” and “accurate” mean in the context of data quality for Solvency II, and what they mean for all organisations.

My third post explores the need in all organisations for Data Governance and Data Quality Management – Solvency II actually mandates Data Governance.

In my fourth post I ask "How do you collect your data?".  I ask this because common sense (and Solvency II) requires that you know how you collect your data, how you process it, and how you assess the quality of the data you collect and process.

My fifth post, Data Governance – Did you drop something?, explores the risks associated with data extractions.

Please join in this debate and share your experience…

How to deliver a Single Customer View

How to cost effectively deliver a Single Customer View

Many have tried, and many have failed, to deliver a "Single Customer View".  Well, now it's a regulatory requirement – at least for UK Deposit Takers (Banks, Building Societies, etc.).

The requirement to deliver a Single Customer View of eligible deposit holders indirectly affects every man, woman and child in the UK.  Their deposits, large or small, are covered by the UK Deposit Guarantee Scheme.  This scheme played a key role in maintaining confidence in the banking system during the dark days of the world financial crisis.

UK Deposit Takers must not only deliver the required Single Customer View data; they must also provide clear evidence of the data quality processes and controls they use to deliver and verify the SCV data.

The deadline for compliance is challenging.  Plans must be submitted to the regulator by July 2010, and the SCV must be built and verified by Jan 2011.

To help UK Deposit Takers, I have written an E-book explaining how to cost effectively deliver a Single Customer View.  You may download this free from the Dataqualitypro website:

While the document specifically addresses the UK Financial Services Requirement for a Single Customer View, the process steps will help anyone planning a major data migration / data population project.

If you are in any doubt about the need for good data quality management processes to deliver any new system (e.g. Single Customer View, Solvency II, etc.), read the excellent Phil Simon interview on Dataqualitypro about why new systems fail.

Common Enterprise wide Data Governance Issues #12: No Enterprise wide Data Dictionary

This post is one of a series dealing with common Enterprise Wide Data Governance Issues.    Assess the status of this issue in your Enterprise by clicking here:  Data Governance Issue Assessment Process

An excellent series of blog posts from Phil Wright (Balanced approach to scoring data quality) prompted me to restart this series.  Phil tells us that in his organisation, “a large amount of time and effort has been applied to ensure that the business community has a definitive business glossary, containing all the terminology and business rules that they use within their reporting and business processes. This has been published, and highly praised, throughout the organisation.” I wish other organisations were like Phil’s.

Not only do some organisations lack "a definitive business glossary", complete with business rules, as Phil describes above – some organisations have no Enterprise wide Data Dictionary at all.  Worse still, there is no appreciation within senior management of the need for an Enterprise wide Data Dictionary (and therefore no budget to develop one).

Impact(s):

  • No business definition, or contradictory business definitions, of the intended content of critical fields.
  • Over-dependence on a small number of staff with detailed knowledge of particular databases.
  • Incorrect or sub-optimal sources of required data are chosen, because the source is determined by personnel with expertise in specific systems only.
  • New projects, dependent on existing data, are left ‘flying blind’.  The impact is similar to landing in a foreign city, with no map and not speaking the language.
  • Repeated re-invention of the wheel, duplication of work, with associated costs.

Solution:

The CIO should define and implement the following policy (in addition to the policies listed for Data Governance Issue #10):

  • An Enterprise wide Data Dictionary will be developed covering critical Enterprise wide data, in accordance with industry best practice.
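
Purely as an illustration (the policy above does not prescribe a format, and every name, owner and system below is invented), a single Data Dictionary entry for a critical field might capture a business definition, an owner, the authoritative source, and the applicable business rules:

```python
# A minimal sketch of one Enterprise wide Data Dictionary entry.
# All names, owners and systems are hypothetical.
customer_date_of_birth_entry = {
    "field_name": "customer_date_of_birth",
    "business_definition": "The customer's date of birth, as evidenced by an identity document.",
    "data_owner": "Head of Retail Banking Operations",
    "authoritative_source": "Core Banking System - CUSTOMER table",
    "business_rules": [
        "Must not be in the future",
        "Customer must be at least 18 years old at account opening",
    ],
    "used_by": ["Anti Money Laundering screening", "CRM", "Regulatory reporting"],
}
```

Even a lightweight, centrally published collection of entries like this gives a new project a map of where the critical data lives and who to ask about it, instead of leaving it 'flying blind'.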

Does your organisation have an “Enterprise wide Data Dictionary” – if so, how did you achieve it?  If not, how do new projects that depend on existing data begin the process of locating that data?  Please share your experience.

The truth, the whole truth and nothing but the truth

I enjoyed the good-natured contest (i.e., a blog-bout) between Henrik Liliendahl Sørensen, Charles Blyth and Jim Harris. The contest was a Blogging Olympics of sorts, with Great Britain, the United States and Denmark competing for the Gold, Silver, and Bronze medals in an event called "Three Single Versions of a Shared Version of the Truth."

I read all three posts, and the excellent comments on them, and I then voted for Charles Blyth.  Here’s why:

I worked as a Systems Programmer in the '80s in the airline industry. Remarkable as it sounds, systems programmers in each airline used to modify the IBM-supplied operating system, which was known as the Airline Control Program (ACP) and later renamed Transaction Processing Facility (TPF).  Could you imagine each business in the world modifying Windows today? I'm not talking about a configuration change; I'm talking about an Assembler code change to the internals of the operating system. The late '80s saw the development of global "Computer Reservations Systems" (CRS systems), including AMADEUS and GALILEO.  I moved from Aer Lingus, a small Irish airline, to work in London on the British Airways systems, to enable the British Airways systems to share information and communicate with the new global CRS systems. I learnt very important lessons during those years:

  1. The criticality of standards
  2. The drive for interoperability of systems
  3. The drive towards information sharing
  4. The drive away from bespoke development

What has the above got to do with MDM and the single version of the truth?

In the '70s and '80s, each airline was "re-inventing" the wheel by taking the base IBM operating system and then changing it.  Each airline started with the same base, but then Darwin's theory of evolution kicked in (as it always does in bespoke development environments).  This worked fine as long as each airline effectively operated in a standalone manner, when connecting flights required passengers to check in again with a different airline, and so on.  That model was blown away by the arrival of global CRS systems. Interconnectivity of airline reservation systems became critical, and this required all airlines to adhere to common standards.

We are still in the bespoke era of Master Data Management.  This will continue for some time.  Breaking out of this mode will require a major breakthrough.  The human genome project, in which DNA is being 'deciphered', is one of the finest examples of how the "open source" model can bring greater benefit to all.  The equivalent within the data world could be the opening up of proprietary data models.  IBM developed the Financial Services Data Model (FSDM).  The FSDM became an 'overnight success' when BASEL II arrived: those financial institutions that had adopted the FSDM were in a position to find the data required by the regulators relatively easily.

Imagine a world in which the Financial Regulator(s) used the same Data Model as the Financial Organisations.

Naturally, such a model would not be set in stone.  There would be incremental improvements, with new versions published at regular intervals (perhaps yearly, every two years, or maybe every five years).

Back to the great "Truth" debate.  Charles' view most closely aligns with mine, and I particularly liked his reference to granularity – keep going until one reaches the lowest granularity required.  Remember, one can always summarise up, but one cannot take apart what has already been summarised.

Most importantly, I would like to thank Henrik Liliendahl Sørensen, Charles Blyth and Jim Harris for holding this debate.  Debates like this signal the beginning of the move towards a "Single Version of the Truth" – and this one has already led to a major step forward.  Dean Groves suggested on Henrik's blog that the word "version" be changed to "vision", and suddenly we had agreement: we all aspire to a "Single VISION of the Truth".

I look forward to many more debates of this nature.

You can see the results of the vote here.

Plug and Play Data – The future for Data Quality

The excellent IAIDQ World Quality Day webinar looked at what the data quality landscape might be like in 5 years' time, in 2014.  This got me thinking.  Dylan Jones' excellent article on The perils of procrastination made me think some more…

I believe that we data quality professionals need a paradigm shift in the way we think about data.  We need to make "Get data right first time" and "Data Quality By Design" such no-brainers that procrastination is not an option.  We need to promote a vision of the future in which all data is reusable and interchangeable – a world of "Plug and Play Data".

Everybody, even senior business management, understands the concepts of "plug and play" and reusable play blocks.  For "plug and play" to succeed, interconnecting parts must be complete, fully moulded, and conform to clearly defined standards.  Hence "plug and play data" must be complete, fully populated, and conform to clearly defined standards (business rules).

How can organisations “get it right first time” and create “plug and play data”?
It is now relatively simple to invoke cloud-based verification from any part of a system through which data enters.

For example, when opening a new "Student" bank account, cloud-based verification might prompt the bank assistant with a message like: "Mr. Jones' date of birth suggests he is 48 years old. Is his date of birth correct? Is a Student Account appropriate for Mr. Jones?"
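
As a minimal sketch of the kind of check such a verification service might run at the point of entry (the age range, function name and dates are assumptions for illustration, not any particular vendor's API), the logic could look like this:

```python
from datetime import date

# Assumed plausibility rule for illustration: a "Student Account" holder
# is normally aged between 16 and 30.
STUDENT_AGE_RANGE = (16, 30)


def verify_student_account(date_of_birth, today=None):
    """Return prompts for the bank assistant if the entered data looks implausible."""
    today = today or date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    prompts = []
    low, high = STUDENT_AGE_RANGE
    if not low <= age <= high:
        prompts.append(
            f"The date of birth entered suggests the customer is {age} years old. "
            "Is the date of birth correct? Is a Student Account appropriate?"
        )
    return prompts


# Example: the 48-year-old Mr. Jones from the scenario above.
print(verify_student_account(date(1961, 6, 15), today=date(2009, 11, 10)))
```

In practice, the same rule would sit behind the verification service and be called from every channel through which a new account can be opened, so the data is checked once, at the moment it enters the system.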

In conclusion:

We Data Quality Professionals need to educate both Business and IT on the need for, and the benefits of, "plug and play data".  We need to explain to senior management that data is no longer needed or used by only one application.  We need to explain that even tactical solutions within Lines of Business need to consider Enterprise demands for data, such as:

  1. Data feed into regulatory systems (e.g. Anti Money Laundering, BASEL II, Solvency II)
  2. Access from or data feed into CRM system
  3. Access from or data feed into Business Intelligence system
  4. Ad hoc provision of data to satisfy regulatory requests
  5. Increasingly – feeds to and from other organisations in the supply chain
  6. Ultimate replacement of application with newer generation system

We must educate the business on the increasingly dynamic information requirements of the Enterprise – which can only be satisfied by getting data “right first time” and by creating “plug and play data” that can be easily reused and interconnected.

What do you think?

Common Enterprise wide Data Governance Issues #11: No ownership of Cross Business Unit business rules

This post is one of a series dealing with common Enterprise Wide Data Governance Issues.  Assess the status of this issue in your Enterprise by clicking here:  Data Governance Issue Assessment Process

Business Units often disagree

Different Business Units sometimes use different business rules to perform the same task.

Within retail banking, for example, Business Unit A might use "Account Type" to distinguish personal accounts from business accounts, while Business Unit B might use "Account Fee Rate".
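
To make the discrepancy concrete, here is a small illustrative sketch (the field names, values and fee threshold are invented for illustration) of how the two rules can classify the same account differently:

```python
# Hypothetical account record; field names and values are illustrative only.
account = {"account_id": "A123", "account_type": "PERSONAL", "account_fee_rate": 0.25}


def is_business_account_unit_a(acct):
    """Business Unit A's rule: classification is driven by Account Type."""
    return acct["account_type"] == "BUSINESS"


def is_business_account_unit_b(acct):
    """Business Unit B's rule: any account charged a fee rate above 0.10 is a business account."""
    return acct["account_fee_rate"] > 0.10


print(is_business_account_unit_a(account))  # False - treated as a personal account
print(is_business_account_unit_b(account))  # True  - treated as a business account
```

One record, two contradictory answers to the same question – which is how the impacts listed below arise.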


Impact(s) can include:

  1. Undercharging of Business Accounts mistakenly identified as Personal Accounts, resulting in loss of revenue.
  2. Overcharging of Personal Accounts mistakenly identified as Business Accounts, which could lead to a fine or other sanctions from the Financial Regulator.
  3. Anti Money Laundering (AML) system generates false alerts on Business Accounts mistakenly identified as Personal Accounts.
  4. AML system fails to generate alert on suspicious activity (e.g. large cash lodgements) on a personal account misidentified as a Business Account, which could lead to a regulatory fine.
  5. Projects dependent on existing data (e.g. AML, CRM, BI) discover that the business rules they require are inconsistent.

Solution:
Agree and implement the following policy (in addition to the policies listed for Data Governance Issue #10):

  • Responsibility for resolving cross business unit business rule discrepancies lies with the Enterprise Data Architect.

For further details on Business rules – see Business Rules Case Study.

Your experience:
Have you faced a situation in which different business units use different business rules?   Please share your experience by posting a comment – Thank you – Ken.