Archive for 2011
Friday, December 9th, 2011
Do you wish your organization had more BI power users?
BI power users build complex reports, run statistical models, satisfy ever-changing regulatory reporting requirements, and deliver sophisticated analysis for your management team. These folks do the heavy data lifting at your organization. They are the data gurus; good ones are hard to find, and they don't work cheaply. Without a sizeable contingent of these brainiacs, you're stuck in a world of basic operational reporting. With them, you can rule the data that drives your business and sound decision making. You don't have to wish for power users. Create them. This blog series will explain how.
There are many types of thinkers working in your organization, and the people who make up your analytical community are no exception. Brain science splits learners into two camps – visual and logical – corresponding to the right and left hemispheres of the brain. The way to train your analysts is to cater to how each of their brains operates, bringing a little science into your tool selection, user enablement, and training.
What is the right approach? You could survey each analyst to discover their learning style, then craft an optimal learning path with customized toolsets and techniques. That approach is likely to be slow, painful, and expensive. Instead, tap into each analyst's natural learning style. Latch onto their brains.
What does a brain-centric approach to creating BI power users look like?
As a visual thinker, I found that SQL training threw me down a proverbial rabbit hole. I was set back months in my development as an analyst. Several years later, when I started training end users on tool use, I assumed everyone wanted to start building querying skills with visual tools. I was fooled – twice – and maybe you have been too. My predilection for visual tools is just as disabling a bias as a focus on pure SQL. We need to work with both camps. Truth be told, most of your users do prefer starting with visual tools. On the other hand, some of your users are already comfortable with visual tools and need to work with code to become more productive. Forcing a right-brainer into code too quickly, or a left-brainer into image-based analysis, is a mistake that stunts their growth.
The right mix of tools and support plays to both sides of the brain, enabling your users to grow at an optimal pace. Visual enhances code; code explains and reinforces visual. Access to both types of tools – as well as hybrid visual tools that display the underlying code – will please all your fledgling brainiacs. The better your development program accommodates their brains, the quicker you'll see results. The 'secret sauce' is recognizing the delicate dance between images and words that deepens understanding. Power users' brains thrive on a rich set of visual and coded tools.
I propose that the reason it is so difficult to grow BI power users is that we continue to ignore differences in how people think. We deploy tools based on data requirements, not people requirements, and certainly not thinking requirements. Don't ignore the differences. Grow your analysts incrementally by bringing their visual and logical sides into lockstep.
In my next blog, I'll dig deeper into this topic to explore how the right mix of tools and capabilities can help build that elusive BI power user community.
Friday, December 2nd, 2011
The best executed system testing happens prior to the project 'test phase.' If you think that's a catch-22, you're right. How can system testing happen prior to the completion of development, before the 'test phase' even begins? In fact, if you read my last six blogs and followed my lead, you already have the prerequisites in the form of carefully crafted system test sets built for rapid execution and validation. For more guidelines, read on.
Move the code into the test environment after it has been thoroughly system tested.
I’ve never understood why projects rush to get code into the test environment. Think of the constraints on our working environments. Development is owned and controlled by the developers. Test, on the other hand, is locked down, typically by the QA team (and by all rights should be). Working through functional canary tests in development is quicker, easier, and cheaper.
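To make the idea concrete, here is a minimal sketch of what a functional canary test might look like: a tiny hand-crafted input with a known expected answer, fast enough to run on every build in development. The table, columns, and business rule below are hypothetical, invented purely for illustration.

```python
import sqlite3

# A hypothetical functional canary test: load three hand-picked claim rows
# and verify one business rule end to end. It runs in well under a second,
# so it can be executed on every build in the development environment.

def load_claims(conn, rows):
    """Stand-in for the real ETL load step (hypothetical)."""
    conn.execute("CREATE TABLE claims (claim_id TEXT, amount REAL, status TEXT)")
    conn.executemany("INSERT INTO claims VALUES (?, ?, ?)", rows)

def test_rejected_claims_excluded_from_paid_total():
    conn = sqlite3.connect(":memory:")
    # Canary rows chosen so the expected answer is obvious by inspection.
    load_claims(conn, [("C1", 100.0, "PAID"),
                       ("C2", 250.0, "PAID"),
                       ("C3", 999.0, "REJECTED")])
    (total,) = conn.execute(
        "SELECT SUM(amount) FROM claims WHERE status = 'PAID'").fetchone()
    assert total == 350.0, f"expected 350.0, got {total}"

if __name__ == "__main__":
    test_rejected_claims_excluded_from_paid_total()
    print("canary test passed")
```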
Execute the vast majority of system tests on the development integration platform.
A common practice is to migrate work from developer "sandboxes" to a development "integration" area, where code is assembled into a BI solution. The integration area is the ideal test platform. That move is at least an order of magnitude easier than moving the code out of your freedom-loving development environment into the locked-down test environment.
Make system testing a formality.
Development of system test materials, scripts, and the test harness needs to coincide with the arrival of unit code in the development integration environment. Require the development team to demonstrate successful system testing within the development integration area BEFORE moving the code into the test area. The goal is to make system testing a formality. (Mindful readers will note that I stated just the opposite in part #5. The rule only applies if you complete the majority of system testing in development.)
Regression test by design.
Too many projects focus on passing failed tests while ignoring previously passed tests. The fallacy is that testing is complete when code fixes pass a retest. The reality is that code fixes can break code that ran successfully in the past, causing test failure. Every set of code fixes mandates a test rerun. Fortunately, our automated and fast-running functional canary test datasets mean you can afford lots of test cycles (see part #6).
Every incremental code change requires the addition of a small set of tests to the functional canary test dataset. The entire test dataset runs together to verify that the incremental change and all prior code perform as expected. Each functional canary test set run validates the entire code base. This ‘black box’ treatment of the code (see part #2) ensures that every logic path – old and new – is retested each time the code is changed.
This is regression testing by design.
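As a sketch of what regression by design might look like in practice – assuming, purely for illustration, that the canary cases accumulate in a simple CSV file and the code under test is reduced to a single function – every run replays the full case history:

```python
import csv
import sys

# Hypothetical regression driver: every canary case ever added lives in one
# CSV file (case_id, input, expected). The whole file is replayed on every
# code change, so old logic paths are retested alongside the new ones.

def transform(value: str) -> str:
    """Stand-in for the code under test (the real BI transformation)."""
    return value.strip().upper()

def run_all_cases(path: str) -> bool:
    failures = []
    with open(path, newline="") as f:
        for case in csv.DictReader(f):
            actual = transform(case["input"])
            if actual != case["expected"]:
                failures.append((case["case_id"], case["expected"], actual))
    for case_id, expected, actual in failures:
        print(f"FAIL {case_id}: expected {expected!r}, got {actual!r}")
    print(("PASS" if not failures else "FAIL") + f": replayed all cases in {path}")
    return not failures

if __name__ == "__main__":
    sys.exit(0 if run_all_cases("canary_cases.csv") else 1)
```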
System testing expands to encompass new rules as the code base matures, and the functional canary data sets and scripts grow in scope and precision along with it. The code base and system testing mature in lockstep – all outside the confines of the test platform.
Rerun your test scripts on the system test environment to validate code promotion.
Typical project plans deliver system test scripts and data sets just in time for system testing. The initial migration from development to test uncovers defects related to migration. Errors surface in the setup of system test data sets and test scripts – even if the code is working perfectly. This happens because the migration process, the test setup, and the code base are not isolated from one another. Discovering the source of issues is going to take time – and yet more migrations. This is an expensive way to validate testing and migration.
Let's say the code passes all system tests prior to migration to the system test environment. You will know (in development) how well the code is working. The test scripts will be debugged before they reach the test environment. After promotion, repoint the test harness to the new environment and the right datasets, and you're ready to rerun system testing on the test box. The system tests will now focus on the validity of migration and the setup of the test environment. Any differences can be attributed to those issues, not bugs in the code or test scripts. This can cut the time it takes to validate the migration process from weeks to days. It is a cheap and expeditious way to validate migration.
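A minimal sketch of the repointing step, assuming the harness reads its connection and data-set locations from a configuration mapping (the environment names and settings below are hypothetical):

```python
import json
import sys

# Hypothetical harness configuration: the same test scripts run unchanged in
# either environment; only this mapping tells the harness where to point.
ENVIRONMENTS = {
    "dev_integration": {"db_url": "postgresql://devhost/warehouse",
                        "data_dir": "/data/canary/dev"},
    "system_test":     {"db_url": "postgresql://testhost/warehouse",
                        "data_dir": "/data/canary/test"},
}

def harness_config(env_name: str) -> dict:
    """Return the connection and data-set settings for the chosen environment."""
    if env_name not in ENVIRONMENTS:
        sys.exit(f"unknown environment: {env_name}")
    return ENVIRONMENTS[env_name]

if __name__ == "__main__":
    # After promotion, the only change is the environment name passed in; any
    # new failures then point at migration or setup, not the code or the tests.
    print(json.dumps(harness_config("system_test"), indent=2))
```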
Success is in the timing and coordination.
System testing done this way requires some TLC and more attention from management. Development of the test scripts and the test harness needs to be timed with the development and integration of the solution components. More coordination is required among the developers writing code, creating test cases, and building test data.
Rethinking system testing
The testing process described in the seven parts of this blog may seem counter-intuitive, given that what is proposed is the completion of most system testing before code ever touches the test box. You'll need to rethink how your team conducts testing, consider the content and timing of test scripts and data sets, offer guidance to the development and testing teams, and find a skilled developer to tie the tests and validations together with appropriate automation (see part #6). None of this is beyond the skills of an experienced development team working with a good project manager.
This approach to system testing can help you hit project objectives and deliver the project sooner, earning kudos from management and the end user community.
Good luck with your system testing efforts.
Please share your thoughts about this blog, or relate your own experiences.
- Jim Van de Water contributed to this blog.
Monday, November 28th, 2011
My wife and I are avid fans of the reality show ‘Kitchen Nightmares.’ In case you haven’t seen it, the show’s intimidating host, Chef Gordon Ramsay, is called in to help restore the image and operations of failing restaurants. Ramsay’s formulaic approach is to rework the menu and atmosphere, ignite the enthusiasm of kitchen and wait staff, restore the faith of owners, and re-launch the restaurant – all in the course of a few intense days.
Having seen so many episodes of the show, my wife and I look on in disbelief as the distraught owner goes through yet another exercise in cleaning up the kitchen, tuning the menu, building up team spirit, and bringing the restaurant décor and operations up to 21st century standards. At this point, isn’t the scope of the upcoming effort apparent the minute Chef Ramsay enters the front door?
The Chef points out the subtle and the substantial to struggling owners and workers blinded by years of doing the same thing over and over again.
We in the BI world could use a Chef Ramsay – critic, comforter, and confessor – an oracle and master of all things data warehousing. Think of the messaging the Chef might bring to our workplaces. After years of operating the same way, are we inured to problems and opportunities? What are our resident experts missing that is simply inexcusable? Are the issues painfully obvious?
Imagine the Chef as he steps into your data warehousing shop. Would he advise you to clean up your data quality? Would he admire or be disturbed by your service levels? Would he be proud to say that you deliver 'the most amazing' reports your modern tools are capable of? Would he provide counseling to help rebuild team spirit? Would he shake up the management team? Would he advise that your data warehouse needs a complete makeover and re-launch?
Chef Ramsay achieves the miraculous in just a couple of days. It would be unrealistic to expect our data warehouse issues to be solved as quickly, but the Chef offers a fresh perspective that can illuminate our biases.
Friday, November 18th, 2011
Many organizations struggle to measure the value of data governance. Healthcare payers are no exception. The good news: there’s no shortage of data points to measure the impact and effectiveness of data governance.
The right combination of proven quantitative and qualitative metrics will help improve your information and drive continued executive support for your program. Please enjoy our complimentary guide.
DOWNLOAD: The Ultimate Guide to Data Governance Metrics for Health Payers: 30+ Ways to Discover and Score Success in 2012
Monday, November 14th, 2011
Ajilitee's newly released "Ultimate Guide to Data Governance Metrics for Health Payers" dives into 30+ data governance metrics from both quantitative and qualitative vantage points. But today I'm exploring one subset of that topic: how to measure the success of a Data Governance Council initiative.
We’ve said this often, but it bears repeating: data governance programs tend to fall short of expectations because they wind up as tactical data quality initiatives that address accuracy and consistency in silos. They also lack an effective governing body to manage data ownership, lineage and accountability across the enterprise.
We believe that establishing a Data Governance Council is the key to transforming a data governance program into real business value. An informed and active Data Governance Council will tackle inaccurate, inconsistent, and incomplete data holistically through policies and cultural change at the leadership level. And just as a data governance program establishes metrics on its data quality and performance measurements on the data stewards, it's equally important to set performance goals for the Data Governance Council.
In the Health Payer space, metrics are driven by corporate drivers and key performance indicators. Typical drivers include the following:
- Cost avoidance and cost containment
- HIPAA, Privacy and/or regulatory compliance
- Fraud detection
- Constraints management
- Products and Plans time-to-market
A Data Governance Council composed of upper management will be engaged and committed if its members are presented with demonstrated successes that address these drivers. This includes continuous data quality work that the governance program helps embed upstream rather than perform sporadically downstream. Also realized are the benefits of improved transparency, auditability, and data lineage, which are essential to compliance with government regulations such as HIPAA.
From its inception, the members of the Data Governance Council should be reminded that they are part of this group to serve as proactive change agents, and that ability to act as change agents should be measured. To that end, we recommend these five key metrics to measure the Data Governance Council and its members (a sample scorecard sketch follows the list):
- METRIC 1: Advocacy success measure.
- Getting each Council member to recognize that their role is not a passive one. To remain on the Council, they are expected to be "data integrity proselytizers" – e.g., identifying a steward for their line of business and speaking at their team meetings about new policies, progress, changes, and so forth.
- METRIC 2: Meeting success measure.
- Demonstration of commitment. This can be accomplished by an early vote establishing a policy that a Council member can and will be "disinvited" for lack of attendance.
- METRIC 3: Each Council Member must bring a Data or Process Issue request to the Council.
- Demonstration that the Council member understands what constitutes a process or data issue that warrants attention from the DG Council. Members must be willing to push the skeletons in their own business areas in front of their peers for resolution.
- METRIC 4: Number of Policies Established.
- Enterprise Policies serve as the basis for prying systemic data issues away from the silo-minded lines of business. In the first years, typical policies include defining the list of governed data elements; approving Unique Identifier data elements (e.g., Unique Provider, Unique Institution, Unique Member); establishing USPS Address Standardization; conforming Provider Specialty Taxonomy to CMS labels.
- METRIC 5: Maturity Model measure.
- The Data Governance Council should demonstrate proficiency in its role before tackling the more complex topic of a Data Governance 5-Year Maturity Model. But by the end of Year 1, maturity targets should be set and progress on the Maturity Model tracked for each succeeding year.
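As a sketch of how the member-level measures (Metrics 1 through 3) might roll up into the Scorecard reviewed at each meeting – the member names, field names, and thresholds below are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical per-member rollup of the member-level Council metrics.
# Field names and pass thresholds are illustrative assumptions only.

@dataclass
class MemberScore:
    name: str
    advocacy_events: int    # Metric 1: stewards named, team talks given, etc.
    meetings_attended: int  # Metric 2
    meetings_held: int
    issues_raised: int      # Metric 3: data/process issues brought to Council

def review(member: MemberScore, min_attendance: float = 0.8) -> str:
    attendance = member.meetings_attended / member.meetings_held
    flags = []
    if member.advocacy_events == 0:
        flags.append("no advocacy activity")
    if attendance < min_attendance:
        flags.append(f"attendance {attendance:.0%} below {min_attendance:.0%}")
    if member.issues_raised == 0:
        flags.append("no issues brought to Council")
    return f"{member.name}: " + ("OK" if not flags else "; ".join(flags))

if __name__ == "__main__":
    for m in (MemberScore("Claims VP", 3, 10, 12, 2),
              MemberScore("Provider Ops Lead", 0, 7, 12, 0)):
        print(review(m))
```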
At each Council meeting, we advocate reviewing each of these Scorecard metrics. Everyone sees the contributions of their colleagues. It's important to have this level of visibility and openness – peer pressure works wonders! And we stress not being "locked in" to a particular set of members. It's not uncommon to realize that another representative needs to be added, or that someone cannot make the necessary commitment and should be replaced.
Finally, measure the business impact of the Data Governance Council and publish the results on an enterprise data governance internal website or SharePoint. This demonstrates commitment to improved data integrity at the highest executive ranks.