
The Agile Incubator Blog

Big data technologies in healthcare insurance (payers): NoSQL and MDM – Part 1

We were working with a client around Fraud, Waste and Abuse (FWA) recently, and we needed to clean up the client’s Provider data to help us track longitudinal changes in fraud behavior. Published reports suggest that FWA accounts for, minimally, $21B in Medicare payments that should never have been made. That’s a lot of money. I’ll blog more on FWA, analytics and how Ajilitee could help you under our Managed Analytics service offerings, but in this post I want to focus on just one small aspect of managing Provider data at Payer companies.

Many healthcare insurance companies are not known for rapid innovation. But times have changed. The need to manage members using Customer Relationship Management (CRM) technologies has grown greatly, and many Payers have started their CRM efforts. But these CRM technologies have been in use in other industries for 20 years. The technologies needed to support CRM analytics and member point-of-care/point-of-service interactions are different from what many Payer organizations have in place today.

Where do big data technologies fit in? We have been asked that question here at Ajilitee, and as part of working on our FWA products and internal R&D innovation, we have been using and evaluating these technologies for a while.

Big data technologies span many areas and have interesting names: Hadoop, HBase, Hive, Pig, NoSQL, Mahout and more. NoSQL and Hadoop in particular are core technologies that other layers build on. For example, Hive and Pig build on Hadoop, and Hadoop itself can build on NoSQL technologies. I will not repeat all of these concepts in this post because a lot of content has already been written on these technologies. From here on, I’ll assume you are familiar with some of these terms or can look them up easily.

Guiding Thoughts

Here are a couple of principles that can help guide the conversation:

  • Deep insights: The more you understand your domain-specific problem, the easier it is to adapt new technologies to solve it in novel ways, or to recognize the trade-offs you are implicitly making with current technologies. You can find novel uses that solve a problem in different ways. Think Geoffrey Moore and disruption.
  • Delamination: By rethinking the layers of technology you use today, it’s possible to pick and choose the layers and how they combine to create new solutions. Many NoSQL packages do not include compute engines within their data access and storage solution, so solutions such as Hadoop must be layered on top of the NoSQL database to obtain compute processing. There are many variations on this thought, but simply thinking “delamination” can be helpful for innovation.
  • Do something different: Unless you are really, really smart, a great way to learn how to use new technologies is to try something and fail multiple times while solving a problem. This learning by doing is key to innovation. Of course, you have to learn from your mistakes each time; failing by itself is pointless.

Scanning the Big Data Technical Landscape

There are many new technologies in the big data world. Let’s look at NoSQL. Many people question whether these technologies are mature enough for production use, and whether they help solve business problems faster, cheaper or better in some way.

Big data and NoSQL conversations usually start by explaining issues found in managing and serving the massive amounts of data needed for websites. But the technologies can seem confusing at times, and we begin to wonder whether they apply to our problems, especially those in the healthcare world.

NoSQL technologies are often described as having the following properties:

  • No SQL: This means there is no SQL!
  • Schema-less: There is no schema!
  • Eventually consistent: Forget ACID. Forget transactions. Eventually your data is consistent.
  • Fault tolerant, scalable, distributed: A whole set of really great architectural characteristics that sound good, though you are not always sure they apply to you.

Once you start looking at the big data technologies, you are immediately struck by the following:

  • With some NoSQL technologies you actually do define a schema and indicate a column’s type and name. Some NoSQL technologies also need to know how to sort columns, so they need to know how to compare keys or values. That sounds like a schema!
  • With nearly all big data technologies, someone has already written a [insert technology name here] Query Language so you can run queries. It sounds like everyone still wants *SQL.
  • With some NoSQL technologies, you have to specify properties such as consistency, which determine whether you can immediately get the value you just wrote back out of the database. While not the full definition of ACID, it starts sounding close!
  • There is parallelism everywhere. The file system is parallelized! Don’t forget that the job flow is parallelized as well (assuming it fits a specific data-parallel processing model). Everything is about scale-out: add more nodes and everything keeps running, the system stays up, and everyone has fast, guaranteed access!
  • There appear to be several interfaces into the database, some of which require programming in a language you may not be familiar with, i.e. not SQL! How do you even load data?! You almost feel like you are programming at the lowest level of database programming possible. Wait a second, which layer are you programming? The filesystem level? The map-reduce level? Or both?
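The “which layer am I programming?” question becomes clearer with a concrete sketch. Below is a toy, framework-free rendering of the map-reduce model that sits above the storage layer; the function names are our own, not any particular framework’s API, but the map/shuffle/reduce shape is the standard one:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to each record, yielding (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group values by key -- a real framework does this for you."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key's list of values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy job: count words across "rows" pulled from some storage layer.
def mapper(line):
    for word in line.split():
        yield (word.lower(), 1)

def reducer(word, counts):
    return sum(counts)

rows = ["Provider claims data", "provider demographics", "claims claims"]
result = reduce_phase(shuffle(map_phase(rows, mapper)), reducer)
# result is {"provider": 2, "claims": 3, "data": 1, "demographics": 1}
```

The point of the sketch: the mapper and reducer are the only parts you write at the map-reduce layer, while the storage layer (filesystem or NoSQL database) simply feeds in records. Confusing the two layers is easy because some tools make you program both.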

This all seems very confusing, so let’s think through the issues. We want to avoid having everything look like a nail just because we have a big data hammer.

Healthcare Example

In the Payer space, Master Data Management (MDM) is finally becoming a component of the business and architectural landscape. In the Payer world, MDM means managing Providers, Members, Contracts, Products and other business entities that you can often touch and feel, or that are generally considered non-claims data. Other types of Payer data include “event” data: data generated by interactions with Members through sales, service and care-oriented interactions. Of course, there is also claims data. Claims data is the largest source of data, followed by “event” data, then MDM data. Are some of the big data technologies relevant even for small datasets such as MDM datasets? Small in this case means a few million rows, and typically much less. Of course, this links back to our FWA problem statement at the beginning of the post and the need to clean up our Provider list to perform fraud analysis. We’ll illustrate some of our points with industry specifics.

  • Schema-less:
    • The NPPES Provider list from CMS changes multiple times a year. The list contains all of the Medicare Providers and some, but not all, of their demographics. The columns of data change, although not frequently. New providers are added or removed as appropriate. Provider data inside a Payer typically originates in several systems, sometimes up to 20. So it would seem that a technology that claims to be “schema-less” would be useful. But schema-less does not mean that you do not need to specify data types. You have to specify the format somewhere so external tools can use the data. The NPPES file has several sub-entities in it, like addresses and other codes indicating the Provider’s specialty or whether the Provider is an individual or an organization. Shouldn’t we pull these sub-entities out and make them their own tables? Shouldn’t we also try to specify where and how the data should be loaded so it can be accessed efficiently, perhaps by using table partitions, striped volumes or other typical database designs? These are normal database design concepts.
    • Part of the value of being schema-less is that you tend to concentrate data together into denormalized structures and use it to answer a smaller set of business questions, such as “what data changed between NPPES files each month?” And the data you load may be very dirty, so let’s load it all as strings, then convert the data in the database itself. We don’t have to work too hard to specify types, though we must specify some. We can also skip detailed table design because most NoSQL databases are designed to scale out. Big data can help us push aside this operational complexity.
    • MDM data changes over time. For example, you may choose to append one external data vendor’s Provider demographic data one year, then switch vendors the next. That’s a whole new set of data structures in the traditional world. Being schema-less allows us to manage data that changes over time without having to reload or re-baseline to achieve acceptable performance. Hence, you can evolve the schema more easily, and that’s a great reduction in operational complexity.
  • No SQL: We clearly need to write a query to determine what changed between different NPPES file releases. We have to write a query, so the entire claim of NoSQL must be false! The answer is more subtle than that. The claim of “no SQL” is really one of not having many characteristics of traditional RDBMSs built into the database layer. For example, you will not see NoSQL databases implementing referential integrity through SQL constructs such as foreign keys. You do have to specify primary keys for some NoSQL databases, just to help with managing the data. In fact, many systems today, whether a data warehouse or a transactional system, actually implement integrity in the processing layer above the database. This is neither wrong nor right; it is just where integrity often lives. Hence, by saying “no SQL,” you are really saying that the data architecture is one where the data is more concentrated, where integrity is implemented in a processing layer rather than the database, and where the data access interface makes as few assumptions as possible about the structure of the data.
    • There is another implication of schema-less that is less often recognized. Because NoSQL databases essentially delaminate the database stack to some degree, computational processing occurs outside the storage layer. While NoSQL creates uniform access performance by keeping the interface simple and scaling out, it does not allow computations to be automatically pushed down to an individual node for parallel processing. That’s where Hadoop steps in. By teasing the computing part away from the data access layer, you now have to choose where computing occurs. In the case of Hadoop, that processing can occur on the node where the data lives (there is a Cassandra+Hadoop integration layer), or you can pull the data out, relying on uniform access performance to avoid overloading the compute server. This also means there are really no stored procedures in NoSQL databases.
  • Eventually Consistent: In our specific case, eventually consistent is fine. Since we are initially loading, deduplicating and cleansing the data, that’s not a big deal. But…
    • Let’s also think through the case of Provider MDM. In an enterprise MDM system, all transactional systems should reference the MDM system, in real time, to obtain authoritative data when they need it. The MDM data should be consistent. However, even in Payers today, the MDM data is not immediately consistent. There is an acceptable lag between one transactional system authoring some master data and another system being able to access it. Typically, the lag is a few hours to a few days. In fact, if we look at NoSQL databases like Cassandra, it’s quite possible to improve the time to consistency at a significantly lower cost. For example, a social media site can tune its consistency, which means it can tune how fast you see your new friend’s post. You will also want to tune the consistency for MDM, scaling it up or down. You can do this in NoSQL technologies without incurring additional development time or complexity, all using the same database. That’s huge and compelling. Because enterprise MDM makes the MDM system an operational imperative, you have almost immediately solved some very vexing architecture problems at an incredibly inexpensive cost point.
  • Cool architecture: The previous bullets have already pointed out the need for scale and robustness, so I will not repeat that here. But an additional thought is worth pointing out. If you think about the data access patterns, where the architecture concentrates on an MDM hub that serves up authoritative data to many, many consuming applications (many reads, few writes), all happening in relatively real time, then scaling and fault tolerance are actually key. In the MDM vision landscape, cool architecture is really important, and your MDM hub looks more and more like a website serving up data. You need a transactional ODS. It’s also important to realize that you can only take advantage of the cool architecture if the other parts of your architecture are also simplified. Your mileage may vary if all you are doing is plugging new technology into the same old landscape without any changes anywhere.
    • Because a cool architecture can be delaminated, we have to plan for how computations (queries) will be executed. You cannot automatically have computations pushed down to a node without using Hadoop or something similar; otherwise all computation and IO gets throttled on the node you issue the query from. That’s one area of review and choice you have to think through, and one place where Hadoop, Hive and Pig try to help. Other database engines that distribute both the data and the computations might make this much easier, but you may have to make other architecture choices to use those tools. Think deeply and carefully about cool architecture.
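The “load everything as strings, then convert in the database” approach from the schema-less discussion can be sketched in a few lines of Python. The column names below are illustrative only, not the actual NPPES file layout:

```python
import csv
import io

# A stand-in for a raw NPPES-style extract: every field arrives as a string.
raw = io.StringIO(
    "npi,entity_type,enumeration_date\n"
    "1234567893,1,05/23/2005\n"
    "1245319599,2,07/01/2006\n"
)

# Step 1: load everything as strings, keyed by NPI -- no up-front typing,
# no table design, just concentrated, denormalized rows.
rows = {r["npi"]: r for r in csv.DictReader(raw)}

# Step 2: convert only the columns a given business question needs,
# after the data is already loaded.
for r in rows.values():
    r["entity_type"] = int(r["entity_type"])  # 1 = individual, 2 = organization
```

This is the operational simplification the bullet describes: dirty data loads without a schema fight, and typing is applied lazily, per question, rather than globally up front.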
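The tunable-consistency point can also be made concrete. Here is a toy model of quorum-style reads and writes over N replicas; it is our own sketch, not Cassandra’s API, but it illustrates the standard rule that a read consulting R replicas overlaps a write acknowledged by W replicas whenever R + W > N:

```python
class Replica:
    def __init__(self):
        self.value, self.version = None, 0

class TunableStore:
    """Toy N-replica store with per-request read/write consistency."""
    def __init__(self, n):
        self.replicas = [Replica() for _ in range(n)]
        self.clock = 0  # monotonically increasing write version

    def write(self, value, w):
        # Only the first w replicas acknowledge before the write "returns";
        # the rest would catch up later (eventual consistency).
        self.clock += 1
        for replica in self.replicas[:w]:
            replica.value, replica.version = value, self.clock

    def read(self, r):
        # Poll r replicas (the *last* r, to model a worst-case pick) and
        # return the newest version seen among them.
        polled = self.replicas[-r:]
        return max(polled, key=lambda rep: rep.version).value

store = TunableStore(n=3)
store.write("Dr. Smith, 123 Main St", w=2)

fresh = store.read(r=2)        # R + W = 4 > N = 3: read set overlaps write set
maybe_stale = store.read(r=1)  # R + W = 3, not > N: this read can miss the write
```

Dialing R and W per request is exactly the knob the bullet describes: the same database serves a fast, possibly stale social-media-style read and a strongly consistent MDM-style read, with no extra development complexity.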

So, based on some deeper thinking, it appears that the big data and NoSQL world can offer something of value even for a Provider MDM problem that seems like an ill fit to begin with.


It appears that if the problem you are trying to solve is important enough, there is real benefit to using these technologies in the right mix and in the right proportions relative to your existing architecture. They are viable and, based on our experience at Ajilitee, can be made production-ready. In some cases, they can dramatically reduce operating complexity despite their seeming lack of maturity around tooling. In many Payers, reducing operating complexity is a huge win.

In the next post, we will demonstrate learning by doing, using big data technologies on larger Provider datasets and common healthcare analytical processing patterns. I’ll also return to the FWA theme.

As a treat, John Bair, our CTO, is speaking at TDWI’s Cool BI Forum in Chicago on May 8. He’ll be talking about these technologies and how they can help you. His talk is based on direct experience building products and solutions for our clients.