
LIMS Master Data Best Practices Part 3: Maintenance and Scalability

CATEGORY
Lab Informatics

DATE
December 20, 2019


Master data design has a major impact over the lifecycle of a LIMS, as nearly every piece of functionality in the system revolves around it. One of the most important aspects of any LIMS implementation is designing the master data so that it is easy to maintain and scale as the organization grows and business needs change. Key benefits of configuring your master data to be maintainable and scalable include:

  • Easier to add and/or modify master data down the road
  • Increased system efficiency and reliability
  • Future system enhancements are less resource intensive
  • Better management of large volumes of data
  • Increased user acceptance
  • Increased ROI

In short, focusing on maintainability and scalability when configuring your master data really helps improve the lifespan and usability of your LIMS. In this blog, we will provide some best practice tips on how to set up master data so it will be easy to maintain and scale as your organization grows and the system matures.

Configuring Master Data for Maintainability

Once a LIMS is configured, users often must live with the master data rules set during configuration. While change control can be used to update the configuration, sometimes the process is so cumbersome that it isn't worth the effort, or the changes are so numerous that the system never reaches a finalized state. The tips in this section are the ones we have found most helpful during configuration for making master data easier to create and update as the system evolves.

Put system reserved words and system-specific rules where they can be found. Every LIMS has reserved keywords that can't be used as data names. These keywords are usually buried in the installation documentation and are hard to find. In addition, every LIMS has its own conventions for special characters and for uppercase and lowercase letters. Instead of remaining part of the "tribal knowledge" passed down through system administrators, these reserved words and system-specific rules should be recorded in the master data document, where they will be easy to find long after the initial installation is complete.

Create naming conventions based on business structure. Use naming codes that refer to your business processes or structure, such as site, product specification or variation, lab type, or manufacturing line. This creates a uniform system that changes less often and reduces the overall amount of static data.

For example, let's say you have a multi-site LIMS deployment and need to perform a pH check on manufacturing lines:

Good assay name: SF03-L1PH (San Francisco site Building 3 – Line 1 pH)

Bad assay name: LINE3PH (line 3 pH)

If the specification for the product changes, the good assay name will tell you which assay needs to be updated. With the bad assay name, there is no way to tell.
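
A convention like this also lends itself to automated checks. Below is a minimal Python sketch of parsing and validating assay names against the SF03-L1PH pattern above; the regular expression and its parts (site, building, line, test) are our illustrative assumptions, not a feature of any particular LIMS.

    import re

    # Assay names like "SF03-L1PH": two-letter site code, two-digit building,
    # "-L" plus the line number, then the test code. Illustrative assumption.
    ASSAY_NAME = re.compile(
        r"^(?P<site>[A-Z]{2})(?P<building>\d{2})-L(?P<line>\d+)(?P<test>[A-Z]+)$"
    )

    def parse_assay_name(name: str) -> dict:
        """Return the business-structure parts encoded in an assay name."""
        match = ASSAY_NAME.match(name)
        if match is None:
            raise ValueError(f"{name!r} does not follow the naming convention")
        return match.groupdict()

    print(parse_assay_name("SF03-L1PH"))
    # {'site': 'SF', 'building': '03', 'line': '1', 'test': 'PH'}
    # parse_assay_name("LINE3PH") would raise ValueError: no site or building.

A script like this can scan all existing assay names and flag any that drift from the convention before they accumulate.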

Use references in naming conventions. It is best to avoid naming master data after a previous LIMS or a specific instrument. Once that system or instrument no longer exists, the name loses its meaning. Instead, add a field or lookup table to use as a reference.

Good assay name: Chem-8260-GCMS (Chemistry dept – EPA method number – instrument type reference)

Bad assay name: SL8260-MS123 (Sapphire LIMS method 8260 – Mass spec #123)
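
A minimal sketch of the lookup-table approach follows; the dictionaries and field names are hypothetical stand-ins for LIMS tables. The assay name carries only stable references (department, method, instrument type), while the specific instrument lives in a separate row that can change without renaming anything.

    # Assay master data keyed by a stable, reference-based name.
    assays = {
        "Chem-8260-GCMS": {
            "department": "Chemistry",
            "method_reference": "EPA 8260",  # stable external method reference
            "instrument_type": "GC-MS",      # instrument type, not a serial number
        },
    }

    # If MS123 is retired, only this assignment changes; the assay name does not.
    instrument_assignments = {
        "Chem-8260-GCMS": "MS123",
    }

    def current_instrument(assay_name: str) -> str:
        return instrument_assignments[assay_name]

    print(current_instrument("Chem-8260-GCMS"))  # MS123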

Use summary analyses. Where instrument interfaces such as Empower are being used, analyses should be separated into a raw data analysis and an independent reporting or summary analysis. This setup provides a few key benefits (a sketch of the split follows the list):

  • Data integrity is maintained because the method can be locked down. Analysts still have the ability to work with the data in the reporting or summary analysis, but the raw data itself is never altered.
  • It provides flexibility for method changes. By having a separate method analysis, you only need to change one method instead of several if an update is needed due to an instrument change or a new instrument.
  • Having a separate raw data analysis frees the instrument from being held while the raw data is being analyzed. Additional replicates can be run, the instrument can be taken offline for maintenance, or another run can be set up.
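
As a rough illustration of the split, the sketch below keeps the raw analysis immutable while the summary stays editable. The classes are our own simplification; a real LIMS models this with its product-specific analysis objects.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)  # frozen: raw instrument results cannot be modified
    class RawAnalysis:
        instrument: str
        replicates: tuple    # replicate values exactly as reported

    @dataclass
    class SummaryAnalysis:
        raw: RawAnalysis
        excluded: set = field(default_factory=set)  # analyst decisions live here

        def mean(self) -> float:
            values = [v for i, v in enumerate(self.raw.replicates)
                      if i not in self.excluded]
            return sum(values) / len(values)

    raw = RawAnalysis(instrument="GC-MS", replicates=(7.01, 7.03, 9.98))
    summary = SummaryAnalysis(raw=raw)
    summary.excluded.add(2)          # exclude an outlier in the summary only
    print(round(summary.mean(), 2))  # 7.02; the raw replicates are untouched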

Try not to tie your metadata to a name; instead, use a field. When a defined name is used, it must be hard-coded into the system, which is time consuming for developers during initial setup. Any change made after the system is configured must then be done by a developer and must go through change control and possibly re-validation. By creating fields instead, changes can be made in the front end of the system, and the change control process is much easier.
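
The hypothetical sketch below makes the contrast concrete (the sample types and storage conditions are invented for illustration): the first function bakes the rule into code, while the second reads it from a master data field that an administrator can edit through the front end.

    # Hard-coded: changing a storage condition means changing (and re-validating) code.
    def storage_condition_hardcoded(sample_type: str) -> str:
        if sample_type == "STABILITY-25C":
            return "25C/60%RH"
        raise KeyError(sample_type)

    # Field-driven: the same rule lives in master data an administrator can edit.
    sample_types = {
        "STABILITY-25C": {"storage_condition": "25C/60%RH"},
        "STABILITY-40C": {"storage_condition": "40C/75%RH"},
    }

    def storage_condition(sample_type: str) -> str:
        return sample_types[sample_type]["storage_condition"]

    print(storage_condition("STABILITY-40C"))  # 40C/75%RH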

Custom tables and fields should all start with a prefix. This separates custom tables and fields from what is pre-defined in your system. A prefix can be used to group related objects based on your master data map, which is very useful for data review or a multi-site deployment strategy. A custom table prefix could designate tables designed for a specific site or business process. Some of the benefits of using a prefix (see the sketch after this list) are:

  • If a custom table has a prefix, you don’t need to create a prefix on the fields inside the table.
  • It creates a recognizable and uniform standard that can be re-used.
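
A quick sketch of how a prefix pays off in practice; the "X_" prefix and the table names are illustrative assumptions, not a vendor convention.

    from collections import defaultdict

    tables = ["X_SF_STABILITY", "SAMPLE", "X_SF_TRENDING", "X_NY_STABILITY", "TEST"]

    # The shared prefix separates custom tables from vendor tables, and the
    # second token groups them by site, mirroring the master data map.
    custom_by_site = defaultdict(list)
    for name in tables:
        if name.startswith("X_"):
            _, site, obj = name.split("_", 2)
            custom_by_site[site].append(obj)

    print(dict(custom_by_site))
    # {'SF': ['STABILITY', 'TRENDING'], 'NY': ['STABILITY']}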

Work with the database administrator to manage growing data. Over time, your data will grow. It's easy to let lists grow to the point where users must scroll forever to find what they are looking for, or tables to the point where it becomes difficult to see all the columns on screen. As data grows, work with the system database administrator to create indices or queries that keep it manageable and maintain system performance.
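
For example, here is a minimal sqlite3 sketch of the kind of index a database administrator might add so lookups stay fast as a results table grows; the table and column names are hypothetical, and production LIMS databases will use their own engines and tooling.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE results (sample_id TEXT, assay TEXT, value REAL)")

    # Without an index, finding one sample's results scans the whole table;
    # with it, the database seeks directly to the matching rows.
    db.execute("CREATE INDEX idx_results_sample ON results (sample_id)")

    plan = db.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM results WHERE sample_id = ?", ("S-001",)
    ).fetchall()
    print(plan)  # the plan reports a search using idx_results_sample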

Configuring Master Data for Scalability

The Master Data Plan document will be the guide for how to scale your master data. This document is usually written during the development phase of a software update or data migration project, but it often encompasses only the initial configuration. To make your plan scalable, it should also include instructions for dealing with data as it grows after the initial configuration is done.

Schedule who and when for all tasks. When putting together the project timeline, adequate time and resources should be allotted for completing master data tasks. Too often, master data is left to the last minute or the time and resources needed are underestimated. Missed entries are rationalized away with the belief that they can be entered as needed. Often this results in a deployment that is never fully completed.

When writing the Master Data Plan for the project, make sure you identify who will be entering data, who will do the testing, and when tasks will be completed. This provides a check that everything is complete for go-live. If data will be entered after go-live, include it in the plan. Then expand on this baseline to explain the who and when for entering and testing data into the future.

Define how data is transferred into the system (Data Migration plan). A central aspect of any upgrade or implementation is how to migrate master data from the old system (or no system at all) into the new one. Your Master Data Plan should include the list of tasks needed to put all the master data into the new system. As the configuration is built, tasks will fall into an order required by the system. For example, when migrating an analysis and its components, the component table may need to be loaded before the analysis table.

As this information is recorded, it becomes the Data Migration plan, which provides the ability to import or export large amounts of data. When you have a new product, you can then add the master data as groups of tables instead of entering pieces individually. This is a much faster and cleaner method of adding data, and it can be verified using scripts instead of checking manually entered fields one at a time.
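
As a sketch of sequencing those tasks, Python's standard graphlib can compute a load order from declared dependencies. The table names and dependencies below are illustrative; a real migration plan would list the actual tables from your configuration.

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Each table maps to the tables it depends on (its required predecessors).
    depends_on = {
        "analysis": {"component", "unit"},
        "component": {"unit"},
        "specification": {"analysis", "product"},
        "unit": set(),
        "product": set(),
    }

    load_order = list(TopologicalSorter(depends_on).static_order())
    print(load_order)
    # e.g. ['unit', 'product', 'component', 'analysis', 'specification']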

Configure the data map for growth and provide rationale. The Master Data Plan will include the data map outlining the business units involved, the workflows, how the workflows relate to each other, and the list of master data fields from each workflow. Instead of leaving the data plan as just a list of the initial setup, include how to manage data as it grows over time. Be sure to document the rationale behind why the plan was configured the way it was, so it is easy to understand how to expand it in the future.

Consider how master data is created as your business grows. Some questions to consider are:

  • What are the criteria to determine if master data fields are added or not?
  • Will new tables or sub-tables need to be created?
  • How will new workflows be added?
  • How will lists be handled when they get too big?
  • What happens when data is no longer needed?
  • Why are some third-party systems included (e.g., ERP, manufacturing) while others are not (e.g., training, document management)?
  • Will instruments, equipment, or another system be incorporated in the future?

Answers to these and similar questions provide a framework for expansion that is easy to understand.

Define the naming convention to be used and the rationale behind it. The rationale for naming conventions should be included for the same reasons as the rationale for the configuration. This includes the rules and variations behind corporate and site field names. For a small deployment of only a few labs or sites, there may not be any variations to consider. For a large deployment, however, there could be many site variations. If the naming convention is based on your business structure, the rules can be specific, because the business structure is less likely to change.

Conclusion

When creating master data for a new LIMS, there are many things that should be done to ensure the data is easy to manage and can grow as your system matures. We’ve provided a number of key best practice recommendations in this blog that will help you improve maintainability and scalability in your LIMS when configuring your master data. Following these recommendations will ultimately help you increase the ROI of your LIMS over its full lifespan. Be sure to tune in for part 4 of our Master Data Blog Series, where we will discuss more best practice recommendations for master data quality control and change management.

Astrix is a laboratory informatics consulting firm that has been serving the scientific community since 1995. Our experienced professionals help implement innovative solutions that allow organizations to turn data into knowledge, increase organizational efficiency, improve quality, and facilitate regulatory compliance. If you have any questions about our service offerings, or if you would like to have an initial, no-obligations consultation with an Astrix informatics expert to discuss your master data strategy or LIMS implementation project, don't hesitate to contact us.
