Your Report 2.0: A Data Engineer’s Guide to Migrating BI Reports from Old to New Systems: Part 2, Building the Extract

Requirements in hand, you are ready to start the fun of actually building the report extract to match the data mapping contract document you produced earlier!

If you haven’t yet, I highly recommend reading part one of this series here: Part 1: Requirements Gathering

We left off after completing the gap analysis for the report.  More than likely, that activity surfaced several changes your new system needs in order to support the report.  For example, if you are sourcing from a newly built Data Warehouse, the Warehouse may not yet contain all the data you need, so enhancements might be required before you can source it.  Or, if you are sourcing from a Datamart that feeds off a Data Warehouse, you might need to modify the ETL that moves data into the Datamart.  However you source your data, the first step before building the extract is to get the data you need loaded into the location the report will be sourced from.

Sample Data Flow
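As one tiny example of the ETL changes mentioned above, adding a missing column to a Datamart often boils down to extending the load step that copies it out of the Warehouse.  The names below are purely illustrative; assume a SQL Server style environment.

-- Hypothetical ETL step: refresh a Datamart table from the Warehouse,
-- now including a newly required column (DischargeDateTime).
-- All schema, table, and column names are made up for illustration.
TRUNCATE TABLE mart.Visit;

INSERT INTO mart.Visit (VisitID, VisitDateTime, DischargeDateTime, DepartmentKey, LastUpdatedDateTime)
SELECT
    f.VisitID,
    f.VisitDateTime,
    f.DischargeDateTime,    -- the column the report needed but the mart lacked
    f.DepartmentKey,
    f.LastUpdatedDateTime
FROM dw.FactVisit AS f;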

Some other considerations at this point:

  1. Are your ETLs running frequently enough to support the report’s refresh requirements? For example, if your report needs to be refreshed hourly, but the ETLs that load the Datamart only run overnight, you are likely going to need to update the schedule.
  2. Are there any aggregations that should be performed in the ETL rather than in the report itself? If you have a blazing fast Data Warehouse, you should use it! (A quick sketch of this follows the list.)
  3. Do you have all the historic data you need? Does the historic data change? If not, consider hosting a historic view of the data somewhere for speedier consumption by the reporting tool, or simply keeping that historic data in the tool’s own data store.  We will discuss incrementals further in the sections below.
  4. Your goal is to get the data into the simplest form possible for consumption by the report. Try to avoid using the reporting tool as another layer of ETL; it wasn’t built for that purpose.
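To make point 2 concrete, here is a minimal sketch of pre-aggregating in the Warehouse instead of in the report, written as a summary view.  All schema, table, and column names are made up for illustration, and I am assuming a SQL Server style warehouse.

-- Hypothetical daily summary view: the warehouse does the aggregation
-- so the report only has to render the results. All names are illustrative.
CREATE VIEW rpt.DailyVisitCounts
AS
SELECT
    CAST(v.VisitDateTime AS DATE) AS VisitDate,
    d.DepartmentName,
    COUNT(*)                      AS VisitCount
FROM dw.FactVisit AS v
JOIN dw.DimDepartment AS d
    ON d.DepartmentKey = v.DepartmentKey
GROUP BY CAST(v.VisitDateTime AS DATE), d.DepartmentName;

The report then selects from rpt.DailyVisitCounts directly rather than grinding through detail rows on the front end.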

Once you have all the data in the final location you are going to source the report from, you are ready to write the actual report extract. If you have completed all the previous steps, this step should be quick and easy! The extract will typically be a view or stored procedure that exposes the data in the exact format the reporting tool requires.  The closer the output matches exactly what the report needs, the faster the report will run and the snappier it will feel on the front end.  I always suggest decoupling the extract from the source tables in the Data Warehouse or Data Mart, so that if you have to make changes to the model, it doesn’t affect the report extracts. A view or a stored procedure gives you that decoupling easily.
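As a minimal sketch of that decoupling, here is what such a view might look like.  The names (dw.FactVisit, rpt.VeryImportantBusinessReport_Extract, and so on) are hypothetical and assume a SQL Server style warehouse; they are not from any real system.

-- Hypothetical reporting view: the report reads from this stable contract,
-- so the underlying warehouse model can change without touching the report.
CREATE VIEW rpt.VeryImportantBusinessReport_Extract
AS
SELECT
    v.VisitID,
    v.VisitDateTime,
    p.PatientMRN,
    d.DepartmentName,
    v.LastUpdatedDateTime
FROM dw.FactVisit AS v
JOIN dw.DimPatient AS p
    ON p.PatientKey = v.PatientKey
JOIN dw.DimDepartment AS d
    ON d.DepartmentKey = v.DepartmentKey;

If the model changes later, you only rework the view; the reporting tool never notices.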

If the extract covers more than just a short, recent period of time, or if the report you are building takes a historic look at the data for all time, you are likely going to want to implement the extract as an incremental procedure.  Otherwise, it will take an extremely long time for the extract to run, and even more time for the data to cross the network and land in the report.  The easiest way (but not the only way, and there are always exceptions) to set up incrementals is the following:

  1. Make sure you have metadata on the tables in your Data Warehouse or Datamart.  At the least, this should include a ‘created date’ and a ‘last updated date time’.  This is a best practice for data management in general, not just for writing report incrementals.
  2. Pick your ‘main’ table / unique identifier.  What is the main table you want to drive the incrementals? For example, if you have a report that reports on Emergency Room visits, it will likely be your main ‘Visit’ table.
  3. Have the reporting tool pass your extract procedure the MAX last updated date that it has been sent by previous runs.
  4. Your procedure should take the MAX last updated date as a parameter, and only return records that have a ‘last updated date’ > the MAX of the last update.
  5. The report will have to do a merge into whatever data store it uses on the back end. By merge, I mean: if the record exists, update it; if not, insert it. SQL’s MERGE statement handles exactly this, and a rough sketch of the whole pattern follows this list.
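Putting steps 1 through 5 together, here is what the incremental extract procedure might look like.  Everything is illustrative: the procedure name, the parameter, and the view it selects from are assumptions for this example, not the one true implementation.

-- Hypothetical incremental extract procedure. The reporting tool passes in
-- the MAX 'last updated date' it received on its previous run.
CREATE PROCEDURE rpt.GetVisitExtract_Incremental
    @MaxLastUpdatedDateTime DATETIME2
AS
BEGIN
    SET NOCOUNT ON;

    SELECT
        v.VisitID,              -- unique identifier from the 'main' table
        v.VisitDateTime,
        v.PatientMRN,
        v.DepartmentName,
        v.LastUpdatedDateTime   -- the report keeps the MAX of this column
    FROM rpt.VeryImportantBusinessReport_Extract AS v
    WHERE v.LastUpdatedDateTime > @MaxLastUpdatedDateTime;
END;

On the very first run, the reporting tool can pass something ancient like '1900-01-01' so the procedure returns everything.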

If you are getting merge errors in your incrementals, you might want to read my blog post on troubleshooting this issue:

https://datastrides.wordpress.com/2018/09/19/first-post-identifying-duplicates-and-fixing-incremental-merge-errors/

The benefits of incrementals are immense: much faster processing and snappier report refreshes. Note that if you don’t have the ability to do the merging on the reporting tool end, and you are not dealing with millions of records, you can do the merging into a permanent table on your Warehouse or Mart that houses the data in the format the report requires.  Then the report can simply pull all of the data. This approach should only be used for reports that do not work with large volumes of data. A sketch of this pattern is below.
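Here is a hedged sketch of that warehouse-side merge, again with made-up object names and assuming SQL Server’s MERGE statement.

-- Hypothetical merge of the newest rows into a permanent reporting table
-- (rpt.VisitReportData), which the report then pulls in full.
DECLARE @MaxLastUpdatedDateTime DATETIME2 =
    ISNULL((SELECT MAX(LastUpdatedDateTime) FROM rpt.VisitReportData), '1900-01-01');

MERGE rpt.VisitReportData AS target
USING (
    SELECT VisitID, VisitDateTime, PatientMRN, DepartmentName, LastUpdatedDateTime
    FROM rpt.VeryImportantBusinessReport_Extract
    WHERE LastUpdatedDateTime > @MaxLastUpdatedDateTime
) AS source
    ON target.VisitID = source.VisitID         -- if the record exists, update it...
WHEN MATCHED THEN
    UPDATE SET
        target.VisitDateTime       = source.VisitDateTime,
        target.PatientMRN          = source.PatientMRN,
        target.DepartmentName      = source.DepartmentName,
        target.LastUpdatedDateTime = source.LastUpdatedDateTime
WHEN NOT MATCHED BY TARGET THEN                -- ...if not, insert it
    INSERT (VisitID, VisitDateTime, PatientMRN, DepartmentName, LastUpdatedDateTime)
    VALUES (source.VisitID, source.VisitDateTime, source.PatientMRN,
            source.DepartmentName, source.LastUpdatedDateTime);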

As you run through all of these considerations, be mindful of any changes you are making and how they will affect the reporting tool.  Sometimes the requirement is to change the report code as little as possible, in which case you should strive to match the old process / extract as closely as you can.  This usually happens when you have a short time frame.  However, if there are performance improvements or bugs you can fix using the points above, and you have the time, definitely consider them!

Also, ALWAYS include header comments and comment your code! This gives full visibility into what the code does, what changes were made, when they were made, and why.  I use something simple like this, but feel free to use whatever fits your aesthetic:

---------------------------------------------------------------
--AUTHOR:  Ryan Kennedy
--CREATED: 02/16/2019
--DESC:    Exposes the data extract for the Very Important Business Report
--UPDT:
--[Ryan K - 2/17/19] Descriptive comment of changes made and why
---------------------------------------------------------------

Once the extract is ready to go, you can move on to Phase 3: Unit Testing and QA. (Coming soon)
