    Business Intelligence for JDA/BlueYonder Systems:

    Analyzing the Need for a Labor Management System

    By Saad Ahmad, Executive VP of Smart IS International

    Abstract

    Blue Yonder (BY) tables contain valuable information that can determine important Key Performance Indicators (KPIs) for a site. Such data lives in tables like “dlytrn” that are extremely large and not easy to query. Moreover, due to deployment decisions, a customer may have more than one instance of the software that houses this data.

    In several cases, customers implement a labor management system (LMS) in order to get important labor statistics. The BY user community generally assumes that in order to analyze labor data, they need to install the complete LMS.

    We have seen several cases where, after an expensive implementation of the LMS, customers find it difficult to maintain the data. That is because, all along, their real objective was simply to get some statistics. This paper does not intend to imply that Labor Management Systems are not needed in general; but if the objective is to analyze labor statistics, then a BY WMS system already has all the data elements that are needed.

    In the present era of “Big Data”, an array of new approaches has emerged. For instance, if we need to analyze labor data over a long period, do we really need to do time studies? Do we really need to invest in measuring the speed of a vehicle as it moves through the warehouse? Do we really need to know the time it takes to travel from one location to another? While in years past we would have assumed that we do, we can now apply statistical principles to large amounts of data to discern these values without investing upfront in the plumbing required for a full LMS implementation.

    This is analogous to studying the dynamics of a forest. One approach is to study the processes within a single tree and then apply them to the forest. That will work, but it is not an efficient way to understand the dynamics of a forest. An alternative is to consider the forest on its own. Requiring a full LMS to get the labor statistics is analogous to mapping the processes of a tree to understand the forest.

    For instance, if we know from other mechanisms that our shipments do not have quality issues and the volume of associated returns is not a concern, then we can assume that all the processes associated with the relevant “job codes” are being performed per Standard Operating Procedures. With that backdrop, a typical “bell curve” can be used to analyze the performance.

    So, as opposed to the typical LM approach, where the performance of an assignment is judged against an upfront metric for the job, we can simply define that measure based on the bell curve. We can thus assign a grade to each performance and use that grade to define acceptable performance.
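
    As an illustration, the following Python sketch grades assignment durations against the population bell curve. It assumes durations have already been derived from dlytrn; the z-score thresholds and grade labels are illustrative assumptions, not prescribed standards.

        # A minimal sketch of bell-curve grading. Assumes assignment durations
        # (in minutes) for one job code; thresholds and labels are illustrative.
        import statistics

        def grade_assignments(durations_minutes):
            """Grade each assignment by its z-score against the population."""
            mean = statistics.mean(durations_minutes)
            stdev = statistics.stdev(durations_minutes)
            grades = []
            for duration in durations_minutes:
                z = (duration - mean) / stdev  # lower duration means faster work
                if z <= -1.0:
                    grades.append("A")  # well ahead of the curve
                elif z <= 0.5:
                    grades.append("B")  # at or better than typical pace
                elif z <= 1.5:
                    grades.append("C")  # slower than most peers
                else:
                    grades.append("D")  # outlier; review SOP adherence
            return grades

        # Example: observed durations for one job code across assignments
        print(grade_assignments([12.0, 14.5, 13.2, 25.0, 11.8, 15.1]))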

    Similarly, if we have been using the system for years, we already know how long the movements between various warehouse areas have been taking. Since we have that knowledge already, do we then need to measure the vehicle speed? Even in that case, we can utilize the above strategy to understand the performance.
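
    To make this concrete, the sketch below derives travel-time expectations from history rather than from measured vehicle speeds. The input rows and field names (from_area, to_area, seconds) are illustrative assumptions about what would be extracted from dlytrn.

        # A minimal sketch: median observed travel time per area pair,
        # computed from historical movement records instead of time studies.
        import statistics
        from collections import defaultdict

        def historical_travel_times(moves):
            """Median duration (seconds) for each (from_area, to_area) pair."""
            by_pair = defaultdict(list)
            for move in moves:
                by_pair[(move["from_area"], move["to_area"])].append(move["seconds"])
            return {pair: statistics.median(secs) for pair, secs in by_pair.items()}

        moves = [
            {"from_area": "RCV", "to_area": "RESV", "seconds": 95},
            {"from_area": "RCV", "to_area": "RESV", "seconds": 110},
            {"from_area": "RESV", "to_area": "SHIP", "seconds": 160},
        ]
        print(historical_travel_times(moves))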

    Assignments and Job Codes

    BY LMS introduces the interesting concepts of “Assignments” and “Jobs”.

    • A job is a sequence of activities required to perform a specific use case. Some implementations define jobs more granularly than others. For example, we may say that moving a full pallet from one location to another is a job. Similarly, we may say that performing a pallet build operation is a job. In some cases, we can define a pallet pick from certain areas as a specific job. The job is then the basic unit for defining the acceptability of user performance. When defining a job, we can include non-system activities as well; that data helps in defining the SOP and the time it should take to complete a single instance of the job.
    • The travel time required for a job is separately computed based on the warehouse map.
    • Assignments are groups of activities that represent an instance of a job.
    • BY LMS assumes that the start time of an assignment is the end time of the previous assignment. That is an interesting concept that simplifies the overall system: we do not need to independently mark the end of every assignment.

    If we analyze the associated setup of a job, we will see that it is based on the concepts of “Activity Codes” and “Areas”; LM determines these values from the underlying WMS context and concepts.

    Feasibility of Using Only WMS Data

    It is certainly possible to get most of this data from the WMS dlytrn table. The data is already being captured there; all we need to do is harvest it.

    The concept of an assignment is a bit tricky since dlytrn itself is too granular, but in 2020 the number of rows in a table should not intimidate us. We can build the assignment data from the dlytrn table by defining assignments objectively. For example, a typical inventory movement assignment can be defined as follows:

    • For a given user and device, a move to the device marks the start of an assignment. We can consider that dlytrn_id to be the assignment number. If there were no moves off the device between this dlytrn row and the previous dlytrn row where inventory was moved to the device, then this row belongs to the same assignment as the previous one. This covers the case of picking up multiple pallets.
    • A move from the device marks the end of an assignment. We can determine the assignment number by looking up the previous movement to the device by the same user and device.

    This simple algorithm allows us to group dlytrn rows in the same way that BY LM groups them. Whereas LM builds these groupings while the activities are happening, our logic applies afterwards, but the end result is the same.
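
    A minimal Python sketch of this grouping logic follows. It assumes dlytrn-like rows, sorted chronologically, with fields dlytrn_id, usr_id, devcod, and a direction flag; these field names are illustrative, and real dlytrn columns vary by WMS version.

        # A minimal sketch of grouping dlytrn rows into assignments.
        # direction is "to_device" (pickup) or "from_device" (drop).
        def assign_assignment_ids(rows):
            """Tag each chronologically ordered row with its assignment id."""
            current = {}    # (usr_id, devcod) -> open assignment id
            unloaded = {}   # (usr_id, devcod) -> True once a drop happened
            for row in rows:
                key = (row["usr_id"], row["devcod"])
                if row["direction"] == "to_device":
                    # Start a new assignment only if the device was unloaded
                    # since the last pickup; otherwise this is another pallet
                    # being added to the same assignment.
                    if key not in current or unloaded.get(key):
                        current[key] = row["dlytrn_id"]
                        unloaded[key] = False
                else:
                    unloaded[key] = True  # a drop closes out the open pickup
                row["assignment_id"] = current.get(key)
            return rows

        rows = [
            {"dlytrn_id": 1, "usr_id": "U1", "devcod": "D1", "direction": "to_device"},
            {"dlytrn_id": 2, "usr_id": "U1", "devcod": "D1", "direction": "to_device"},
            {"dlytrn_id": 3, "usr_id": "U1", "devcod": "D1", "direction": "from_device"},
            {"dlytrn_id": 4, "usr_id": "U1", "devcod": "D1", "direction": "to_device"},
        ]
        for row in assign_assignment_ids(rows):
            print(row["dlytrn_id"], row["assignment_id"])  # rows 1-3 share one id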

    For a given assignment, we also know the source areas involved and the various activity codes logged in between. This information can be used to deduce the job code.
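
    For example, a simple rule table can map the observed areas and activity codes to a job code. The rules and names below are illustrative assumptions; a real site would derive them from its own WMS configuration and SOPs.

        # A minimal sketch of deducing a job code from an assignment's
        # source areas and activity codes. The rule table is illustrative.
        JOB_RULES = [
            # (required source areas, required activity codes, job code)
            ({"RCV"}, {"PALLET MOVE"}, "PUTAWAY-FULL-PALLET"),
            ({"RESV"}, {"PALLET PICK"}, "FULL-PALLET-PICK"),
            ({"PICK"}, {"CASE PICK", "PALLET BUILD"}, "PALLET-BUILD"),
        ]

        def deduce_job_code(source_areas, activity_codes):
            """Return the first job whose area/activity signature matches."""
            for areas, activities, job in JOB_RULES:
                if areas <= source_areas and activities <= activity_codes:
                    return job
            return "UNCLASSIFIED"

        print(deduce_job_code({"RESV"}, {"PALLET PICK", "PALLET MOVE"}))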

    With the LMS, the flow was:

    1. Create the job code.
    2. Assignment is created when work starts. And it is tied to the job.
    3. Activities attach to the assignment.

    In the Smart IS approach:

    1. Activities happen first and are logged in dlytrn.
    2. We group activities into assignments as an analysis exercise.
    3. Based on the activities in the assignment we deduce the job that was being performed.

    The result is that we can measure performance objectively at the same granularity the LMS provided, without the associated overhead. We gain some additional benefits as well:

    • We objectively know the “end time” of an assignment and the “start time” of the next, so we can measure the overhead between two successive assignments (see the sketch after this list). That may yield some interesting truths about the activities on the floor.
    • Since this is an analytical exercise, we can redefine the measures and apply them to historical data.
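
    Here is a minimal sketch of that overhead measurement, assuming each assignment has already been reduced to a start and end timestamp; the field names are illustrative.

        # A minimal sketch: idle/overhead minutes between successive
        # assignments for one user. Field names are illustrative.
        from datetime import datetime

        def assignment_gaps(assignments):
            """Minutes between the end of one assignment and the start of the next."""
            ordered = sorted(assignments, key=lambda a: a["start"])
            gaps = []
            for prev, nxt in zip(ordered, ordered[1:]):
                minutes = (nxt["start"] - prev["end"]).total_seconds() / 60.0
                gaps.append((prev["assignment_id"], nxt["assignment_id"], minutes))
            return gaps

        assignments = [
            {"assignment_id": 101, "start": datetime(2020, 6, 1, 8, 0), "end": datetime(2020, 6, 1, 8, 14)},
            {"assignment_id": 102, "start": datetime(2020, 6, 1, 8, 21), "end": datetime(2020, 6, 1, 8, 40)},
        ]
        print(assignment_gaps(assignments))  # [(101, 102, 7.0)]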

    To put this concept into perspective, it took us less than 15 minutes to analyze 14.5 million rows in dlytrn and determine the assignment for each row on a development-class machine. This implies that while the initial analysis of the data will take several hours, the ongoing daily process will complete in a few minutes. The result is a data set where an assignment identifier is associated with each row in dlytrn, grouping the rows together for further analysis.

    Once data is grouped in this fashion, we can do deeper analysis over time for business use cases. We can utilize the same concept to group the dlytrn data into various types of assignments, including customized use cases.

    Conclusion

    Most importantly, do not even think about purging that billion-row dlytrn table sitting in your archive instance. It is not overhead but a gold mine. Chances are that most of your questions can be answered from that data source, and you can easily compare it over time and across warehouses. Our general recommendation is to extract the following for such analysis:

    • dlytrn, trlract, invact, ordact
    • All shipment-related data for dispatched shipments, down to inventory details, including the pckwrk and pckmov tables
    • All receiving table data

    Once the data has been extracted from all instances, we recommend a daily extract from the production instance covering the last two days. Depending on the customer’s infrastructure, this data may be housed in the customer’s own database, but these days it may be more optimal to push it to cloud stores. The advantage of the latter strategy is not only a simplified infrastructure but also better performance for the final analysis. For example, a solution like Google BigQuery can easily provide ad-hoc access to billions of rows. Building dashboards is also simplified in a cloud-based environment.
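
    As a flavor of what that looks like, the query below summarizes assignments directly in BigQuery using its official Python client. The project, dataset, table, and column names (including the derived assignment_id) are illustrative assumptions about the extracted schema.

        # A minimal sketch of ad-hoc analysis against extracted dlytrn data
        # in Google BigQuery. Table and column names are illustrative.
        from google.cloud import bigquery

        client = bigquery.Client()  # uses application default credentials
        query = """
            SELECT assignment_id,
                   COUNT(*) AS activity_count,
                   TIMESTAMP_DIFF(MAX(trndte), MIN(trndte), MINUTE) AS minutes
            FROM `my-project.wms_archive.dlytrn`
            GROUP BY assignment_id
            ORDER BY minutes DESC
            LIMIT 20
        """
        for row in client.query(query).result():
            print(row.assignment_id, row.activity_count, row.minutes)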

    Smart IS can assist since we intimately understand all aspects of this domain. We understand the core LM concepts and have in-depth knowledge of the BY WMS. We also understand the intricacies of building large data warehouses and visualizations in a variety of toolsets. We can offer a small fixed-fee project scope to extract existing data to a cloud repository along with the capability to explore the data and some canned dashboards.
