On the face of it, a company that I joined in the early 2000s had a fully functional operational workforce planning process in place. Their model was one of the more sophisticated ones that I had come across, but it was also logical and not over-burdened with fancy Excel programming. The problems I came across after I took the model over were more to do with the data feeding in and the reporting coming out of it.
Data in the model was sourced from both manual data collection and systematic feeds. A common trap with systematic feeds is to assume that because the data is generated by a system it must be reliable and consistent. In fact, systematic data requires the same process of validation as any other data source, and may turn out to be much less consistent than you would expect.
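As a minimal illustration of treating a systematic feed with the same suspicion as manual data, the sketch below applies simple completeness and plausibility checks to hypothetical feed rows. The field names and the pick-rate threshold are assumptions for the sketch, not details of the actual feed.

```python
# Hedged sketch: basic validation of rows from a systematic feed.
# Field names and the plausible-rate limit are illustrative assumptions.

def validate_row(row, max_picks_per_hour=400):
    """Return a list of problems found in one feed row (empty if clean)."""
    problems = []
    if row.get("hours") is None or row["hours"] <= 0:
        problems.append("missing or non-positive hours")
    if row.get("picks") is None or row["picks"] < 0:
        problems.append("missing or negative picks")
    # Only compute a rate once both fields are known to be usable.
    if not problems and row["picks"] / row["hours"] > max_picks_per_hour:
        problems.append("implausible pick rate")
    return problems

rows = [
    {"operator": "op1", "hours": 7.5, "picks": 1800},  # clean
    {"operator": "op2", "hours": 0,   "picks": 500},   # bad hours
    {"operator": "op3", "hours": 2.0, "picks": 1200},  # 600/hr, implausible
]
for r in rows:
    print(r["operator"], validate_row(r))
```

Checks like these are deliberately crude; the point is simply that a systematic source gets screened before it is trusted, rather than being waved through because "the system produced it".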
I soon became worried that one part of the model, forecasting the workload demand in an auto-pick area of a warehouse, was not working as efficiently as it should. The hours used were always much higher than the hours predicted. KPI reports from the system always showed high performance rates and this fed back into the model. It wasn’t going to be easy to pinpoint the source of my concern since there was no obvious problem with the data.
One of the analysts at the site helped me validate the data coming from the system. We took observations and used some of the other system reports to cross-check our main data source. Eventually I realised that the problem was not an issue with any part of the system (not surprising, really); it was a clear case of human intervention disrupting the process.
Operator performance was one of the KPIs produced by the reporting, and the team leaders were quick to learn that the system could be manipulated to inflate these figures. This helped the department to look good and to deliver on its targets, and it also spared the managers the grief they would have got if performance ever fell below the standard. Unfortunately for me, over time these false reports had become the norm, and I was going to get the job of making sense of it, putting it right and dealing with the consequences.
The problem was that towards the end of the shift the orders going through the auto-pick machine tailed off. At some point pick sections ran out of work and pickers had to move between sections in order to pick the last remaining orders. Eventually it reached a point where pickers could start to be transferred out of the area completely. Managers realised that this tail end had a massive impact on performances and so had colluded to sign out all pickers from the system at the point when the orders began to tail off (to leave them to pick ‘off-system’).
This only became clear when I compared the attendance times from HR records with the productive time reported by the system for operatives in that area. I had to ask why there was such a consistent pattern of discrepancy. Once the problem had been identified I put forward a solution and we could all move on. I had my data source cleaned up, and the operations managers had a legitimate way of filling in the gaps. Operator performances were protected, but the overall performance of the machine turned out to be around 15% lower than previously thought. This became a priority issue, and we created a process improvement group to take a fresh look at all aspects; it wasn't long before the KPIs started to improve.
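The reconciliation that exposed the problem can be sketched in a few lines, assuming hypothetical attendance and system-productivity records keyed by operator and date. The record layout, the one-hour tolerance and all the names here are illustrative, not the actual systems involved.

```python
# Hedged sketch: comparing HR attendance hours against system-reported
# productive hours per operator per day, flagging consistent gaps.
# The record layout and the 1-hour tolerance are illustrative assumptions.

def reconcile(attendance, productive, tolerance=1.0):
    """Return {(operator, date): gap} where attended hours exceed
    system-reported productive hours by more than `tolerance`."""
    gaps = {}
    for key, attended in attendance.items():
        reported = productive.get(key, 0.0)
        gap = attended - reported
        if gap > tolerance:
            gaps[key] = round(gap, 2)
    return gaps

attendance = {("op1", "2004-03-01"): 8.0, ("op2", "2004-03-01"): 8.0}
productive = {("op1", "2004-03-01"): 6.5, ("op2", "2004-03-01"): 7.8}
print(reconcile(attendance, productive))  # → {('op1', '2004-03-01'): 1.5}
```

An occasional gap means nothing; it is the same operators showing the same gap at the same end of every shift that points to a process rather than to noise, which is what the pattern in this case eventually revealed.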
This episode taught me all sorts of things about human nature and the reliability of systematic data. But most of all it taught me to back my own judgement and skills in this area. This was a long-standing issue that the business had failed to address for many years, and it had struck me early on that something wasn't right. Operational workforce planning had definitely become my area of expertise, but I couldn't yet work out why businesses weren't giving it a higher priority, given the potential problems if it wasn't done effectively.
Next time, another experience, before I get back ‘on topic’.