

Dean Dorcas ... The Future of Supply Chains ... Understanding Cost to Serve


Understanding Cost to Serve
Posted by Dustin Mattison in The Future of Supply Chains on Dec 22, 2014, 3:06:55 AM

I interviewed Dean Dorcas, who discussed understanding cost to serve.

It’s great to speak with you today, Dean. I’m looking forward to hearing your views on understanding cost to serve. Can you start by providing a brief background of yourself?

Sure. I started a company about 17 years ago called Integrated Management Systems; we’re based out of Seattle, Washington. We went into distribution centers and other labor-intensive operations and ran them for a fixed cost: we supplied the management and the crew and ran those operations for the customer. To make that work, we had to run them less expensively than the customers were running them themselves, so we really focused on the tools and methodologies we needed to drive productivity up significantly enough to create that value and make each operation profitable.

We really focused on different ways of managing the operations and on what information our managers needed in order to be effective. Up until that time our tools were spreadsheets, and they were pretty good for the smaller operations, but when we started running distribution centers for some of the larger retailers, we found that the size of the operations, the complexity of the work, and the expectations of the customers were more than our tools could handle. So we decided to develop internal systems that would give us three things we needed in order to run those operations.

The first one was cost to serve. We had to get a really clear understanding of where every dollar was being spent in the operation, whether on a direct process or an indirect process, and what our costs were for each of those different types of jobs. What is my cost when I’m doing a cross dock for this type of category or for this customer? What about when I’m doing picking or replenishment, et cetera? Really understanding our cost to serve let us make sure we were pricing accordingly, and if we were losing money on a process or on a customer within that process, we knew where those costs were. That was cost to serve; I can go into it in more detail, but that was the first thing we were trying to achieve.
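To make the idea concrete, here is a minimal activity-based costing sketch in Python. The hourly rate, customer names, activities, and volumes are all invented for illustration; this is not Easy Metrics code, just the shape of the calculation Dean describes:

```python
# Minimal cost-to-serve sketch: allocate labor cost to (customer, activity)
# pairs, then derive a cost per unit for each. All figures are hypothetical.

HOURLY_RATE = 18.00  # fully loaded labor cost per hour (assumption)

activity_log = [
    # (customer, activity, labor_hours, units_handled)
    ("RetailerA", "cross_dock",    120.0, 9600),
    ("RetailerA", "picking",       300.0, 7500),
    ("RetailerB", "cross_dock",     80.0, 5200),
    ("RetailerB", "replenishment",  60.0, 3000),
]

costs = {}
for customer, activity, hours, units in activity_log:
    entry = costs.setdefault((customer, activity), [0.0, 0])
    entry[0] += hours * HOURLY_RATE  # dollars spent on this activity
    entry[1] += units                # units produced by that spend

for (customer, activity), (cost, units) in sorted(costs.items()):
    print(f"{customer:10s} {activity:14s} ${cost:9.2f} total  "
          f"${cost / units:.3f}/unit")
```

With this kind of per-activity, per-customer cost in hand, the pricing and loss questions Dean raises become straightforward comparisons.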

The second thing we needed was a good understanding of, and visibility into, the performance of our employees, down to the individual level. In those operations there was a lot of variation in the type of work being done. As an example, a cross dock might take three hours to unload an inbound trailer and divide it up into the outbound trailers, or it might take 30 or 40 hours if there were a lot more cases, a lot of SKUs, a lot of complexity, maybe the freight came in as a shotgun load and was mixed, et cetera. A lot of different things went into how long it should take.

In order to hold people accountable, we had to get down to fair labor standards and really understand how long the work should take; then we could see every day how employees were performing. That was critical for us because when we ran those big operations, we might have a hundred people on the graveyard shift and know only that we didn’t hit our numbers for the shift, not why. We’d hear stories about people building false walls in the back of a container at two in the morning and going to sleep, but we didn’t know who was doing it; we just didn’t have the visibility we needed to drive productivity up. That was the second thing we really needed to develop.

The third thing was a pay-for-performance system at the individual level. Up until that point we ran pay-for-performance on a team basis, but when you get up into large crews like that, team-based incentives just aren’t effective, so we needed to break that down and reward our associates individually: if they hit certain stretch goals, we wanted to reward them with a portion of the savings they created. We ended up building that system internally and rolled it out at a big distribution center for one of the big retailers, and within about six months we took them from 73 cases an hour up to 157 cases an hour. We more than doubled their productivity, saved that customer about $1.5 million a year in labor, and took that operation to number one in their network for three years in a row. It was a really big success for us.

At that point, as we looked around the market, we realized we had created something pretty unique, and we made the decision, around 2008, to commercialize the software as a cloud-based system, software as a service, and work with other companies so they could use those tools to run their own operations. Eventually we sold off the outsourcing company and focused completely on the software. That’s the background of where Easy Metrics came from; it’s really a spinoff from an outsourcing operation that developed those tools for its own use.

Can you talk about where these tools are needed and why we need them?

If we break it down into three areas and look first at cost to serve, what we find is that a lot of times companies have only a macro-level view of their costs. I might know that my labor costs are 32 percent of my revenue, but what I don’t know is that on, say, a third of my customers or a third of my operations I’m losing money while making money on the others, and I don’t know which is which. The fix is getting visibility at a granular level, called activity-based costing: what is my cost when I look at the actual activity it takes to deliver that service? What most companies find, historically, is that on a good portion of their operations, about 30 percent, they’re actually not making any money. They’re doing a lot of work, but they’re not nearly as profitable as they could be.
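As a rough illustration of that “a third of my operations lose money” point, here is a hedged sketch that flags unprofitable customers once activity-based costs are known. The customer names, revenues, and cost figures are made up; in practice the cost column would come from the kind of allocation shown earlier:

```python
# Flag customers whose activity-based cost exceeds the revenue they bring in.
# All inputs are hypothetical.

customers = {
    # customer: (monthly_revenue, activity_based_cost)
    "Acme":    (50_000, 38_000),
    "Blue Co": (30_000, 33_500),  # doing a lot of work at a loss
    "Carter":  (42_000, 40_900),  # barely breaking even
}

for name, (revenue, cost) in customers.items():
    margin = (revenue - cost) / revenue
    flag = "LOSS" if margin < 0 else "ok"
    print(f"{name:8s} margin {margin:6.1%}  {flag}")
```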

Getting to that level of understanding really lets managers focus on the areas where there’s a lot of opportunity to improve profitability. As an example, one of our customers, a third-party logistics company, was charging 10 cents a unit for a labeling project they were doing for their customer: every time they put a label on, they charged 10 cents. When they ran the cost to serve, they found it was actually costing them 12 cents per unit to do that job, so they were losing about 20 percent every time they did the work. With that level of understanding, they were able to adjust their pricing. That’s just one example of where this can be helpful.

Another customer was a manufacturing company. They would bring kitchen items in from Asia and sell them to their customers here, and different customers would ask for different value-added work to be done to the product. In general, the company charged a fixed overhead cost for each of those value-added requirements, but when they broke it down using the system, some things cost very little to do while other jobs and requests cost a lot more. They found that in certain areas they were losing a lot of money giving customers what they needed because they weren’t pricing appropriately for it. That customer ended up tying the information back into the sales team’s commission structure: if a salesperson promised value-added work to a customer without pricing it accordingly, it hurt their commission; on the flip side, if they priced it accordingly, they made additional commission. A good understanding of what each job cost helped them price it accordingly as well.

In other areas, if you look at a multitenant 3PL, they might know that it costs a dollar per unit on average to pick an order, but some customers might be more labor-intensive, or their order sizes might be much smaller, et cetera, so the cost mix can be very different from one customer to another. If you just go with overall averages, you end up pricing one customer too high and another too low relative to what it actually costs to serve them. Having that cost-to-serve visibility helps you make those decisions and price accordingly. It also helps you identify opportunities to drive process improvement within an operation: if certain areas are costing a lot more than you think they should, you can focus on those areas.
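The mispricing that comes from blended averages is easy to see in a small sketch. The tenant names, volumes, and the one-dollar blended price below are assumptions for illustration only:

```python
# One blended price per picked unit, applied to tenants with very different
# labor intensity. All numbers are invented.

HOURLY_RATE = 18.00
BLENDED_PRICE = 1.00  # same price for every tenant (assumption)

tenants = {
    # tenant: (units_picked, labor_hours) -- small orders need more touches
    "BigBox":   (20_000, 700),  # large, easy orders
    "Boutique": ( 4_000, 320),  # small, labor-intensive orders
}

for name, (units, hours) in tenants.items():
    cost_per_unit = hours * HOURLY_RATE / units
    margin = (BLENDED_PRICE - cost_per_unit) / BLENDED_PRICE
    print(f"{name:9s} cost ${cost_per_unit:.2f}/unit vs "
          f"price ${BLENDED_PRICE:.2f} -> margin {margin:6.1%}")
```

The blended price overcharges the easy tenant and deeply undercharges the labor-intensive one, which is exactly the distortion per-customer cost visibility removes.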

The final thing on cost to serve, I’d say, is really understanding your direct costs versus your indirect costs. What we find a lot of times is that companies have really no idea how much money they’re spending in those indirect buckets, whether it’s waiting on work, sweeping floors, meetings, quality control, whatever it is. Once you have a real understanding of what those areas are costing you, you can manage them accordingly. A lot of times, companies can drive 5 or 10 percent out of their cost structure just by actively managing those indirect cost centers.

That was cost to serve. If you look at it from the accountability side, what we find within operations is that if you compare your top 20 percent of employees to your bottom 20 percent, once you’re able to hold them accountable against fair standards, you see a lot of variation between them. Being able to see on a daily basis how employees are performing, and to remove all the uncertainty associated with product mix so that you can actually hold employees accountable, means you can help them understand what the expectations are and what the best practices are, and get them up to that level. Taking the bottom, say, 50 percent of employees and helping them reach acceptable standards can really drive cost out of the system as well.

The reason I say it’s so critical from the standards standpoint is that a lot of companies will look at lines per hour or cases per hour or some other single-unit metric. The problem is that if I hold you accountable and say, “Your lines per hour were too low yesterday,” you can say, “Yeah, but I had a customer with a very big order, so I was picking a lot of cases for every line I processed; it’s not fair to hold me to that standard.” That employee might have a very legitimate excuse, and then it becomes very hard to hold anyone accountable.

A fair standard looks at more than a single-unit metric: it might account for cases and lines and orders and locations and travel, et cetera, so that it doesn’t matter what order or what product mix you’re given. You get a fair amount of time to do the work, I can compare how you performed against how long it should have taken, and now we can have very constructive, meaningful discussions about performance. For any company that feels there’s opportunity to drive productivity higher and drive down its unit cost of labor, having those accountability tools is going to be very important.
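Here is a minimal sketch of what such a multi-variable standard might look like. The per-driver time weights are hypothetical placeholders; Dean describes deriving them from data rather than picking them by hand (see the fitting sketch later in the interview):

```python
# A multi-variable labor standard: earned time is a weighted sum of the
# job's drivers, so employees with different product mixes are each
# measured against their own workload. Weights below are placeholders.

WEIGHTS = {"cases": 0.20, "lines": 0.50, "orders": 2.0, "travel_ft": 0.01}

def earned_minutes(work: dict) -> float:
    """Time the job *should* take given its actual mix of drivers."""
    return sum(WEIGHTS[k] * v for k, v in work.items())

shifts = [
    # (employee, drivers for the shift, actual minutes worked)
    ("Ana", {"cases": 900, "lines": 150, "orders": 12, "travel_ft": 8000}, 360),
    ("Ben", {"cases": 200, "lines": 400, "orders": 60, "travel_ft": 15000}, 420),
]

for name, work, actual_min in shifts:
    std = earned_minutes(work)
    print(f"{name}: earned {std:.0f} min, actual {actual_min} min, "
          f"performance {std / actual_min:6.1%}")
```

Because each employee earns time for the mix they were actually given, the “I had a harder job” objection goes away and the performance percentages become comparable.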

The last area we talked about was pay-for-performance. Back in 2008 and 2009, a lot of companies were going through the recession and there wasn’t much interest in pay-for-performance; they figured their employees were lucky to have a job. Now, six or seven years later, companies haven’t been able to raise employee wages very much, and there’s a lot more pressure in the labor market, so they’re having a harder time recruiting new people and holding on to good employees. They want to reward them with additional pay, but their cost structure doesn’t allow them to simply increase their cost of labor.

A pay-for-performance system lets them set goals for the employees; when the employees achieve those goals, they save the company money, and the company and the employees share in the savings created. It really is a win-win: the employees make more money, often 20 to 35 percent more compensation than before, and the company’s cost per unit actually goes down even though it’s paying employees more, because the employees are doing more per hour worked. For any company that needs to drive down the cost of labor, wants to incentivize its employees more, and wants a fair way of rewarding top performers for hitting stretch goals, a pay-for-performance system is a pretty powerful approach.
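A hedged sketch of the gainshare arithmetic follows. The wage, baseline productivity, and 50/50 split are assumptions for illustration, not Easy Metrics’ actual formula:

```python
# Gainshare sketch: an employee who beats the baseline standard splits the
# labor savings with the company. All parameters are assumptions.

BASE_WAGE = 16.00     # $/hour
BASELINE_UPH = 100    # units per hour at the pre-program standard
GAINSHARE = 0.5       # employee share of the savings created

def shift_pay(units: float, hours: float) -> float:
    base_pay = BASE_WAGE * hours
    baseline_hours = units / BASELINE_UPH        # hours the work "should" take
    savings = max(0.0, (baseline_hours - hours) * BASE_WAGE)
    return base_pay + GAINSHARE * savings

# An employee running 30% over standard: 1040 units in an 8-hour shift.
units, hours = 1040, 8.0
pay = shift_pay(units, hours)
print(f"pay ${pay:.2f} (${pay / hours:.2f}/hr), "
      f"cost/unit ${pay / units:.3f} vs baseline ${BASE_WAGE / BASELINE_UPH:.3f}")
```

In this toy case the employee’s effective wage rises from $16.00 to about $18.40 an hour while the company’s labor cost per unit falls from 16.0 to about 14.2 cents, which is the win-win Dean describes.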

Do you have any final recommendations on how this can be done effectively?

The biggest thing, when you’re looking at any of these programs, especially accountability and pay-for-performance, is that fair labor standards are really the crux of it, as we talked about. If you don’t have fair labor standards, as in my earlier example, you end up with a lottery ticket: I’ll hit a bonus if I happen to get an easy job, and there’s no way I’ll get a bonus if I get a hard job. You’ve got to eliminate that uncertainty and variability, and the way to do that is to get to fair labor standards using multiple metrics.

The key there is how you determine, in a very cost-effective manner, what those labor standards should be. There are two different approaches. The traditional approach is to bring in industrial engineers with stopwatches; they watch, let’s say, eight hours of work being done on the floor, take that very small snapshot of work, and engineer what they think the labor standards should be. The downsides of that approach are that it’s very costly (a typical facility might cost $100,000 to get engineered labor standards) and that the standards become obsolete over time, so you need to bring the engineers back periodically, which has its own cost. The final downside is that they only see a very small snapshot of what’s going on out there.

What we’ve seen a lot of times is that the type of work being performed and the product mix change, and different things happen that you might not see in your one day of watching the process. We’ve taken a slightly different approach. We still work with engineers when our customers want to, but we use a big-data correlation model: as we pull data into the system, we might get hundreds of thousands of data points for a process being performed.

You can model that very inexpensively and see: okay, when this work is done with these different variables, here’s how long it has taken different people to do it. The system tells us how to weight those variables to get the variance as tight as possible, and then you just decide how much of a stretch goal over the current level of productivity you want to set the standard at. You can take a given increase over current productivity and share the savings, or you can look at what the top 20 percent of employees are doing and set the standard based on that.
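The data-driven fitting Dean describes can be approximated with an ordinary least-squares regression of observed job times on the job’s drivers. The synthetic data below stands in for the hundreds of thousands of real observations he mentions, and the recovered weights play the role of the standard’s per-driver times:

```python
# Fit labor-standard weights from historical observations instead of
# stopwatch studies. Synthetic data; the fit should recover the "true"
# per-unit minutes used to generate it.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
cases  = rng.integers(50, 2000, n)
lines  = rng.integers(10, 500, n)
orders = rng.integers(1, 80, n)

# Observed job times: 0.2 min/case + 0.5 min/line + 2.0 min/order + noise.
minutes = 0.2 * cases + 0.5 * lines + 2.0 * orders + rng.normal(0, 15, n)

X = np.column_stack([cases, lines, orders])
weights, *_ = np.linalg.lstsq(X, minutes, rcond=None)
print("fitted minutes per case/line/order:", np.round(weights, 3))

# Re-running this fit every few months on fresh data keeps the standard
# from drifting out of date, which is the maintenance step Dean describes.
```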

The nice thing about that approach is, one, it’s inexpensive; two, it gets you very tight labor standards, because you’re letting the data drive where those standards should be; and three, on an ongoing basis it’s very easy, say every six months, to go back, use all the additional data you’ve collected, and reoptimize, continually tightening the standards so they don’t become obsolete or unfair. You keep them nice and tight, with a high correlation, and you’ve eliminated 99 percent of the cost of setting up and maintaining labor standards.

With so much data now available, this is a new approach to developing standards in a way that’s very fast and very cost-effective. Ideally you still have engineers involved: they can use the big-data approach to identify which areas to focus on and then apply their expertise on a given process to help optimize how those processes are run. It really is the best of both worlds.

Thank you, Dean, for sharing today.

Thank you very much, Dustin. I appreciate talking with you again.
