Focused TQM and Synergy: A Case Study
by Dan Trietsch*
September 1992
* This is a reproduction of Working Paper AS-92-06, Naval Postgraduate School, Monterey, CA 93943 (which is out of print).
Abstract

This paper describes an extended TQM framework, illustrated by a short case study. TQM is often associated with leading and motivating people. But one must also have tools and measurable objectives. Industrial engineering concepts such as SMED (single minute exchange of die), Poka-yoke (mistake proofing), group technology and total preventive maintenance help provide these. But these methods are not usually taught as part of TQM. Furthermore, applying TQM across the board without any priorities is not conducive to quick results; and slow results may discourage the use of TQM. TOC (theory of constraints), originally developed as an alternative to TQM, can provide the necessary focusing and prioritization, and is thus a valuable complement to TQM. The case study demonstrates the combined power of these methods, and the synergy between them.
Introduction

This paper describes an extended TQM framework, illustrated by a short case study. Since TQM is about quality, it is appropriate to begin with a brief discussion of what quality is. A broad definition of quality is the prevention of waste. Waste includes scrap, unnecessary efforts, unnecessary delays, and activities that do not add value (such as storage). This is a very powerful definition in the sense that it highlights the economic value of pursuing quality, and makes a distinction between "luxury" and "quality"; an item may be luxurious and of low quality if there is a lot of waste associated with its production and use, or it may be pedestrian, but of high quality. "Minimizing the Taguchi loss function" [Taguchi, 1986] is a version of this definition. Using this definition, we may want to rephrase Crosby's "Quality is Free" as "Lack of Quality is Expensive." Other definitions are in terms of product/service characteristics. For instance: Deming's "What the customer needs, at a price the customer is willing to pay," or Juran's "Fitness for Use." Here we distinguish between:
- Quality of conformance (which happens to be Crosby's rather limited definition of quality)
- Quality of design and performance.

It is certainly wasteful to produce out of tolerance, so prevention of waste includes quality of
conformance. It is also wasteful to produce items nobody wants, and here's where quality of design and performance comes in. Thus, the definition of quality as prevention of waste encompasses the latter definitions. Other types of waste are non-value-adding activities such as spending time and energy on setting up jobs, or on transporting jobs over large distances, not to mention time and energy spent on rework and on inspection. Hence, TQM's objectives should include the elimination of these types of waste. Our extended TQM framework is based on Deming's teachings [Deming, 1986], with the addition of other improvement ideas. Deming's TQM speaks mainly to the need to change the system to one that promotes quality. Deming's framework traditionally includes the following building blocks:

- Deming's 14 Points and removal of "deadly diseases": designed to create a working environment that fosters quality.
- The use of the PDCA cycle (plan; do; check; act): to introduce a never-ending cycle of proven process improvements.
- System Approach: to prevent suboptimization.
- Profound Knowledge: a necessary condition for improvement, since one cannot improve without understanding the existing process and how various control variables impact it.
- Control Charts: as a means to monitor processes, provide information on whether they are in control, and, if so, what the process capability is.
- Process/Product Design, or off-line quality control: for instance, Taguchi's methods fall into this arena.
- Inspection Policy: abolish sampling items for accept/reject decision purposes; instead, determine whether all items should be inspected, or none at all (but Deming condones sampling a process for the purpose of control charting).

In conclusion, one can say that Deming's TQM is a system comprising management theory
(how to create a quality-fostering organization), statistics (how to measure and monitor quality), and theory of knowledge (how to learn continuously so as to be able to improve continuously). This is a potent combination, as the success of many companies that follow Deming proves. The Deming framework, however, though stressing a system view and prevention of suboptimization, does not say exactly how to achieve these ends. Furthermore, Deming does not really teach how to actually improve a process. Traditionally, what is taught for this purpose is the use of graphic tools such as flow charts, run charts, control charts, check sheets, histograms, cause-and-effect diagrams and Pareto analysis. These tools are powerful indeed, but their main power is diagnostic rather than prescriptive. There is a saying that identifying the problem is half the solution, but the traditional TQM framework fails to provide the other half. Therefore, in the opinion of this author, Deming's framework needs to be augmented with focusing methods and prescriptive improvement methods: the subject of this paper. Indeed, most of the companies that implement TQM successfully employ additional improvement methods that do not fall within the framework described so far. Focusing methods are less prevalent, however, with the exception of the use of Pareto analysis to focus defect-reduction efforts. Such a focusing method exists, but it is not usually associated with TQM. In contrast, it is
presented as an alternative to TQM. This paper argues that it is most valuable as a complement to TQM. The result is an extended TQM framework that includes focusing and improvement techniques, and is thus more complete. We call the extended framework Focused TQM. Perhaps the most potent improvement ideas were developed by Shigeo Shingo (nicknamed in Japan "Dr. Quality"), and may be classified as industrial engineering ideas. Other industrial engineering ideas are also a useful part of the extended system. As for focusing, operations research techniques may come in handy, but one must admit that they were marketed best by an adversary of operations research. Let us elaborate a bit on these methods, and how they fit the extended TQM framework. We start with Shingo's industrial engineering process improvement methodologies:
- Single Minute Exchange of Die (SMED): designed to compress setup time from (typically) hours to minutes [Shingo, 1985]. The name implies that the setup time should be reduced to between 0:00 and 9:59 minutes, i.e., the number of minutes should be a single digit. Once that goal is achieved, the next goal is One Touch Exchange of Die (OTED). For a brief description of the SMED system, see Appendix B.
- Poka-yoke (mistake proofing): designed to improve processes by eliminating the possibility of any defects that can be anticipated, mainly by utilizing low-cost gadgets to monitor the process and prevent it from proceeding unless everything is in place [Shingo, 1986]. This makes possible the actual attainment of zero-defects (a very dangerous slogan; a very nice achievement).

Other industrial engineering methods that fit the extended framework of TQM are:
- Group Technology: designed to reduce the waste involved in transporting materials over long distances that is typical in plants with functional layouts. This is often associated with flexible layouts, where machines are equipped with casters and cells are created and dismantled at short notice, as per the emerging needs.
- Total Preventive Maintenance: designed to eliminate equipment failures by monitoring equipment performance and anticipating breakdowns.

These industrial engineering contributions, taken together (with SMED leading the way),
make possible non-stock production [Shingo, 1988], also known as Just-in-Time (JIT). The idea is that with short setups and short transportation distances we can cut down the size of batches, and with good quality built in and reliable equipment we do not need as large a safety stock as otherwise. Deming's TQM, when combined with these industrial engineering concepts, is the system
many successful quality leaders have implemented. The quintessential example is Toyota, where Shingo's SMED system was crystallized. Toyota has implemented practically all the elements mentioned above. Toyota's continued success with these methods is almost assured, because by now practically all its employees are well trained in them, and they apply them continuously everywhere. When one looks at the extended system presented here, one may ask: what is the criterion for including an improvement methodology in the system? After all, there is a myriad of improvement methods that are not included, such as robotics, automation, computerized management systems and so on. While some of these methods may be used by successful companies that are ready for them and for which their contribution is positive, they do not meet one crucial criterion: synergy. The methods picked were those judged to interact synergistically with each other. Thus, a method that may help one part of the system at the expense of other parts would not be included. For instance, SMED directly helps the area in which it is implemented, eliminating waste of time and money, and thus, by definition, increasing quality. At the same time, the application of SMED increases the quality of conformance (because it eliminates the need for adjustments that cause unnecessary variations), reduces the lead-time to internal and external customers, and, last but not least, makes possible smaller batches. Thus, SMED meets the synergy test with flying colors. Poka-yoke not only reduces waste associated with the quality conformance problems it prevents, but at the same time it reduces the need for safety stocks, and thus helps reduce lead-time to customers: synergy again. Group Technology reduces transportation distances, i.e., waste. At the same time it makes possible easy paper-less transfers of items from one resource to another one-by-one.
Transferring items one-by-one promotes higher quality simply because the internal customer has a chance to alert the internal vendor in case of quality problems while something can still be done about it. Thus group technology not only reduces transportation and paperwork waste, but at the same time it reduces lead-time and creates unexpected quality benefits: synergy again. Finally, preventive maintenance not only reduces waste involved in unnecessary breakdowns: it also reduces machine vibrations and makes it possible to hold tighter tolerances, thus increasing quality of conformance synergistically. In contrast, other methods of improvement may be less synergistic. Computerized systems, such as MRP II, often impact the whole organization in a negative way. Even a seemingly inherently
benign improvement idea, such as creating an OR department, may cause more harm than good if it bogs the organization down in the task of measuring all the details that are necessary to implement typical OR applications. Hence, even methods that may be successful for some companies may not be included in the extended framework. This does not imply that one should not use such methods under any circumstances; just that one needs to be careful to make sure that the result will not be detrimental to the organization as a whole. Since this author considers himself part of the OR community, this statement is not a pleasant one to make. A more pleasant remark is that OR has an important function in our extended framework with regard to focusing. We proceed to elaborate this point further. Latecomers to the quality race do not have the luxury of an organization-wide quality culture and specific improvement expertise. For these companies it is sometimes difficult, and invariably demoralizing, to wait patiently for the necessary culture and knowledge to permeate the organization. In fact, some of them may go broke before they manage to achieve a competitive level. Often, a company will start implementing TQM, but retreat after a year or two when times are hard and TQM has not yet proved its value in terms of bottom-line results. What such companies need is a more focused approach to quality improvement. Such a focused approach will yield fast measurable benefits, and provide fuel for the continued overall TQM implementation. And here is where operations research (OR) could come in. In fact, this is where operations research does come in, but not necessarily led by the efforts of representatives of the OR community.
Instead, OR's contribution to focusing has been most successfully promoted by a self-professed OR adversary, Goldratt,[1] who reinvented mathematical programming and sensitivity analysis, albeit in an approximate heuristic format, and packaged them under the title Theory of Constraints [Goldratt, 1990]. TOC calls for (1) identifying the system's constraints (in practice Goldratt limits this to one major production constraint and, possibly, a few marketing constraints); (2) deciding how to exploit the constraints; (3) subjugating the rest of the system to supporting that decision; (4) elevating the system constraints; and, finally, (5) starting the cycle afresh (since after the system constraints have
[1] Goldratt is an OR adversary in the sense that he opposes using OR's basics, such as linear programming. For instance, see "The Haystack Syndrome."
been elevated new system constraints may emerge). Using TOC, it is possible to focus the initial TQM efforts on bottlenecks, and on resources that impact the performance of bottlenecks [Trietsch, 1992]. The advantage of this technique is that it is geared to achieving relatively quick bottom-line results, while at the same time providing the organization with a success to emulate. By focusing we imply that the people involved in the bottleneck operations will be the first to be trained, and will get the most attention and resources. But this does not imply that the rest of the organization should not be party to the TQM effort. In a typical organization, waste permeates the whole structure, so quality improvement in the sense of reducing waste should take place throughout the organization. Most such improvements require very small monetary investments; instead they are predicated on thoughtware. To release the organizational thoughtware power it is imperative to have the whole organization on board. Some projects, however, do require hardware investments, and these should be prioritized based on focusing analysis. To repeat, though Goldratt claims that TOC should replace TQM, here we suggest using it with TQM merely to focus activities, as discussed above. One should not fall into the trap of having staff specialists improve quality on bottlenecks while neglecting to get the whole organization on board the total quality bandwagon. Returning to TOC, while managers marvel at its simplicity and intuitive appeal, operations researchers recognize immediately that the first three dictums (identify the constraint, exploit it, and subjugate the rest of the system to its needs) are nothing more than a heuristic application of mathematical programming (Goldratt's examples are all solvable by linear programming). As for the last two dictums (elevate the constraint and reiterate), sensitivity analysis has always been based on the same idea.
Essentially, then, Goldratt has rediscovered mathematical programming (or, based on his examples, linear programming), and packaged it much better than the OR community ever did, at least in terms of management buy-in. Since the majority of optimization problems, e.g., linear programs, are solved optimally on the boundary of the feasible set, i.e., where some constraints are utilized fully, Goldratt simply confines the search to the set where the major suspected constraint is utilized fully.
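The "exploit the constraint" idea can be illustrated with the classic TOC product-mix heuristic: rank products by throughput per minute on the suspected bottleneck, then load the bottleneck greedily. The sketch below uses invented product names, prices, times and capacity purely for illustration; it is a heuristic stand-in for the linear program it approximates.

```python
# TOC product-mix heuristic (invented data): rank products by throughput
# (revenue minus material cost) per minute on the single bottleneck,
# then load the bottleneck greedily, largest ratio first.

products = {
    # name: (throughput per unit in $, bottleneck minutes per unit, weekly demand)
    "P": (45, 15, 100),
    "Q": (60, 30, 50),
}
bottleneck_minutes = 2400  # weekly capacity of the constraint (hypothetical)

# Sort by throughput per constraint-minute, the TOC ranking criterion.
ranking = sorted(products.items(),
                 key=lambda kv: kv[1][0] / kv[1][1], reverse=True)

plan, total_throughput = {}, 0.0
remaining = bottleneck_minutes
for name, (tp, minutes, demand) in ranking:
    units = min(demand, remaining // minutes)  # whole units only
    plan[name] = int(units)
    remaining -= units * minutes
    total_throughput += units * tp

print(plan, total_throughput)
```

With a single capacity constraint and simple demand bounds this ranking is essentially the continuous-knapsack optimum, up to rounding; with several interacting constraints it remains only a heuristic approximation of the underlying linear program, which is precisely the tradeoff discussed in the text.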
Nevertheless, Goldratt has done more than just repackage mathematical programming: he reminded management that it has to look at the whole system before attempting major improvement projects. Where there is a tradeoff between capturing the whole system approximately or a subsystem accurately, Goldratt comes down strongly in favor of the systemic approximate solution. In the opinion of this author, he's absolutely right on this point. One must admit that many existing OR applications have favored the exact local solution over the approximate global one, and in doing this our profession contributed to suboptimization as much as any group of people did. Furthermore, Goldratt has taken mathematical programming in a new direction, where almost no mathematics is needed. Thus he provided a fresh tool for system analysis that can be used by OR laymen and professionals alike. To recap, TOC preaches, in essence, that systems should be managed by concentrating on their constraints, which will ensure that the whole system is utilized well. This is in sharp contradiction to prevailing practice, where we concentrate on local areas and expect the whole to be optimized by virtue of the parts being optimized. This prevailing method leads to suboptimization of two types: mild and severe. Mild suboptimization occurs when resources are allocated among good projects in a manner that fails to maximize the total system benefits. Severe suboptimization occurs when a so-called improvement project helps part of the system but hurts other parts so badly that the sum total of all benefits is negative. Such severe suboptimization is much more prevalent than one might think. For instance, attempts to reduce inventory levels at the Department of Defense caused increases in repair lead times that are far more costly than the alleged savings; another example is buying from the lowest bidder, which requires no elaboration.
On a smaller scale, during recent visits to all naval shipyard machine shops, this author found the following examples, each at more than one shop: (1) getting rid of jib cranes in favor of central bridge cranes only to find that the new cranes, while good at lifting heavy items, are also a new bottleneck that machinists and machines waste much time waiting for; (2) centralizing the location of setup parts to reduce the cost of storage, but causing machinists to waste
much time in unnecessary transporting activities. The latter also caused a lack of ownership and, with it, a deterioration in the maintenance of these parts. In general, many severe suboptimization projects involve centralizing functions that should be decentralized: the exact reverse of group technology. Any serious quality improvement program must include prevention of severe suboptimization. The prevention of mild suboptimization is also important, but at a lower priority level. The criterion of synergy, which we employed to judge which improvement methods should be included in the extended TQM framework, is designed to prevent severe suboptimization. The use of TOC to focus improvements where they are beneficial to the system's throughput is another highly effective method for preventing severe suboptimization. Furthermore, it may also provide a first step towards elimination of mild suboptimization. TOC is not likely to prevent mild suboptimization altogether, however, due to the approximations involved in its application. Below we illustrate the broad use of TOC for the naval ship repair system. The purpose of this analysis is to promote decisions that avoid severe suboptimization. In Appendix A we discuss a more detailed model, designed to reduce the mild suboptimization involved in allocating resources between repair facilities and the rest of the navy.
Applying TOC to the Navy Ship Repair System

As discussed above, TOC espouses a cycle with five steps that should be repeated indefinitely:

1. Identify the system constraints.
2. Decide how to exploit the system constraints.
3. Subordinate everything else to that decision.
4. Elevate the system constraints.
5. If, in the previous steps, the constraints have been broken, go back to Step 1, but don't let inertia become the new system constraint.

Let us examine how TOC might apply to the Navy in general and to naval shipyards in
particular. The most important single constraint the Navy has and will always have is the budget.
The Navy's job is to maximize its readiness while staying within the budget. This implies maximizing the number of weapon systems that are ready to be deployed. Assuming the Navy is budgeted for a given number of ships, it follows immediately that the Navy should want these ships to be in good repair but not under repair. To do this the Navy should see to it that repair times at shipyards be decreased dramatically.[2] There is a complicating problem, however. Shipyards are not likely to be highly utilized in the foreseeable future, since the US Navy is downsizing. Suppose now that shipyards comply and reduce their repair time by, say, 33%. Unfortunately, there is no reason to believe that they will repair more ships as a result, because the total number of necessary repairs is limited. Hence, they will be even more idle than they are today. Thus, they will be under pressure to lay off the very workers that are needed to support the new short repair times. Furthermore, it is very likely that the optimal level of workers in each operating shipyard will have to be increased, to reduce the repair time even further (we discuss this point in Appendix B); but should that happen the shipyards will be even less utilized. The next natural step would be to close down more shipyards (one is already on the base closure list), thus increasing the utilization of the remaining ones. Looking at the problem from the point of view of the shipyard community, they are facing a very tough decision. As a group, should they comply with the need to reduce repair times successfully, they increase the danger of being closed down. As individual shipyards, however, their best course might be to reduce their repair times as much as possible, and hope that other shipyards would be less successful in this endeavor and would be the ones chosen to be closed down. That is, TOC leads to the unpleasant conclusion that the shipyards should be pitted against each other in a competition for survival.
Considering that downsizing is imminent, however, this conclusion may be more palatable. It just points to downsizing opportunities that will not impact the Navy's readiness.

[2] In the late fifties, Mitsubishi Nagasaki Shipyard held the world record for how fast it could deliver a [then] large tanker: 4 months. Compare this with 7 months elsewhere in Japan and in Germany, and 10 months in the United Kingdom. This did not satisfy them; they called in Shigeo Shingo, and he helped them reduce it further to 3 months and eventually to 2 months. It is highly unlikely that naval shipyards today cannot achieve proportional reductions if the world-record-holder could.
The problem can be ameliorated by looking for additional markets for the shipyards. But this is complicated by written and unwritten rules as to what the federal government is allowed to do in the market. To recap, the Navy can probably increase the number of ready-to-deploy ships for a given budget (or decrease the expenditures necessary to maintain a given number of ships) by taking the following actions:

(1) Streamline the ship repair processes (including better purchasing policies, working more in parallel, assigning workers with the correct skill level to the job,[3] and reducing waste by implementing SMED, group technology and preventive maintenance).

(2) Increase the number of workers in key trades on the critical path, so that the work package can be finished sooner.

(3) Either keep all shipyards open, for future contingencies, or close some of them down, thus achieving the largest possible savings. In the former case it is important to look for additional markets where the shipyards might compete.
Applying SMED to Machine Shops: A Case Study

The conclusion from the constraint analysis above is that as far as the overall Navy interest is concerned, shipyards should reduce the repair time of ships. Repairing a ship is a large project, and, as is usually the case with projects, it has a critical path of activities. A major sub-project on this critical path is the set of activities that have to be carried out while the ship is in dry dock. During this period, valves, pumps and other items are dismantled, repaired at the machine shop, and returned to the ship. In parallel, other parts, such as electrical components, are repaired at other shops. The machine shop repairs, however, are the longest, and therefore the machine shop is on the critical path of the ship repair. Typically, the machine shop requires about six months for its work package, while the other shops can finish their share in four months. Thus, the machine shop can cut up to two months off its repair time and still remain critical. Any further cuts must be accompanied by similar cuts at other shops to be of any overall benefit.

[3] This is one of the major methods by which Shingo was able to achieve the lead time reductions at Mitsubishi Nagasaki.

During the two months, however, in a sense, the machine shop drives the whole shipyard. Thus, the value of a machinist's day during the time the ship is in dry dock far exceeds the value usually associated with it. In fact, for one shipyard at least, the value of a machinist's day during the busy period is 60 mandays. Even when the fact that the machine shop is under-utilized more than 50% of the year is taken into account, we still find that the value of a machinist's day is between 20 and 30 mandays. Thus, another result of TOC analysis is that we are led to investigate possibilities to reduce the repair times at the machine shop in particular. The first idea that comes to mind is to add machinists (equipment is not a problem), but there are two difficulties involved with this solution: (i) the shipyards are under political pressure to do the opposite, and machinists have been laid off in the recent past instead of being hired; (ii) with the exception of recently laid off machinists who are still on the job market, good machinists are not easy to come by, and it takes a long time to train new machinists. In fact, one of the problems machine shops face at shipyards is that if they loan machinists to other shops for extended periods, many machinists elect to leave the machine shop for good: the conditions at the other shops are much more attractive. The conclusion is that we must try to improve the machine shop's operations with the existing machinists. This is where SMED comes in. Shipyards' machine shops have what can best be described as a custom job shop environment. SMED, however, as developed by Shingo, is mainly geared towards repetitive manufacturing, and focused mainly on setting up machinery between lots. In the machine shop environment, we wanted to use the same ideas for setting up individual jobs on machines.
But while most SMED applications are predicated on the fact that the same limited set of setups will be repeated again and again, at the machine shop there are many more different setups, which renders this approach impossible. Thus, this was not a trivial task. Indeed, at one shipyard the effort led to very few achievements. Hindsight reveals that the problem there was mainly lack of buy-in on the shop's part. The fact that this shipyard had been targeted for closure, and indeed was eventually selected to be closed down, may also have
had something to do with it. At another shipyard, however, where TQM was more entrenched, the task of trying to implement SMED was taken on by an enthusiastic team of machinists led by a highly effective general foreman. The result here was an unqualified success, which later led to adoption of SMED ideas at other shipyards as well. The key to this success was ownership by the people involved in the setups, combined with knowledge supplied from the outside (by this author). In contrast, in the first shipyard the project was in the hands of staff members (of the engineering persuasion), and not in the hands of the real process owners. The implementation process began with a formal presentation of the theory of SMED (see Appendix). Then, the team decided to measure how much time machinists spend on setups. The result was approximately 46% of their time, i.e., almost half their time was spent on non-value-adding setup activities. It now became clear how important it was to reduce the setup time. Furthermore, documented data could be presented to management in support of the necessary improvements. Pareto analysis of this time expenditure revealed two unexpected major culprits, above and beyond the setup itself. These were looking for setup materials and dealing with blunt cutting tools (either by sharpening them or by replacing them). In fact, these two activities, when taken together, consumed 26% of the machinists' time, while the setup itself took 20%. The team used the data to justify the purchase of much better cutting tools, which removed the second problem at a very minor cost.[4] Dealing with the first issue, eliminating search time, was more time-consuming; indeed, it is still in progress today. The team decided to reorganize the storage in the shop in such a manner that setup materials will become visible.
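The team's Pareto breakdown can be sketched as follows. The 20% figure for the setup itself and the 26% combined figure for searching and blunt tools come from the measurement described above; the 14%/12% split between those two culprits is a hypothetical illustration, since the source reports only their total.

```python
# Pareto breakdown of machinists' non-value-adding setup-related time,
# as shares of total work time. 20% (setup itself) and the 26% combined
# figure are from the case study; the 14%/12% split is hypothetical.
time_shares = {
    "machine setup itself": 20.0,
    "searching for setup materials": 14.0,    # hypothetical split of the 26%
    "dealing with blunt cutting tools": 12.0, # hypothetical split of the 26%
}

# Rank categories largest-first and accumulate, Pareto-style.
ranked = sorted(time_shares.items(), key=lambda kv: kv[1], reverse=True)
cumulative, table = 0.0, []
for category, share in ranked:
    cumulative += share
    table.append((category, share, cumulative))

for category, share, cum in table:
    print(f"{category:35s} {share:5.1f}%  cumulative {cum:5.1f}%")
```

The cumulative column makes the argument visible at a glance: the three categories together account for 46% of machinists' time, which is what justified attacking setups first.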
Not less importantly, a housecleaning effort transformed the way the shop looks, freed passageways, and unearthed many equipment items which had been lost for a long time. (When such an item is not found, the machinist, after wasting time looking for it, spends additional time to come up with a substitute solution. The result, often, is a jury-rigged setup that is not conducive to achieving quality of conformance.) In several cases, it was discovered that the shop had a glut of equipment far exceeding its needs, but it still used to take a long time to find items when necessary.

[4] In the federal government service one cannot obtain quality tools without a documented justification for them. Thus, until this effort, the cheapest tools were purchased, with the result that they added a lot of time both to setting jobs up and to executing them. Another synergistic benefit of the better tooling was better tolerance holding. All in all, the cheap tools cost the shipyard very much indeed!

The team investigated ways to reduce the setups themselves in parallel to improving storage and cleanliness. They knew that a major opportunity was transforming internal setup into external setup. Often, while machinery was working, the machinist stood by idly. If the machinist could set up the next job during the operation on the current one, a much higher utilization of both machine and machinist would result. Another avenue they were looking at was the use of one-turn fasteners and more efficient clamping methods. One commercially available solution for transforming internal setup into external setup is palletizing. Palletizing is a direct offshoot of Shingo's SMED, and it involves machine tools with built-in extra tables on which the new job can be set up while the current one is processed. When the current item is done, the tables are exchanged in a very quick operation, and the machine starts processing the next part. Then, the machinist can remove the old part from its table and set the next part on it. Palletizing, however, is limited to machine tools so equipped. It was definitely out of the question for the existing machinery. Equipped with an understanding of the SMED principles, a much cheaper and more flexible solution was found among the commercially available solutions. It involves the use of intermediary plates that can be attached to machine tables at predetermined positions with high accuracy. Now, a machinist can set a job on one such plate while the machine is processing a job set on another plate. In short, palletizing without expensive palletizing equipment.
This solution was developed originally at FMC, where it is used for repetitive manufacturing. It is highly applicable, however, in a job-shop environment as well. Incidentally, note that this system provides palletizing across machines, which may mean that fewer setups will be required for a part that has to be processed on more than one machine. The team decided to experiment with this solution. (The PDCA cycle approach prohibits wholesale commitment to a new solution without trying it first on a small scale.) Because of government purchasing lead-times this experiment took about a year to set up, but the result was an unqualified success, leading to adoption of this method on a much wider scale.
The search for better fastening devices led to another relatively low cost commercial application: modular fixturing. Modular fixturing is ideal for processing small non-recurrent batches, since it allows the initial setup for the first part to be utilized for the consecutive parts. It is a good system even for single items, however, because it provides good fastening and clamping gadgets. Traditionally, however, modular fixturing is performed directly on machine tables, to which a plate equipped to accept the various parts is permanently (or semi-permanently) attached. The obvious final touch was combining the use of intermediary plates with modular fixturing. Thus the modular fixtures are treated as intermediary plates that can be set up in parallel with the machine working.

Utilizing the system has provided some unexpected benefits as well. For instance, traditionally when milling long parts that have to be clamped by vises, one attaches two vises to a machine table, and then adjusts them to be parallel to the table. Under the intermediary plates system, in contrast, one attaches two plates pre-equipped with vises to the ends of the table, and the vises are already parallel to each other. Furthermore, their location is qualified (i.e., a numerically controlled machine can "know" where the parts are clamped). Eliminating the need to adjust not only saves much time, it also enhances quality of conformance.

Another unexpected benefit is that this system allows cheap preempting. With such a system in place, preempting may even become a recommended procedure. For instance, suppose we have some low priority jobs to do, and we have some slack time. We can start a low priority job without worrying whether a high priority job might arrive. When one does, we set it up on an intermediary plate, and only then, if necessary, stop the machine and shunt the low priority job aside for a while.
This makes possible better utilization of slack time without the usual waste associated with preempting. So far, the combination of the methods described above has brought about savings of approximately 20% of machinists' time, but the team expects savings of 33% once the new system is fully implemented. The value of the current savings exceeds one million dollars a year in that shipyard alone. Should they achieve the expected savings level of 33%, it would be equivalent to increasing the number of machinists by 50%, and the investment in hardware should be returned within two months.
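The equivalence between time savings and added headcount follows from simple arithmetic; the sketch below (plain Python, not part of the original paper) makes the claim explicit:

```python
# If each machinist saves a fraction s of his time, the same crew can do
# 1/(1 - s) times the work, i.e., capacity grows as if headcount grew by
# that factor.
def capacity_multiplier(savings_fraction):
    return 1.0 / (1.0 - savings_fraction)

# The expected savings of one third of machinists' time:
print(round(capacity_multiplier(1 / 3), 3))   # 1.5, i.e., like 50% more machinists
# The savings achieved so far, about 20%:
print(round(capacity_multiplier(0.20), 3))    # 1.25, i.e., like 25% more machinists
```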
When applied to the other naval shipyards, which have now decided to follow suit, the expected direct savings will be several million dollars per year. But, potentially, this is just the tip of the iceberg. Recall that machinists are on each ship's critical path through the shipyard. This means that there are extended periods when the whole shipyard is "driven" by the machinists. Thus, saving 20% of machinists' time gets multiplied several fold in terms of indirect savings across the whole shipyard, as discussed above. Note that we do not anticipate savings of 33% at all shipyards, for two reasons. First, some of them started off with better storage of setup materials, so they do not have as much waste to eliminate in this arena. Second, since they do not own the improvement process to the same extent that the first shipyard does, it is not likely they will reap the full benefit.
Conclusion

As we have seen, the use of TOC indicates that it is very advantageous to improve the machine shop processes at shipyards. We've also seen that SMED ideas made possible very significant improvements of this type. The potential value of applying these ideas at all shipyards, a process that is under way now, is almost staggering: we're talking about direct savings of several million dollars per annum, leveraged several fold because of the criticality of the machine shop operations in the ship repair process.

Unfortunately, we cannot conclude this paper on this optimistic note. In reality, there is always a danger that because of the improvements machinists will be let go, or at least their number may be allowed to attrite. So this achievement is predicated on management's understanding of management by constraints (which would lead them to keep the machinists under the present circumstances). But while individuals in the system may understand this message, at least intuitively, one cannot say that it is a foregone conclusion that the decision makers in the system share that understanding. In fact, their past performance shows a lack of such understanding: jobs
were eliminated across the board without regard to their criticality, or to how difficult it might be to hire or train replacements in the future. Thus, the results that are due to leveraging are in real jeopardy.

With the Navy downsizing, it can be shown that it would be better to maintain some shipyards at high manning levels while allowing a fraction of them to be closed, rather than keep all of them open but with low manpower levels. To see this requires the realization that the cost of repairing a ship far exceeds the price tag: in reality it includes the value of having the ship out of commission for the duration of the repair. In short, one has to take into account the value of shorter repair times. And shorter repair times can be achieved by process improvements as well as by maintaining enough capacity in each operating shipyard. The possible savings to the system under this scenario are potentially enough to reduce the Navy budget by 3% to 6% while keeping the level of readiness constant. This would be achieved by shortening the repair time of ships, thus requiring fewer of them in the fleet, as well as by the savings associated with closing some shipyards. Nevertheless, even if every decision maker in the system were to understand and accept this, the politics involved in such a decision will probably prevent it from being pursued. The next best thing is to keep all shipyards open, with high manning levels at the critical trades. But this would lead to low utilization and another set of political problems. Thus, the really big savings are not likely to be pursued in the near future.

Another related issue that we did not address has to do with a current constraint that can and should be removed: purchasing. To cut lead-times we have to purchase parts from JIT suppliers. This has a bright side and a dark side.
The dark side is that the unit price may be higher, and that Congress may object (preferring to suboptimize by getting the lowest price, however long it takes and however shoddy the product may be). The bright side is that the better quality expected is likely to pay for itself several times over.
References

Deming, W.E. (1986), Out of the Crisis, MIT Center for Advanced Engineering Study.
Goldratt, E.M. (1992), The Haystack Syndrome, North River Press, Inc.
Goldratt, E.M. (1990), Theory of Constraints, North River Press, Inc.
Shingo, S. (1988), Non-Stock Production: The Shingo System for Continuous Improvement, Productivity Press.
Shingo, S. (1986), Zero Quality Control: Source Inspection and the Poka-yoke System, Productivity Press.
Shingo, S. (1985), A Revolution in Manufacturing: The SMED System, Productivity Press.
Taguchi, G. (1986), Introduction to Quality Engineering: Designing Quality into Products and Processes, Asian Productivity Organization.
Trietsch, D. (1992), Some Notes on the Application of Single Minute Exchange of Die (SMED), Technical Report NPS-AS-92-019, Naval Postgraduate School, Monterey, CA 93943.
APPENDIX A: A BRIEF OVERVIEW OF SMED

This appendix is based on Shingo (1985). Shigeo Shingo developed SMED for the repetitive manufacturing environment, where production is in lots. There, the advantage of SMED is that it makes possible smaller lots (e.g., lots with fewer than 500 units, in contrast to more than 10,000). Shingo defines setup time as the time elapsed from the moment the machine finishes the last item in one lot until it starts the first good item of the next lot and is ready to continue without further adjustments. The time to set up each unit, if any, is not part of the setup according to this definition. A possible reason for this is that for presses and many other mass production machines the time to switch from one unit to the next, in the same lot, is often negligible. Changing over from one lot to another, however, is often very time-consuming.

In machine job shops, in contrast, it often takes a long time to set up each unit even if it is part of a lot. Fortunately, the same principles that make possible fast changeovers from one lot to another often make it possible to switch products within the same lot faster. Therefore, we need not make a conceptual distinction between setting up a machine for a new type of part (Shingo's setup definition) and changing a part on the machine in the middle of a lot. Of course, these two differ in the details, so they are not identical setups. In conclusion, the time the machine is idle while changing over from one part to the next will be the setup we're concerned with, whether the parts belong to the same lot or not.
The Conceptual Stages of SMED

Shingo identifies four stages in SMED application:

0. Preliminary Stage: Internal and External Setups are Mixed.
1. First Stage: Separate Internal and External Setups.
2. Second Stage: Convert Internal Setup to External Setup.
3. Third Stage: Streamline Both Internal and External Setups.

We now discuss these four stages in more detail.
0. Preliminary Stage: Internal and External Setups are Mixed.

SMED differentiates between internal setup and external setup. Internal setup consists of setup operations that can only be done while the machine is stopped. Such operations may include mounting or removing dies and fixtures. External setup includes setup operations that can be done while the machine is running. Operations like transporting dies and fixtures to and from the machine and ensuring the correct tools and parts are on hand and functioning before changeover are all examples of external setups. During the preliminary stage no effort is invested in differentiating external and internal setup, so external setup is often performed while the machine is stopped.
1. First Stage: Separate Internal and External Setups.

Recall that we defined setup time as starting when the last good product is completed by a resource and ending when the first good product is produced consistently, without further adjustments. It becomes clear now that setup time cannot be less than the internal setup, and should not be more than the internal setup. Therefore, care should be taken to perform all external setup while the machine is running (either before or after the changeover). For example, transporting the new die to the press should be done before the changeover begins, and transporting the old die to storage should be done after the changeover. These two activities are both external setups. If the operator cannot do these jobs while the machine is running, it may be justified to assign another person to do them. Typically, the first stage requires very small investments in hardware, if any. Instead, like any other improvement effort, it requires thoughtware, i.e., brain power. (Fortunately, brain power, though precious, is a potentially ubiquitous commodity. It just needs to be unleashed, and under TQM and modern management in general this has already started to happen.) The benefit, again typically, is a savings of about 50% of the setup time.

2. Second Stage: Convert Internal Setup to External Setup.

Once no external setups are performed while the machine is waiting, a good way to reduce the setup further is to analyze the internal setup carefully and see which activities can be transformed to external setup. For example, Shingo cites a case where, by using an extra table for a planer, the new job could be set up while the old one was running on the other table on the machine. Then, the tables were exchanged in internal setup. This may require a small investment in hardware (an extra table and a crane), but it can alleviate the need for an extra planer (compare this to palletizing). Other examples: preheating a die that was usually heated as part of internal setup; storing vacuum in a large tank and imparting it to a setup by opening a valve.

3. Third Stage: Streamline Both Internal and External Setups.

So far, all we've done was to shift work from the time the machine is down to time when the machine is running. We did this by separating internal and external setups, and by transforming internal setups to external setups. Nevertheless, though the setup itself may be cut significantly, the total amount of work may not be reduced. Thus, if we intend to increase the number of setups, we may find that we don't have enough manpower to perform the external setups. Viewed from another angle, lots have to be at least large enough to last while the external setups are performed. At the third stage we reduce the total setup effort by streamlining all setup activities, both internal and external. Examples: avoiding the need for adjustments by providing stops that locate the new die exactly at the right spot; using dies with standardized external dimensions that avoid the need to adjust the stroke of the press; using one-turn fasteners instead of traditional nuts and bolts, which require many turns; using rollers and pushing the old die out by inserting the new die. Shingo cites mechanization as a fourth stage, but says that it is very rarely justified. According to Shingo, one can typically reduce setups from up to 24 hours down to three or four minutes without any mechanization. Then, mechanization can gain another minute or two, and reduce the setup to the range of one to three minutes. But the investment is not likely to be justified
when viewed this way. Usually, however, we cite the whole reduction from hours to minutes as the justification for mechanization, and this is simply not correct. To get a fair comparison, one should compare the best solution available with mechanization to the best one available without it. Furthermore, when one starts a mechanization project without streamlining the old process first, one is not likely to achieve the best results with mechanization either. In such cases it's likely that mechanization will take the setup down to a level higher than could have been achieved without it. This is known as "paving the cowpath," and it has been the downfall of many mechanization and computerization projects.

Note that the second and third stages can actually be done in parallel. That is, it is not necessary to transform internal setup to external setup before streamlining some setup activities. Nevertheless, the conceptual order implied by the stages as presented above is good: first, shift as much work as possible to times when the machine is running (stages 1 and 2); then, when we have a more rational process, concentrate on detailed small improvements.
APPENDIX B: A SYSTEM OPTIMIZED MODEL FOR SHIPYARDS

This appendix is based on rough numerical data, with the objective of illustrating the points. Exact data were not available to this author at the time of this writing.

Taking the Navy's system as our total system, one may say that the Navy budget includes a percentage (say 30%) of overhead, and that the rest are costs that vary proportionally with the number of ships. Assume further that the air wing is considered separately. One would still come up with a very significant cost per day per ship, to the tune of several hundred thousand dollars a day. Let us assume, for an average ship, that we are talking about $10 million per month. Let us assume further that the cost of overhauling such a ship is $100M, and the required availability (i.e., the time the ship is available to the shipyard) is 20 months. So, the real total cost of this overhaul to the Navy is $300M. If the shipyard can overhaul the ship in 12 months, but asks for an increase of less than $80M, this is still a profitable proposition for the Navy.

Why should the cost be more for a shorter availability? First, as we'll show below, it's likely we'll need to keep the manpower level at its present level or higher, to support the shorter availabilities required. So, somebody must keep the present manpower paid (and maybe an even larger work force). Unless the Navy elects to close down some of the shipyards, this implies that the shipyards will idle for long periods, and therefore will have to charge proportionally more per manday (i.e., the labor of one person for one day) when busy. If we do not actually increase the work force, however, this will only bring the cost back to the original level, because basically it will allocate the same costs to the same availabilities.
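The overhaul arithmetic in this example can be written out directly; here is a minimal sketch in Python (not part of the original paper; all figures in $M are the text's illustrative numbers, not real data):

```python
# Total cost to the Navy = price tag of the overhaul plus the cost of
# having the ship out of commission: (monthly cost per ship) * (months
# of availability). Illustrative figures from the text, in $M.
MONTHLY_COST = 10  # $10M per month for an average ship

def total_cost(price_tag, months):
    return price_tag + MONTHLY_COST * months

baseline = total_cost(100, 20)   # $100M overhaul, 20-month availability
print(baseline)                   # 300

# A 12-month overhaul remains profitable for any price increase below
# $80M, since the break-even price tag is $180M:
print(total_cost(180, 12))        # 300
```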
Nevertheless, it is also likely that the shipyard will have to purchase items based on quality and delivery time more so than based on price tag, which would mean that some expenses might be higher. Such higher expenses should pay for themselves, at least in terms of increased readiness (by reduced availability), or we would not incur them; but they could increase the out-of-pocket portion of the total cost. Thus it is a distinct possibility that the price tag for each repair will be increased. Be that as it may, to induce the shipyards to bid to the best advantage of the system as a whole, they will now have to bid a price, P, and a lead-time, T, such that P+10T will be minimized. T probably has some kind of (perhaps temporary) technological lower limit, but basically, for a given level of inherent waste in the production system, we may assume over a wide range that T is inversely proportional to the manpower in the shipyard, and part of P is directly proportional to the same number (because we have to pay them all year round). For our shipyard, let us assume that we have 6000 workers (not counting management--which we'll treat as fixed overhead), and its throughput is 4 ships a year. Hence each ship has to support the cost of 1500 people for one year, or $75M. It also has purchases and overhead associated with it of $25M. Hence, today, at 20 months availability, the cost is $100M. Assume now that we achieve the level of efficiency where the only short-term way to reduce the lead-time further is to put these mandays in the ships faster. Such conditions lead to a cost function of the form P=A/T+B-C⋅T, where A, B and C are positive constants. For the example above we'll specify: P=1500/T+45-T. Here, 1500/T represents the manday cost, as an inverse function of T, the repair time; if we decrease T we have to hire more people (so we can put in the necessary number of mandays during a shorter period), and indeed the element 1500/T will increase. 
The fixed cost, 45, may represent the ship's share of the shipyard's overhead, plus purchases required for the ship, at the lowest price but not necessarily quickest delivery. The element -T indicates that some costs (notably purchasing and raw materials
inventories) may go up to make it possible to decrease T. For T=20 this yields 100, as required. For shorter T values we have to pay more for purchasing (perhaps) and inventories. For instance, for T=10 the result is P=185. Now we need to optimize as follows:

min{P+10T} = 1500/T + 45 + 9T

This yields 1500/T² = 9, or T=13, approximately. At this value P+10T is approximately 277.5, as compared with 300 before (at 20 months). This solution implies a work force increase by a factor of 20/13, i.e., to slightly above 9000. Of course, what we really want to do is to reduce the availability with the same number of workers first, and then check whether we need to increase or decrease their number. But analysis of the type discussed above indicates that reductions in force at shipyards may very well be counterproductive (suboptimization of the severe type).

To get some more mileage out of this example, let's see what would happen if we could meet the lead time of 13 months with the workers that are available. This is equivalent to saying that we reduce the required mandays by 35% (13/20=65%); such reductions are feasible if we eliminate just a fraction of the waste that exists today due to overly complex processes and purchasing-induced problems (including long lead time and low quality). That would imply that the cost charged would be 107 (the additional 7 due to increased purchasing costs). The function to minimize then will be:

min{P+10T} = 975/T + 45 + 9T

at a total cost of 237 for 13 months. Optimization yields T² = 108.3, or T = 10.41. That is, the number of workers should still be increased by a factor of 13/10.41. The cost at this level will be 232.35, i.e., 77% of the original cost of 300, but with an overhaul price tag of 128.26 instead of 100. Even if we can get the supplies for the same price as before, we'll still have to pay 118.675, because we increase the number of workers. Incidentally, for this particular numerical example one has to reduce the required manday input to 42% of the original before the number of existing workers would exceed the optimum. But note that at this level the utilization of the shipyards will drop to 42% of its current (already low) level. That is, based on these rough figures it is highly likely the shipyards have too few workers, regardless of their low utilization.
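As a check on the arithmetic above, here is a minimal sketch in Python (not part of the original paper; the closed-form optimum T* = sqrt(A/9) follows from setting the derivative of A/T + 45 + 9T to zero):

```python
import math

# Objective: P + 10T = A/T + 45 + 9T (all figures in $M), where A is the
# manday cost parameter: 1500 originally, 975 after the 35% reduction in
# required mandays.
def total(A, T):
    return A / T + 45 + 9 * T

def t_star(A):
    # d/dT (A/T + 45 + 9T) = -A/T**2 + 9 = 0  =>  T* = sqrt(A/9)
    return math.sqrt(A / 9)

# Current manday requirement:
T1 = t_star(1500)
print(round(T1, 1), round(total(1500, T1), 1))   # 12.9 277.4 (about 13 months, 277.5)

# After eliminating 35% of the mandays:
T2 = t_star(975)
print(round(T2, 2), round(total(975, T2), 2))    # 10.41 232.35
```

Note that at the paper's rounded optimum of T=13 the first objective evaluates to about 277.4 as well, so the rounding to 13 months is harmless.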