IJLSS 5,3
An outline of the “Control Phase” for implementing Lean Six Sigma
230
Ashok Sarkar SQC and OR Division, Indian Statistical Institute, Mumbai, India
Received 4 August 2013 Revised 30 December 2013 Accepted 16 February 2014
Arup Ranjan Mukhopadhyay SQC and OR Division, Indian Statistical Institute, Kolkata, India, and
Sadhan Kumar Ghosh Department of Mechanical Engineering, Jadavpur University, Kolkata, India Abstract Purpose – The purpose of this paper is to develop a guideline for the control procedures and tools depending on the dominance pattern. In Lean Six Sigma (LSS) implementation, the control phase plays a vital role in sustaining the gains achieved from the improvement phase. Process control schemes should be developed by studying the process dominance pattern, as suggested by Juran. Design/methodology/approach – The identification of various control methods is discussed with the help of a few real-life examples of effective LSS implementation. Findings – The dominance pattern helps in identifying the control mechanism. However, with the advent of new business processes, the dominance pattern needs some modification. Research limitations/implications – The case studies in which the authors have studied the control mechanism are mainly from the manufacturing sector, with one from the service sector. There is scope for future research in the service sector for adequate representation. Originality/value – The treatise provides a road map for practitioners for an effective implementation of the control phase in LSS. It is also expected to indicate the scope of future work in this direction for both researchers and practitioners. Keywords Lean Six Sigma, Regression, Control chart, Control phase, Design of experiment, Process dominance Paper type Research paper
International Journal of Lean Six Sigma Vol. 5 No. 3, 2014 pp. 230-252 © Emerald Group Publishing Limited 2040-4166 DOI 10.1108/IJLSS-08-2013-0044
1. Introduction Today Lean Six Sigma (LSS) is emerging as one of the most popular quality/business improvement strategies in manufacturing as well as in service organizations (Snee, 2010). LSS is a blend of the Six Sigma methodology and the Lean approach, intended to enhance process effectiveness as well as efficiency (Antony et al., 2003). LSS adopts the define-measure-analyze-improve-control route of Six Sigma, through which lean tools are combined with Six Sigma tools (Arcidiacono et al., 2012) for improving existing processes. The processes to improve and their measures are identified in the define phase, the present status is assessed in the measure phase, and the potential causes are identified and validated as root causes in the analyze phase. The improve phase then addresses the
solution(s) in terms of the improved performance level achievable if appropriate remedial measures are adopted for the root causes thus established. In the control phase, the sustenance of the gains thus achieved is addressed by using appropriate statistical, analytical and engineering techniques. Essentially, the statistical tool used in the control phase is the control chart. The use of such control charts is an integral part of Six Sigma training for black belts and green belts and, given the finite amount of training time available, discussions are invariably confined to the basic Shewhart principles for chart construction and application (Goh and Xie, 2003). The other analytical tools used in the control phase are mainly standardization and mistake proofing. Through a literature survey, it has been found that in most cases the control phase deals with standardizing the process flow, updating the standard operating procedure (SOP) (Saravanan et al., 2012) and sometimes even constructing a run chart (Rehman et al., 2012). The control chart can be perceived to be more useful when the baseline sigma level of a process is lower while undertaking the journey toward attaining the Six Sigma level of performance. However, after attaining the Six Sigma level of performance, a process becomes so capable that it produces only 3.4 defects per million opportunities. Such a high level of performance does not warrant the usage of the conventional attribute control chart (p or np), but a modified version known as the cumulative count of conforming chart (Goh and Xie, 2003). To decide on the control system to be used in a particular situation, specific systems for controlling characteristics need to be related to the underlying factors that dominate a process, as suggested by Juran and Godfrey (1999).
In this paper, the authors have adopted the case-study research methodology and studied multiple real-life case studies to understand the influence of different dominance patterns on the aspect of “control” in implementing LSS, with a view to extending or refining the existing theory. The case studies primarily pertain to manufacturing, with one from the service sector. It may be worthwhile to note here that the objective of this work is to delineate how the dominance pattern influences the control mechanism of a process. The remaining sections of this paper are arranged in the following manner. The case study research methodology is explained in Section 2, and Section 3 gives a summary of the pertinent literature. Section 4 deals formally and systematically with the aspects of control in both the manufacturing and service sectors with the help of a few case examples. Section 5 draws conclusions and identifies the need for further study. 2. Case study research methodology The case study research methodology provides tools for researchers to study complex phenomena within their contexts. Case studies afford researchers opportunities to explore or describe a phenomenon in context using a variety of data sources (Baxter and Jack, 2008). The approach allows the researcher to explore individuals or organizations, simple through complex interventions, relationships, communities or programs (Yin, 2003), and supports the deconstruction and subsequent reconstruction of various phenomena. Case studies may be explanatory, exploratory or descriptive, and designs can be single- or multiple-case studies (Woodside, 2010). The principal objective of case study research is to achieve a deep understanding of processes and other concept variables, such as participants’ self-perceptions of their own
thinking processes, intentions and contextual influences. Case study research is an inquiry focusing on describing, understanding, predicting and/or controlling the individual (Woodside, 2010). According to Lee (1999), the unit of analysis in a case study is the phenomenon under study, and deciding this unit appropriately is central to a research study. Doing case study research means identifying a topic that lends itself to in-depth analysis in a natural context using multiple sources of information (Hancock and Algozzine, 2006). Merriam (2001) suggests that insights gleaned from case studies can directly influence policy, procedures and future research. Case studies can be used for different types of research purposes, such as exploration, theory building, theory testing and theory extension/refinement (Voss et al., 2002). 3. Literature survey The LSS methodology is used to improve both manufacturing and service processes and is perceived as a business strategy (Arnheiter and Maleyeff, 2005). The causal function y = f(x) is established, where y is the output variable and the x’s are the input variables (Firka, 2010). To sustain the gain in the control phase, one can use different types of monitoring schemes for y and control schemes for the x’s. However, prior to deciding on and introducing a monitoring and control scheme, it is important to understand the process by taking into account the predominant features or characteristics of the set of y and x variables in a specific situation. Juran and Godfrey (1999) describe the control subjects as so numerous that planners are well-advised to identify the vital few control subjects so that they receive appropriate priority. One tool for identifying the vital few is the concept of dominance. Operating processes are influenced by many variables, but often one variable is more important than all the rest combined. Such a variable is said to be the “dominant variable”.
Knowledge of which process variable is dominant helps planners during the allocation of resources and priorities. The “concept of dominance” and the categories of dominance of a process are as follows: • setup-dominant; • time-dominant; • component-dominant; • worker-dominant; and • information-dominant. The above list prescribed by Juran and Godfrey (1999) has been augmented by the authors of this paper with a sixth category, “method-dominant”. Therefore, the processes are classified into six categories instead of five, and relevant illustrations are provided in this paper for exercising control over each category, for the general comprehension of practitioners as well as researchers in the field. Different dominance patterns call for different strategies of control. An operator- or worker-dominant process can be controlled through operator training for developing the requisite skill. In case training and skill development do not address the problem properly, mistake-proofing or poka-yoke can be considered. For a time-dominant process, where the output variable varies over time, application of the Shewhart control chart helps in controlling the process. It may be noted here that the Shewhart control chart works well to detect a process shift. However, when the output variable is
drifting over time or the data are auto-correlated, the control chart needs to be modified. It may be worthwhile to adjust such processes by using engineering process control (EPC) techniques rather than statistical process control (SPC) techniques (Castillo, 2002). The techniques used to control such processes are the beta (β) correction technique (Taguchi, 1988) and the sloping control chart (Montgomery, 2007). It may be noted here that SPC uses measurements to monitor a process, looking for points falling beyond the control limits coupled with identifying non-random patterns of variation in the process. EPC, on the other hand, uses measurements to prescribe changes and adjusts the process inputs with the intent of bringing the process outputs closer to their targets. By using feedback/feedforward controllers for process regulation, EPC has gained a lot of popularity in continuous process industries. The amount of correction needed to adjust the process to target, taking care of process variation, is explained in the β-correction technique (Taguchi, 1988). For a method-dominant process, the process output variables depend on the process input variables. For example, in a computer numerical control (CNC) machining process, the output variable “dimensional accuracy” is dependent on the input variables “temperature”, “pressure”, etc. The desired value of each process input variable needs to be established along with the corresponding tolerances for exercising appropriate control by suitably devising an SOP. At the time of development of an SOP, the tolerances of all the process variables need to be documented. In the case of an information-dominant process, one needs to see that the information is passed to the process in a seamless way. For component-dominant processes, the quality of the input materials and components is of utmost importance. Many assembly operations and food formulation processes are component-dominant.
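The contrast between SPC-style monitoring and EPC-style adjustment can be illustrated with a small simulation. The sketch below is not taken from the case studies in this paper; the drift rate, noise level and feedback gain are hypothetical numbers chosen only to show how a partial feedback correction (in the spirit of the β-correction idea) compensates for a slow drift that a monitor-only scheme would let accumulate.

```python
import random

random.seed(1)

TARGET = 10.0
DRIFT = 0.002      # hypothetical per-part drift (e.g. tool wear)
NOISE = 0.01       # hypothetical random error (standard deviation)

def mean_abs_deviation(adjust):
    """Simulate 500 parts; optionally apply a simple EPC-style
    partial feedback correction pulling the process back to target."""
    level, offset, devs = 10.0, 0.0, []
    for _ in range(500):
        level += DRIFT                      # uncontrolled drift
        y = level + offset + random.gauss(0, NOISE)
        devs.append(abs(y - TARGET))
        if adjust:
            offset -= 0.2 * (y - TARGET)    # partial correction toward target
    return sum(devs) / len(devs)

print(mean_abs_deviation(adjust=False))  # drift accumulates: large deviation
print(mean_abs_deviation(adjust=True))   # feedback holds the process near target
```

Under these assumptions, the unadjusted process wanders roughly half a unit off target on average, while the adjusted one stays within a few hundredths, which is the essential argument for EPC on drifting processes.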
The corresponding control aspect should address the verification of component quality, supplier relations and supplier capability before use. A set-up-dominant process has high reproducibility and stability for the entire length of the batch to be made and hence requires set-up approval before production proceeds. The pre-control chart can be used effectively to control such stable processes. However, the dominance patterns of service processes are different from those of manufacturing processes. The process flow and the responsibilities of the people associated with it need to be well-defined for proper functioning. The improved process needs to be documented through a flowchart, and the responsibilities of the associated people should be defined by using a responsible-accountable-consulted-informed (RACI) matrix (Jacka and Keller, 2009). The RACI matrix helps to align the human elements in a service process with their roles and responsibilities. 4. Control methods – applications 4.1 A case example of a transactional process An Indian organization manufactures air circuit breakers (ACBs), an electrical component, for the national and international markets. For the international market, the product is sold by its associates under different brand names and codes. The organization received complaints about the dispatch of wrong ACBs. On investigation, it has been found that although the ACBs conform to the pertinent specification, mistakes were made while putting the labels on them. Though the problem appears to be simple, it has huge implications, particularly for the off-shore clients, as the company has to send competent technicians solely for the purpose of changing the labels of the ACBs. The cost of replacement
Figure 1. The existing planning process of ACB order handling
Figure 2. The existing practice of facia and label
and inventory charges has been worked out to be 5,478 euro for a particular order. Apart from this tangible loss, there is an intangible loss of goodwill as well. To address the problem, a Six Sigma project was taken up to identify the root causes and eliminate the problem altogether. The description of the pertinent process is as follows. On receipt of an order from a customer (alias associate), the associate’s product code is translated into the organization’s own product code. The product codes are based on the ACB configurations that the organization manufactures. It is possible that two associates want two different products for different markets having the same configuration. To reduce the complexity, all customer orders are translated into the standard product codes of the organization. This translation from the customer order code to the standard product code is done by using a cross-reference table, in which, for each client, the client’s product number is matched with the organization’s product code. Thereafter, the order details are fed into the enterprise resource planning system of the organization and the breaker assembly card is prepared. In the breaker assembly card, the Engineering department enters the label and facia details by reading the same cross-reference table. After manufacturing and testing, these labels and facia are pasted on the product, which is then sent to the store for packing and dispatch. The corresponding process flowchart is shown in Figure 1. One can characterize the process depicted in Figure 2 as an information-dominant process. Had the same information been used at the time of order entry as well as during facia and label printing, the problem could have been avoided. The solution proposed in this regard is to generate the organizational code as well as the label and facia simultaneously to facilitate pasting of the label and facia onto the breaker assembly card.
This will eliminate the chances of human error and get over the problem. The corrective action thus adopted is depicted in Figure 3. The corrective action has eliminated the problem of dispatching the wrong product to the customer altogether. To control the process, the flowchart has been modified and the organization code, label and facia are generated simultaneously and pasted on the order
document itself. Subsequently, the concerned operator has to detach the label and facia and paste them onto the breaker assembly card. This case study portrays an illustration of how to control an information-dominant process to avoid human errors leading to customer dissatisfaction. 4.2 A case example of a manufacturing process (paint shop) In a scooter-manufacturing plant, the quality of the paint finish is an important quality characteristic. As the products are primarily meant for young customers, the aesthetics of the products play a vital role in attracting them. All the painted parts are generally tested against the associated performance and aesthetic criteria. The aesthetic tests include visible paint defects and finish. The finish includes three major factors: (1) Gloss. (2) Distinctness of image. (3) Orange peel – a quality characteristic measured through a parameter called the “R-value” using a high-precision instrument called a “finish meter”. The meter is rolled across the surface and measures, point by point, the optical profile of the surface across a defined distance. The instrument analyzes the structures according to their size and reports the result as the “R-value”. The higher the “R-value”, the better the finish. The “R-value” has a lower specification limit (LSL) of 6 units.
The parts requiring painting are loaded in a jig to undergo pretreatments like degreasing and water rinsing to remove dust and oily impurities. A conductive primer is then applied onto the parts to promote adhesion between the substrate and subsequent coats. The two coats that follow the primer coat are termed the base coat and top coat, respectively, yielding the required color, aesthetic and performance properties of the part. The painted part undergoes a baking process to accelerate the curing. After baking, the finished part is inspected for the presence of any painting and finish defects. The paint finish quality of the painting process was poor and resulted in rework, the cost of which was estimated as USD 10,000 per annum. A Six Sigma study was taken up for improving the paint finish. Data on the R-value were collected for 54 components. To assess the process performance, the distribution of the “R-value” has been explored. Several distributions have been tried, and it has been observed that the normal distribution fits the data quite well. The process performance has been evaluated, and it
Figure 3. Modified scheme for putting facia and label
has been found that the nonconformance is 60.65 per cent. The average “R-value” is found to be 5.744 with a standard deviation of 0.833. The process has been analyzed, and the process parameters or factors affecting the “R-value” have been identified through brainstorming and technical discussion. The factors that emerged are: paint viscosity (X1), thinner evaporation rate (X2), paint gun to component distance (X3), KV (X4), auto atomization pressure (X5), auto gun pattern pressure (X6), auto gun paint flow (X7) and manual gun atomizing pressure (X8). Of these factors, the thinner evaporation rate (X2) depends on the selection of material and is thus treated as an attribute-type variable. The regression model that eventually emerged is given in equation (1):

R-value = 19.287 − 0.830 X1 + 0.216 X2 + 0.039 X5 − 0.049 (X1 − 17.52)(X5 − 24.8)   (1)
The predictive model has an R² value of 0.77. The estimates of the regression parameters, standard errors, t-values and p-values are provided in Table I. Based on the above model, the levels of the parameters or factors have been determined using the EXCEL Solver so that the target for the “R-value” (minimum 6.00) is achieved. The prescribed levels of the parameters for the painting process are: paint viscosity (X1) at 17 seconds, thinner evaporation rate (X2) of type A and auto-atomization pressure (X5) of 30 psi. The expected “R-value” and its 95 per cent confidence interval (CI) based on the above model have been found to be 6.70 ± 0.46. Once the desired or optimum levels of the process parameters are arrived at, a crucially important question in the control phase is how to maintain the process parameters at those optimum levels so as to sustain the gain in the response, the “R-value” in this case example. It is to be remembered that the process parameters are subject to natural variation, which needs to be taken into account while developing the associated control plan. The variation expected in the paint viscosity (X1) is around ± 0.2 seconds and that in the auto-atomization pressure (X5) is around ± 1 psi. Recall that the thinner evaporation rate (X2) is an attribute-type parameter whose optimum level corresponds to the supplier for whom the response (“R-value”) is higher. Keeping the parameter X1 at 17.2 ± 0.2 and X5 at 29 ± 1, the expected nonconformance of the response (“R-value”) is found to be 5.48 per cent (54,890 PPM) through Monte Carlo simulation. To identify the sensitivity of the tolerances, a full factorial design of experiments is carried out by halving the tolerance levels. The corresponding experimental layout and the results achieved through the Monte Carlo simulation are furnished in Table II.
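A Monte Carlo tolerance analysis of this kind can be sketched as follows. The paper does not state its distributional assumptions, so this illustration assumes each tolerance corresponds to ±3 standard deviations of a normal distribution, folds the supplier (X2) term into a fixed offset chosen so the predicted mean at the prescribed settings equals the reported 6.70, and adds a residual error term with standard deviation 0.833 × √(1 − 0.77) ≈ 0.40 implied by the reported R². The resulting PPM is therefore only in the general vicinity of the paper's 54,890, not a reproduction of it.

```python
import random

random.seed(42)

LSL = 6.0            # lower specification limit on the R-value
X2_OFFSET = 0.492    # assumed fixed supplier-A contribution (chosen so the
                     # predicted mean at X1=17.2, X5=29 equals the reported 6.70)
RESID_SD = 0.40      # assumed residual sd, from 0.833 * sqrt(1 - 0.77)

def r_value(x1, x5):
    """Regression model of equation (1) with the supplier term held fixed
    and an assumed residual error term added."""
    return (19.287 - 0.830 * x1 + 0.039 * x5
            - 0.049 * (x1 - 17.52) * (x5 - 24.8)
            + X2_OFFSET + random.gauss(0, RESID_SD))

def nonconforming_ppm(tol_x1, tol_x5, n=200_000):
    """Monte Carlo estimate of PPM below LSL, tolerances taken as +/-3 sigma."""
    bad = sum(
        r_value(random.gauss(17.2, tol_x1 / 3),
                random.gauss(29.0, tol_x5 / 3)) < LSL
        for _ in range(n)
    )
    return 1e6 * bad / n

print(nonconforming_ppm(0.2, 1.0))  # same order of magnitude as 54,890 PPM
```

Because the assumed residual term dominates the tolerance-induced variation in this sketch, halving the tolerances moves the PPM only modestly, which is consistent with the relative magnitudes reported in Table II.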
Table I. Estimates of the regression parameters

Term                      Estimate   Standard error   t-value   p-value
Intercept                  19.287        2.861         6.742     0.000
X1                         −0.830        0.164        −5.064     0.000
X2                         −0.216        0.056        −3.856     0.002
X5                          0.039        0.016         2.382     0.031
(X1 − 17.52)(X5 − 24.8)    −0.049        0.057        −0.866     0.400
The effects of tolerances of paint viscosity (X1) and auto-atomization pressure (X5) are 12,865 PPM and 1,503 PPM, respectively. Consequently, it can be concluded that the expected nonconformance in PPM is more sensitive to paint viscosity (X1) tolerance than auto-atomization pressure (X5) tolerance. Quite naturally, it is suggested to exercise more stringent control over paint viscosity (X1). To sustain the above improvement, the control plan is updated with paint viscosity and auto-atomization pressure. The process is subject to periodic audit as part of the existing quality management system to sustain the improvement. This case study illustrates how to control a method-dominant process.
4.3 A case example of the grinding process In engine valve manufacturing, the grinding operation plays a major role, as the valve functioning depends on the surface finish and the dimensional accuracy. Grinding is a process through which material is removed from the component by a cutting surface termed a grinding disc or tool. The amount of material removed depends on the material stock, i.e. the difference between the initial and the final diameter. In the concerned process of an organization, the dimensional accuracy turned out to be the principal issue. The extent of rejection due to lack of dimensional accuracy was around 5 per cent. The factors having a bearing on the dimensional accuracy are identified as: • RPM of the grinding wheel; • dressing frequency of the grinding wheel; • feed rate; • coolant speed; • coolant flow; • coolant temperature; and • tool wear. The control over the factors other than tool wear can be considered reasonably satisfactory because of the operational guidance through the CNC system. After studying the effects of the factors, the corresponding levels have been chosen based on the SOP. To study the effect of tool wear on the drift in dimension, 50 consecutive components are produced and measurements are recorded for the outer diameter (OD), whose specification is 10.675 ± 0.025 mm. The run chart depicting the trend due to tool wear is given in Figure 4. The process is modeled by using the linear equation:
Yt = 10.7 + 0.000786 t + εt   (2)

Table II. Results of the simulated experiment

Paint viscosity   Auto-atomization pressure      PPM
17.2 ± 0.2                29 ± 1              54,890
17.2 ± 0.1                29 ± 1              41,611
17.2 ± 0.2                29 ± 0.5            52,973
17.2 ± 0.1                29 ± 0.5            40,523
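The sensitivity conclusion drawn from Table II can be checked arithmetically: the main effect of each tolerance is the difference between the average PPM at its wide setting and the average PPM at its halved setting. A minimal sketch using the four PPM values from Table II:

```python
# PPM values from Table II, keyed by (viscosity tolerance, pressure tolerance)
ppm = {
    (0.2, 1.0): 54_890,
    (0.1, 1.0): 41_611,
    (0.2, 0.5): 52_973,
    (0.1, 0.5): 40_523,
}

# Main effect = mean PPM at the wide tolerance - mean PPM at the halved tolerance
effect_viscosity = (ppm[0.2, 1.0] + ppm[0.2, 0.5]) / 2 \
                 - (ppm[0.1, 1.0] + ppm[0.1, 0.5]) / 2
effect_pressure = (ppm[0.2, 1.0] + ppm[0.1, 1.0]) / 2 \
                - (ppm[0.2, 0.5] + ppm[0.1, 0.5]) / 2

print(effect_viscosity)  # 12864.5, i.e. the reported 12,865 PPM
print(effect_pressure)   # 1502.5, i.e. the reported 1,503 PPM
```

The viscosity effect is roughly an order of magnitude larger than the pressure effect, which is why tighter control is prescribed for paint viscosity.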
Figure 4. Run chart of OD
where Yt is the dimension (OD) of a sample at the tth point of time and εt is the error component with zero mean and variance σt² (σt = 0.0018). As the process has a continuous drift, the drift needs to be offset by introducing an appropriate adjustment. Because frequent adjustment can adversely affect productivity, it is decided to use the tolerance limit (2Δ = 2 × 0.025 = 0.05 mm) for designing an appropriate process control plan as follows: • Start the process at (lower specification limit (LSL) + 3σt) = 10.6554 mm. • Continue the process until it reaches (upper specification limit (USL) − 3σt) = 10.6946 mm. • Adjust the process after 50 components to bring it back down to 10.6554 mm. This interval of adjusting the process after every 50 components has been found by dividing the difference between (USL − 3σt) and (LSL + 3σt) by the slope of equation (2). To make the control procedure operator-friendly, a process control chart has been evolved based on the concept of the pre-control chart (Ledolter and Swersey, 1997). In the pre-control chart, the control limits are generally constructed at 50 per cent of the tolerance interval (Δ). However, in the suggested control procedure, the lower and upper control limits are prescribed as (LSL + 3σt = 10.6554 mm) and (USL − 3σt = 10.6946 mm), respectively. This modification of the control limits has been carried out to accommodate the effect of tool wear in the process, as suggested in the regression control chart (Mandel, 1969). It also helps the management estimate the number of units that can be produced before any readjustment. The corresponding chart is given in Figure 5. After introducing the modified pre-control chart, the rejection due to dimensional inaccuracy came down from 5 to 1 per cent.
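The arithmetic behind this control plan can be checked directly. The sketch below uses the fitted slope of equation (2) and the error standard deviation of 0.0018 implied by the stated limits of 10.6554 and 10.6946 mm:

```python
# Control plan for the drifting grinding process (equation 2):
# run from LSL + 3*sigma_t up to USL - 3*sigma_t, then reset.
LSL, USL = 10.650, 10.700      # from the specification 10.675 +/- 0.025 mm
sigma_t = 0.0018               # error standard deviation of the fitted model
slope = 0.000786               # drift per component, from equation (2)

start = LSL + 3 * sigma_t      # starting level after each adjustment
stop = USL - 3 * sigma_t       # level at which the process must be reset
interval = (stop - start) / slope

print(round(start, 4), round(stop, 4))  # 10.6554 10.6946
print(round(interval))                   # ~50 components between adjustments
```

This confirms the paper's adjustment interval of 50 components: the usable drift band of 0.0392 mm divided by the per-component drift of 0.000786 mm gives just under 50.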
It may be noted here that the control charts provide limits based solely on the observed process variation and therefore provide a means to detect process changes.
Figure 5. Modified pre-control chart
However, in some situations, it is important to detect only those changes which might lead to the production of nonconforming items. For such cases, control limits may be derived using a combination of the observed variation and the product specifications. The process is then permitted to change as long as nonconforming items are not imminent; the chart signals an alarm only when nonconforming items threaten. Modified control charts, acceptance control charts and pre-control charts are examples of such charts (Juran and Godfrey, 1999). Although pre-control has the advantage of simplicity, it should not be used indiscriminately. The procedure has serious drawbacks compared to SPC charts (Montgomery, 2007; Ledolter and Swersey, 1997), but it has been preferred as an alternative to the X-bar R chart (Traver, 1985). Being a feedback-adjustment scheme, pre-control can only be effective if the process drifts slowly or jumps and sticks. Although it is often compared to SPC, the goal of pre-control is to identify the need for adjustment; it is useful neither for process monitoring nor for identifying action on special causes (Steiner et al., 2008). More sophisticated control and feedback schemes, such as proportional-integral-derivative (PID) controllers (Castillo, 2002), are alternatives that may yield better results. However, while pre-control signals the need for an adjustment, it does not include an adjustment rule, which is required to implement the system in practice. For such an adjustment rule, the beta correction factor (Taguchi, 1988) provides an appropriate guideline. The justification for the use of pre-control as a control mechanism here is that the number of components produced within the limits without any adjustment is around 50 when the process drifts slowly due to tool wear.
Since the rate of production is fast, it is difficult to use the “control chart for tool wear” suggested by Montgomery (2007). Hence the modified pre-control scheme under discussion is used. The use of this modified pre-control chart will not lead to unnecessary tampering, as the goal here is to use the full tolerance without producing nonconforming components. The effect of such a scheme on the process capability indices (Cp or Cpk) is given by Sarkar and Pal (1997).
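As an operator-facing summary, the modified pre-control scheme reduces to a simple decision rule on each measured OD. The following is an illustrative encoding of the rules described above, not a reproduction of the authors' chart; in particular, the handling of points below the lower modified limit is an assumption:

```python
LCL, UCL = 10.6554, 10.6946    # modified pre-control limits (mm)
LSL, USL = 10.650, 10.700      # specification limits (mm)

def pre_control_action(od):
    """Operator rule for the modified pre-control chart: run while the OD
    stays inside the modified limits, adjust when it crosses the upper
    limit (tool wear drifts the dimension upward), and stop to
    investigate if a point falls outside the specification."""
    if od < LSL or od > USL:
        return "stop: nonconforming, investigate"
    if od > UCL:
        return "adjust: reset process to the lower start level"
    if od < LCL:
        return "check setup: below the start level"
    return "run"

print(pre_control_action(10.670))  # run
print(pre_control_action(10.696))  # adjust: reset process ...
```

The "adjust" branch corresponds to resetting the process to 10.6554 mm after roughly 50 components, as prescribed by the control plan.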
4.4 A case example of a turning process In a turning process, cylindrical jobs/work-pieces are turned to a specified dimension coupled with producing a smooth finish on the metal. The concerned organization manufactures shock absorbers. One of the major components of a shock absorber is the metal tube, which is manufactured through the turning operation. A tool termed a “form tool” is used for the turning operation, for which one of the critical quality characteristics is the inner diameter (ID) of the tube, the specification of which is 42.000-42.039 mm. The dimension and setting of the form tool play a major role in deciding the tube ID. It may be worthwhile to mention here that it is infeasible to adjust the tool dimension during the turning operation to achieve a target dimension of a job. To assess the effect of tool wear on a job, a study has been planned. The underlying process parameters are maintained at the prescribed levels as per the SOP. After every 50 pieces, 5 components are collected (subgroup size = 5) and the ID is measured for each component. In this manner, 50 subgroups have been collected to construct the time series plot for the average and range chart for the purpose of process monitoring, as provided in Figure 6. From Figure 6, it can be concluded that the process is non-stationary in nature and consequently an ordinary Shewhart control chart, such as the X-bar R chart, will not be applicable for controlling the ID. The various approaches for addressing tool wear are discussed by Montgomery (2007). The inherent variation (σ) of the process is estimated by using the R-bar/d2 formula, and a regression equation is fitted on the subgroup averages to estimate the tool wear. The model thus fitted is as follows:

Ȳt = 42.04 − 0.00083 t + εt   (3)
where Ȳt is the average of subgroup t and εt is the error with zero mean and variance σt² (0.0000952). The inherent variation (σ) of the process is estimated to be 0.0006. The optimal tool replacement policy that emerged is as follows: • Start the process after setting the sample average at (USL − 3σ) = 42.0372 mm. • Continue the process until it reaches (LSL + 3σ) = 42.0018 mm, maintaining a sampling interval (gap between two consecutive subgroups) of 50 successive components. • Replace the form tool and set up the process afresh after the average ID reaches 42.0018 mm. The sloping central line and the three-sigma limits around it for controlling the average ID (Ȳt) are depicted in the form of a sloping control chart in Figure 7, which also serves to detect and eliminate other assignable causes, such as high coolant temperature, machine vibration and loose job holding, if occasional spikes are noticed beyond the three-sigma limits. The estimated tool life can be calculated by making use of equation (3). For this problem, the tool life is estimated to be about 2,100 components:

Tool Life = [(USL − 3σ̂) − (LSL + 3σ̂)] / 0.000873 × 50
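The tool-life estimate can be reproduced from the quantities above. A minimal check (the computation comes out near 2,000 components; the paper's rounded figure of about 2,100 differs slightly, presumably because of rounding in the reported slope):

```python
LSL, USL = 42.000, 42.039   # tube ID specification limits (mm)
sigma = 0.0006              # inherent variation, estimated via R-bar/d2
slope = 0.000873            # drift in the average ID per subgroup
gap = 50                    # components between consecutive subgroups

start = USL - 3 * sigma     # set-up level after a tool change
stop = LSL + 3 * sigma      # level triggering tool replacement
tool_life = (start - stop) / slope * gap

print(round(start, 4), round(stop, 4))  # 42.0372 42.0018
print(round(tool_life))                  # roughly 2,000 components
```

The usable drift band of 0.0354 mm corresponds to about 40 subgroups of 50 components each before the form tool must be replaced.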
Figure 6. Time series plot for average and range chart
where 0.000873 is the magnitude of the slope obtained from equation (3) and 50 is the sampling interval. It may be noted here that this approach essentially assumes that resetting the process is expensive and, as such, priority needs to be given to minimizing the number of adjustments made to produce the parts within specification limits instead of reducing the overall process variability (Quesenberry, 1988).
Figure 7. Sloping control chart for tube ID
4.5 A case example of a manufacturing process (CNC machining operation)
In a machining operation, piston rods were being rejected to the tune of 10 per cent due to mismatch in the dimension of the diameter. One of the primary reasons for rejection is high variation in diameter caused by too frequent adjustment of the machine setting by the concerned operator. Even though the operator has the noble intention of bringing the dimension into the close vicinity of the target value, too frequent and injudicious intervention results in more variation. The control of the process calls for addressing two important issues:
(1) the frequency of inspection; and
(2) the amount of adjustment or offset to be made.
To address the above issues, a study has been planned using Taguchi's beta (β) correction technique. Measurements of the piston diameter have been recorded under controlled conditions for 192 components produced consecutively. During the collection of the data, no adjustment is made to the process. To identify the changes in the process, the data are nested in the following manner:
• between 96 components;
• between 48/96 components; and
• between 24/48/96 components.
A nested ANOVA is carried out to determine the sampling frequency for detecting a significant change in the piston diameter (Montgomery, 2012). The nested ANOVA is presented in Table III.
From Table III, it can be concluded that a significant change occurs after every 48 pieces. Hence, the suggested sampling frequency is every 24 pieces. In the beta correction method, the amount of adjustment is β(X − T) instead of a full adjustment of (X − T), where X is the observed value, T is the target value and the factor β is calculated based on equation (4):
β = 0, when (X − T)² ≤ σ²
β = 1 − 1/F, otherwise; where F = (X − T)²/σ²    (4)
The σ² thus estimated from the ANOVA table is given in the following:

σ̂² = (0.00097192 + 0.00000204) / (184 + 4) = 0.00000518

Table IV gives a ready reckoner based on equation (4) to guide the operators on the amount of adjustment to be made for a given deviation between the observed and the target piston diameter. The above adjustment procedure has been introduced for day-to-day operations after imparting the requisite training to the concerned operators. The rejection level has reduced from about 10 per cent to almost zero. The case studies illustrated in Sections 4.3, 4.4 and 4.5 are primarily related to time-dominant processes of different natures. The first two deal with processes for which frequent intervention for the necessary adjustment is not a feasible option, and the use of the full tolerance is preferred. The corresponding control mechanism aims at reducing nonconformance instead of laying emphasis on reducing variability. The third case study aims at deciding the amount of adjustment when the process drifts. Here, the aim is to reduce variability as well as nonconformance.

Table III. Nested ANOVA

Source | df | Sum of square | Mean square | F-ratio | p-value
Between 96 components | 1 | 0.00069769 | 0.00069769 | 8.292 | 0.102
Between 48/96 components | 2 | 0.00016827 | 0.00008414 | 164.837 | 0.000
Between 24/48/96 components | 4 | 0.00000204 | 0.00000051 | 0.097 | 0.983
Error | 184 | 0.00097192 | 0.00000528 | |
Total | 191 | 0.00183992 | | |

Table IV. The ready reckoner for process adjustment

Serial number | Observation | (X − T) | F = (X − T)²/σ² | β = 1 − 1/F | Adjustment
1 | 6.261 | 0.001 | 0.193 | 0.000 | 0.000
2 | 6.262 | 0.002 | 0.772 | 0.000 | 0.000
3 | 6.263 | 0.003 | 1.737 | 0.424 | 0.001
4 | 6.264 | 0.004 | 3.089 | 0.676 | 0.003
5 | 6.265 | 0.005 | 4.826 | 0.793 | 0.004
6 | 6.266 | 0.006 | 6.950 | 0.856 | 0.005
7 | 6.267 | 0.007 | 9.459 | 0.894 | 0.006
8 | 6.268 | 0.008 | 12.355 | 0.919 | 0.007
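Equation (4) and the ready reckoner of Table IV can be reproduced with a few lines of code. This is an illustrative sketch only: the target value T = 6.260 is inferred from the observation column of Table IV and is our assumption, and σ̂² is pooled from the error and lowest-level terms of the nested ANOVA as shown above.

```python
# Taguchi beta-correction: adjust by beta*(X - T) rather than the full deviation (X - T).

# Pooled variance estimate from the nested ANOVA (error + lowest nesting level)
sigma2 = (0.00097192 + 0.00000204) / (184 + 4)   # ~= 0.00000518

T = 6.260  # target piston diameter (assumed, implied by Table IV observations)

def beta_adjustment(x, target=T, var=sigma2):
    """Return (beta, adjustment) for an observed value x."""
    d = x - target
    F = d * d / var
    beta = 0.0 if F <= 1 else 1.0 - 1.0 / F   # no adjustment inside the "dead band"
    return beta, beta * d

for x in (6.261, 6.263, 6.268):
    b, adj = beta_adjustment(x)
    print(f"X={x:.3f}  beta={b:.3f}  adjust by {adj:.4f}")
```

Note how small deviations (F ≤ 1) trigger no adjustment at all, which is precisely what protects the process from the over-adjustment that caused the original 10 per cent rejection.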
4.6 A case example of abrasive wheel manufacturing process
Abrasive or grinding wheels are manufactured by bonding abrasive minerals. The important components in manufacturing an abrasive wheel are abrasive grains, strengthening material and bonding material. Often, glass-fiber discs are used as the strengthening material. Generally, the bonding (resin) and the strengthening (fiber) materials are procured from suppliers. These materials are proprietary by nature. The manufacturing of wheels consists of three stages – mixing of the abrasive material, pressing the disc and baking in a furnace. During baking, the abrasive grains are bonded together. A small-scale abrasive manufacturing company received customer complaints about breaking of wheels. The returned lot is checked for all the properties and compared with an earlier lot having no customer complaints. It has been discovered that the "deflection of wheel at breaking load" is totally different between the two groups: with complaint and without complaint. The summary statistics of the two groups are provided in Table V. The difference in the standard deviation as well as the mean between the two groups is found to be statistically significant (p < 0.005). It is obvious from Table V that the abrasive wheels associated with customer complaints have a lower average deflection value and the corresponding variability is also lower compared to those with no customer complaints. To explore the causes of the lower deflection value, a 2³ full factorial experiment has been carried out by taking into account three critical factors or parameters – fiber grade (J and S), resin type (W and X) and baking temperature (160°C and 220°C). The letters or the numerical values within parentheses represent the corresponding levels. The other associated factors are kept unchanged while conducting the experiment. In each experimental trial, five samples have been tested for deflection.
The experimental layout and the results thus obtained are given in Table VI. The above data have been analyzed through the ANOVA technique. It can be observed from Table VI that the type of fiber plays a major role in deciding the deflection value. The effect of fiber is estimated to be 6.07 units. The corresponding ANOVA table is presented in Table VII.

Table V. Summary statistics of deflection property

Group | Sample size | Average | SD
Without complaint | 155 | 8.99 | 1.46
With complaint | 25 | 5.46 | 0.676

Table VI. Experimental layout and response

Trial number | Fiber | Resin | Temperature | Deflection at breaking load
1 | J | W | 160 | 9.5, 11.0, 11.0, 10.5, 8.0
2 | S | W | 160 | 6.0, 5.5, 6.5, 5.5, 6.0
3 | J | W | 220 | 12.0, 11.0, 13.0, 13.0, 14.0
4 | S | W | 220 | 5.0, 4.0, 4.0, 5.0, 4.5
5 | J | X | 160 | 9.0, 12.0, 10.0, 11.0, 11.5
6 | S | X | 160 | 5.5, 4.0, 5.0, 5.5, 6.0
7 | J | X | 220 | 10.0, 11.5, 10.0, 12.0, 10.5
8 | S | X | 220 | 3.5, 4.0, 4.0, 5.0, 4.5
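The reported fiber effect and the optimum cell can be reproduced directly from the Table VI responses. This is a small sketch of the standard two-level factorial contrast computation, not code from the paper; the data are transcribed from Table VI.

```python
# Fiber main effect and fiber x temperature cell means from the Table VI data.
# Each trial key is (fiber, resin, temperature); values are five replicates.
trials = {
    ("J", "W", 160): [9.5, 11.0, 11.0, 10.5, 8.0],
    ("S", "W", 160): [6.0, 5.5, 6.5, 5.5, 6.0],
    ("J", "W", 220): [12.0, 11.0, 13.0, 13.0, 14.0],
    ("S", "W", 220): [5.0, 4.0, 4.0, 5.0, 4.5],
    ("J", "X", 160): [9.0, 12.0, 10.0, 11.0, 11.5],
    ("S", "X", 160): [5.5, 4.0, 5.0, 5.5, 6.0],
    ("J", "X", 220): [10.0, 11.5, 10.0, 12.0, 10.5],
    ("S", "X", 220): [3.5, 4.0, 4.0, 5.0, 4.5],
}

def mean(vals):
    return sum(vals) / len(vals)

# Main effect of fiber = mean response at level S minus mean response at level J
j_mean = mean([y for (f, r, t), ys in trials.items() if f == "J" for y in ys])
s_mean = mean([y for (f, r, t), ys in trials.items() if f == "S" for y in ys])
fiber_effect = s_mean - j_mean   # magnitude ~6.07 units, J giving higher deflection

# Fiber x temperature cell means (the significant interaction in Table VII)
cells = {}
for (f, r, t), ys in trials.items():
    cells.setdefault((f, t), []).extend(ys)
cell_means = {k: mean(v) for k, v in cells.items()}

print(f"fiber effect = {fiber_effect:.2f}")
print("best cell:", max(cell_means, key=cell_means.get))
```

The best cell turns out to be fiber J at 220°C, which is the combination the interaction plot in Figure 8 leads to.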
It can be observed from Table VII that, apart from the main effect of fiber, the interaction of fiber and temperature turns out to be statistically significant. Generally, when an interaction is large, the corresponding main effects have little practical meaning (Montgomery, 2012). Keeping in view this aspect, Figure 8, depicting the interaction between fiber and temperature, has been drawn to choose the corresponding optimum levels, namely fiber type J and a temperature of 220°C, to obtain the highest deflection value. Because the underlying process is a material-dominant one, the right selection of fiber along with the level of the associated baking temperature is important for sustaining the gain in deflection value, leading to operations with no customer complaints. The control mechanism thus developed is characterized by procuring fiber type J and awarding the supplier a one-year contract for day-to-day supply. In addition, the control plan has been updated for temperature, which hitherto was non-existent.
Table VII. Analysis of variance for deflection

Source | df | Sum of square | Mean square | F-value | p-value
Fiber | 1 | 369.06 | 369.06 | 360.30 | 0.000
Temp | 1 | 0.06 | 0.06 | 0.05 | 0.816
Fiber*temp | 1 | 16.26 | 16.26 | 15.87 | 0.000
Error | 36 | 36.88 | 1.02 | |
Total | 39 | 422.26 | | |
Figure 8. Interaction plot of fiber and temperature
4.7 A case example of a manufacturing process (boring operation)
In a machining operation, there was a problem of rejection of machined castings due to non-alignment of the axis of the drilled hole. The casting is placed on the rest pads and then tightened onto them by using a hydraulically operated clamp. The boring bars/tools are placed on the job for the boring operation. During the boring operation, small metal chips come out and fall on the rest pads of the machine. After completing the boring operation, the concerned operator declamps the casting and cleans the rest pads using a high-pressure air jet. After cleaning the tool bits, boring bar and rest pads of the machine, the next casting is placed for the operation. Improper cleaning leads to small metal chips sticking onto the rest pads. The chips generate an inclination causing non-alignment of the axes, and this creates a problem in the placement of the castings, resulting in rejection. Quite naturally, the quality of boring depends on the cleaning effectiveness. The placement of the gear casting with and without metal chips is shown in Figure 9. To eliminate the rejection, it is necessary to clean the rest pads after each operation. However, sometimes metal chips remain on the rest pads inadvertently. Imparting training on "how to clean the rest pad" to the operators turned out to be futile. Hence, it was decided to use the poka-yoke concept to eliminate the problem. An air feedback system has been implemented. The system is designed to sense any gap between the rest pad and the job component. In case no gap is detected, implying proper resting of the job, back pressure builds up, causing a pressure switch to operate and generate the "OK" signal. The machine starts its normal operation subsequent to receiving the "OK" signal. The air feedback system is depicted in Figure 10.
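The interlock logic of the air-feedback poka-yoke can be sketched as a simple state check. This is purely illustrative: the threshold name and value below are our assumptions, not details from the installed system.

```python
# Air-feedback poka-yoke: the machine cycle starts only after back pressure
# confirms that the job is seated flush on the rest pad (no chip underneath).

BACK_PRESSURE_THRESHOLD = 4.0  # bar; illustrative value, not from the case study

def ok_to_start(back_pressure_bar: float) -> bool:
    """Pressure-switch logic: back pressure builds up only when no gap exists."""
    return back_pressure_bar >= BACK_PRESSURE_THRESHOLD

def machine_cycle(back_pressure_bar: float) -> str:
    if ok_to_start(back_pressure_bar):
        return "OK signal generated: start boring operation"
    # A gap (e.g. a stuck metal chip) bleeds air, so pressure never builds up.
    return "No OK signal: re-clean rest pad and re-seat the job"

print(machine_cycle(4.5))
print(machine_cycle(1.2))
```

The design point is that the error (improper seating) is detected before a defect (misaligned bore) can be produced, which is the defining property of a feedback-based poka-yoke.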
If an error occurs due to improper placement of any job component onto the rest pad, the installed poka-yoke system detects the error prior to any defect being generated.

4.8 A case example of a service process (medical test)
While processing applications for insurance policies, a requirement for a medical test is identified for some applicants. The medical tests to be undergone may be known a priori by the customer at the time of filling in the proposal. However, the specific medical tests are to
Figure 9. Gear carrier casting placement on rest pad (a) proper way and (b) improper way with unwanted metal chip
Figure 10. Air-feedback system
be carried out by the concerned customer after their identification during processing of the proposal at the processing center. As per the pertinent regulatory norm, a proposal has to be converted to a policy within 30 days; otherwise, the application will be rejected. Hence, any delay in carrying out the medical test has an adverse impact on the issuance of the policy because this time-component is a part and parcel of the total processing time. At present, about 90 per cent of the medical tests get completed within the period of seven days stipulated by the concerned management of the insurance company. These conforming cases do not result in any loss of business. The causes of the delay in medical tests for the remaining 10 per cent of cases have been identified as the following:
• the presence of process bottlenecks in the form of non-value-added (NVA) activities; and
• the absence of clarity on responsibilities assigned to various functional teams of the organization.
The process has been redesigned to minimize, if not eliminate, the effect of the NVAs. A RACI matrix is developed to bring clarity with regard to the responsibilities to be assigned to the various functional teams of the organization. The RACI matrix thus developed and the modified flowchart of the process meant for undertaking the medical tests are given in the Appendix. Successful implementation of the control procedure increased the extent of conformance from 90 to 98 per cent for the medical tests to be completed within the stipulated seven days' period (Table VIII).

5. Conclusion and future work
In the first case study, it has been found that the process is more of an information-dominant type characterized by a seamless flow of information that eliminates altogether the label-related reworks. The control mechanism has been launched accordingly. The second case study deals with a method-dominant process for identifying the tolerances of significant input variables.
Table VIII. RACI matrix for implementation of medical test process
Notes: R = responsibility; A = authority; C = control; and I = inform
Activities: Medical initiation; Report generation; Assign medical center; Letter printout; Receiving reports; Checking checklist; Sending it to central operation; Daily tracker maintenance; Training to medical center; Ensure SLA of courier (company); Ensure SLA of courier (MC)
Departments: Central processing; Branch operation; Medical team; Medical center; Financial consultant; Development manager; Branch manager; Administration
[The cell-wise R/A/I assignments could not be reliably recovered from the source layout.]

The control of the same is done by developing an appropriate SOP. However, the modus operandi of the control of process parameters has not been discussed in the case study. Although the control mechanism includes specifying the value and tolerance of the input variables, the methodology to be adopted to achieve the specified tolerances of the input variables (x's) needs to be addressed so that the specified outcome (y) is obtained on a day-to-day basis within its stipulated tolerance. Practitioners should decide on the control mechanism at the "improve phase" by comparing different alternative control mechanisms along with the pertinent cost implications. For example, temperature can be controlled by displaying it through indicators, by using a thermostat or by using a PID controller. The third and fourth case studies deal with time-dominant processes characterized by tool wear-out, where frequent adjustment is not feasible. In such situations, the control mechanism should aim at minimizing the number of adjustments made to keep the parts within specifications instead of reducing the overall variability. The fifth case study deals with a time-dominant process where frequent adjustment is feasible. The amount of change needed is decided by adopting the β-correction methodology. The control plan thus arrived at is nothing but a process-adjustment ready reckoner. The β-correction methodology integrates EPC and SPC. The sixth case study deals with a worker-dominant process. It demonstrates how a "poka-yoke" based on feedback control results in elimination of rejection altogether. The seventh case study deals with the selection of raw material. Because the item is a proprietary one, it demonstrates how a long-term relationship with the supplier turns out to be an effective control mechanism. The process needs adjustment as the source of material changes. The last case study pertains to the service industry, where clarity among various functions is of crucial importance.
A RACI matrix in conjunction with the pertinent process flowchart helps to bring clarity about the associated responsibilities and thereby improve process performance. Based on this treatise, one can conceive that the right assessment of the dominance pattern of a process is a prerequisite for exercising an appropriate control plan. Subsequent to adequately understanding the dominance pattern of a process, the specifics of the related control plan or procedure are to be established by conducting an appropriate study for implementing the plan or procedure on a regular basis. The statistical or other tools to be used for conducting such studies for processes having
Sr. no | Dominance | Tool | Comment
1 | Information | Mistake-proofing or poka-yoke | Ensure adequate information flow
2 | Method | Standardization | Process parameters need to be standardized; prioritization of parameters is essential
3 | Time | Pre-control (modified), sloping control chart, beta correction technique | Wear-out pattern needs to be estimated
4 | Material | Standardization | Material selection to be carried out through an appropriate study
5 | Worker | Poka-yoke | Process should be studied before deciding the type of poka-yoke
6 | Method (service) | Standardization and RACI matrix | For service operations
different dominance patterns are provided in Table IX for ready reference. The know-how as to how these tools are to be used for conducting such studies is demonstrated through the case examples pertaining to manufacturing industries of different process dominances as well as the service industry. Studies need to be conducted for evolving the control plan or procedure for service processes characterized by the following dominances or their appropriate combinations in a specific situation:
• time-dominant;
• worker-dominant;
• information-dominant; and
• method-dominant.
It is possible that the pattern of dominance in a manufacturing process is not very conspicuous with respect to a single attribute because it may be characterized by a combination of various process dominances. The authors feel that there exists immense scope for further research in this regard to adequately deal with these complexities.

References

Antony, J., Escamilla, J.L. and Caine, P. (2003), "Lean sigma", Manufacturing Engineer, Vol. 82 No. 2, pp. 40-42.

Arcidiacono, G., Calabrese, C. and Yang, K. (2012), Leading Processes to Lead Companies: Lean Six Sigma: Kaizen Leader and Green Belt Handbook, Springer-Verlag Italia.

Arnheiter, E.D. and Maleyeff, J. (2005), "The integration of lean management and Six Sigma", The TQM Magazine, Vol. 17 No. 1, pp. 5-18.

Baxter, P. and Jack, S. (2008), "Qualitative case study methodology: study design and implementation for novice researchers", The Qualitative Report, Vol. 13 No. 4, pp. 544-559, available at: www.nova.edu/ssss/QR/QR13-4/baxter.pdf

Castillo, E.D. (2002), Statistical Process Adjustment for Quality Control, Wiley, New York, NY.
Table IX. Suggested control system
Firka, D. (2010), "Six Sigma: an evolutionary analysis through case studies", The TQM Journal, Vol. 22 No. 4, pp. 423-434.

Goh, T.N. and Xie, M. (2003), "Statistical control of a Six Sigma process", Quality Engineering, Vol. 15 No. 4, pp. 587-592.

Hancock, D.R. and Algozzine, B. (2006), Doing Case Study Research: A Practical Guide for Beginning Researchers, Teachers College Press, New York, NY.

Jacka, J.M. and Keller, P.J. (2009), Business Process Mapping: Improving Customer Satisfaction, Wiley, New York, NY.

Juran, J.M. and Godfrey, A.B. (1999), Juran's Quality Handbook, 5th ed., McGraw-Hill, New York, NY.

Ledolter, J. and Swersey, A. (1997), "An evaluation of pre-control", Journal of Quality Technology, Vol. 29 No. 2, pp. 163-171.

Lee, T.W. (1999), Using Qualitative Methods in Organizational Research, Sage, Thousand Oaks, CA.

Mandel, B.J. (1969), "The regression control chart", Journal of Quality Technology, Vol. 1 No. 1, pp. 1-9.

Merriam, S.B. (2001), Qualitative Research and Case Study Applications in Education, Jossey-Bass, San Francisco, CA.

Montgomery, D.C. (2007), Introduction to Statistical Quality Control, John Wiley & Sons, New York, NY.

Montgomery, D.C. (2012), Design and Analysis of Experiments, Wiley India, Delhi.

Quesenberry, C.P. (1988), "An SPC approach to compensating a tool-wear process", Journal of Quality Technology, Vol. 20 No. 4, pp. 220-229.

Rehman, H.U., Asif, M., Saeed, M.A., Akbar, M.A. and Awan, M.U. (2012), "Application of Six Sigma at cell site construction: a case study", Asian Journal on Quality, Vol. 13 No. 3, pp. 212-233.

Saravanan, S., Mahadevan, M., Suratkar, P. and Gijo, E.V. (2012), "Efficiency improvement on the multicrystalline silicon wafer through Six Sigma methodology", International Journal of Sustainable Energy, Vol. 31 No. 3, pp. 143-153.

Sarkar, A. and Pal, S. (1997), "Process control and evaluation in the presence of systematic assignable cause", Quality Engineering, Vol. 10 No. 2, pp. 383-388.

Snee, R.D. (2010), "Lean Six Sigma – getting better all the time", International Journal of Lean Six Sigma, Vol. 1 No. 1, pp. 9-29.

Steiner, S.H., MacKay, R.J. and Ramberg, J.S. (2008), "An overview of the Shainin System™ for quality improvement", Quality Engineering, Vol. 20 No. 1, pp. 6-19.

Taguchi, G. (1988), Systems of Experimental Design: Engineering Methods to Optimize Quality and Minimize Cost, Quality Resources, White Plains, NJ.

Traver, R.W. (1985), "Pre-control: a good alternative to X-bar R charts", Quality Progress, Vol. 18 No. 9, pp. 11-14.

Voss, C., Tsikriktsis, N. and Frohlich, M. (2002), "Case research in operations management", International Journal of Operations & Production Management, Vol. 22 No. 2, pp. 195-219.

Woodside, A.G. (2010), Case Study Research: Theory, Methods, Practice, Emerald Group Publishing, Bingley.

Yin, R.K. (2003), Case Study Research: Design and Methods, 3rd ed., Sage, Thousand Oaks, CA.
Appendix
Figure A1. Flow chart for medical test
About the authors
Ashok Sarkar is Technical Officer at the Indian Statistical Institute, Mumbai. He has rich experience in the implementation of quality initiatives, e.g. Six Sigma, LSS, SPC and design of experiments, in various organizations over the past two decades. His areas of research interest are issues pertaining to the implementation of operations management across organizations. Ashok Sarkar is the corresponding author and can be contacted at: [email protected]

Arup Ranjan Mukhopadhyay is Sr. Technical Officer at the Indian Statistical Institute, Kolkata, India. He earned a PhD degree in Six Sigma from Jadavpur University, Kolkata. He has published many articles and papers in different national and international journals of repute. His job involves teaching, consultancy, training and applied research.

Sadhan Kumar Ghosh is Professor at the Mechanical Engineering Department, Jadavpur University. He earned a PhD in Engineering from Jadavpur University. He also acts as the coordinator for the activities of the center for quality management system at Jadavpur University.