A Markov Chain Application to Manpower Supply Planning
Author(s): Stelios H. Zanakis and Martin W. Maret
Source: The Journal of the Operational Research Society, Vol. 31, No. 12 (Dec., 1980), pp. 1095-1102
Published by: Palgrave Macmillan Journals on behalf of the Operational Research Society
Stable URL: http://www.jstor.org/stable/2581821

J. Opl Res. Soc. Vol. 31, pp. 1095 to 1102. Pergamon Press Ltd 1980. Printed in Great Britain. © Operational Research Society Ltd. 0160-5682/80/1201-1095 $02.00/0

A MARKOV CHAIN APPLICATION TO MANPOWER SUPPLY PLANNING

STELIOS H. ZANAKIS* and MARTIN W. MARET

Division of Management, School of Business and Organizational Sciences, Florida International University, Miami, FL 33199, U.S.A. and Industrial Engineering & Systems Analysis Program, West Virginia College of Graduate Studies, Institute, WV 25112, U.S.A.

Markov chains have been suggested since the early sixties for modeling manpower supply, mostly in military, governmental and public agencies. An actual industrial application is presented. This will be of interest to managers as well as practitioners and students of operational research. The approach, use and validation of the Markov model are discussed, along with some limitations and extensions.

INTRODUCTION

Interest in manpower planning methods has increased during the last decade. This is evident by the growing number of journal articles, a dozen books and proceedings of national and international conferences on manpower planning. This growth has largely been supported by developments in management science/operations research and computers.

The purpose of manpower planning is to best match future manpower needs (demand) and resources (supply) in the light of multiple objectives and conditions, e.g. economic trends, production/sales, people skills inventory, government regulations, as well as organization history and policies regarding personnel hiring, training, promotion, firing and retirement. Successes1 and failures2 have been reported for forecasting personnel demand using (i) regression models based on anticipated workload, sales or economic indicators, and (ii) Delphi procedures.3 Personnel supply in an organization can be forecasted using Markov chains to model the flow of people through various "states" (usually skill or position levels,1,4,5 minority status6 and sometimes years of service5,7,8). Other supply models noted in the literature are discussed by Hopes.2 The majority of Markovian manpower applications comes from military, governmental and public agencies rather than businesses. Although statistical tests for validating Markov chain models have been known for more than 20 years,9 they have been used only in a few applications.6,7,10-12

This paper presents a Markov chain application to model the manpower supply of over 1000 engineers in a department of a large chemical company. The approach, use and validity of the model are discussed along with some extensions.

MODEL DEVELOPMENT

To model the flow of personnel through an organization as a Markov chain, the analyst must define the stage interval and states, collect data, estimate the transition probability matrix (TPM) and validate the model.

First, the analyst must decide what stage interval (time period) to use, e.g. weekly, monthly, yearly, etc. A small time interval (larger sample over a certain horizon) will probably yield more accurate estimates of the TPM, but stationarity (constant TPM) will be less likely. One model may turn out to be quite good, while another quite poor. In general, the choice of stage interval should be guided by the planning horizon and objectives of the study; and if a cycle or seasonal variation exists, it should not span more than one time period (stage). Here hiring quotas, budgets and long-range plans for the organization are prepared on a yearly basis. For these reasons yearly transitions, for which personnel data were available, were selected.

* This work was conducted while the first author was at the West Virginia College of Graduate Studies.

The second step in Markov chain modelling is the selection of exhaustive and non-overlapping states to which an employee can be classified. The number of states should be neither too small (confounding effect) nor too big (causing small samples that produce poor estimates of the TPM). In a graded manpower system, the states can be physically defined by skill or position levels (non-absorbing states) as well as by gains and losses (absorbing states). Other factors may be added, such as minority status for equal opportunity employment studies6 or length of service (tenure) in a grade, to refine the model.7 Research has shown that manpower wastage decreases with increasing tenure, age, skill and responsibility, and it is greater among women than men.13 Such a level of detail was judged unnecessary for this model. Wastage by tenure often follows a two-parameter lognormal probability distribution.13,14

The states used are defined in Table 1, which shows the TPM calculated from 1973-76 annual transition data. There are 8 absorbing states (by type of loss) and 8 non-absorbing states (4 by position level and 4 by type of gain). Instead of the usual practice of treating gains as a separate input vector, they were made part of the TPM to provide a simple and unified picture of all transitions.

From personnel history data, the annual transitions by state were tabulated manually. Some difficulties in data collection arose due to information limitations. To avoid such difficulties and possible errors, a computerized personnel data record system was recommended and is now used by the organization.

Having counted the total (4 year) number of individual transitions, one can easily estimate the TPM by converting entries to row proportions (given in Table 1). It is known9 that this calculation yields a maximum likelihood estimate of the true TPM if the process is stationary, i.e. the TPM is constant with respect to the stage variable (here time). Procedures have also been suggested for determining the necessary sample size to estimate the TPM of a Markov chain within a desired accuracy.15 If detailed flow data are not available, numerical procedures can be used to obtain maximum likelihood or least squares estimates of the TPM from aggregate data (number of people in each state at each time period).12,16

Table 1. Employee transition probability matrix

[Table 1 body: rows are FROM states AA, A, B, C, H, RH, TI, NE; columns are TO states Q, R65, RE, CA, TO, NE, DD, AL, AA, A, B, C; entries are the transition proportions estimated from the 1973-76 data.]

N.B. Columns H, RH, TI, NE have been omitted because transitions to them are impossible.

Symbols used:
? Transition impossible.

Type of employee by position in organizational hierarchy:
AA Upper level managers and senior technologists
A Managers, group leaders, or technology equivalent
B Professional levels (skilled)
C Professional levels (less skilled and new employees)

Types of gains:
H Hires
RH Re-hires
TI Transfers-in (from other departments)
NE From non-exempt payroll

Types of losses:
Q Quit
R65 Retirement at age 65
RE Early retirement
CA Company action
TO Transfer-out
NE To non-exempt payroll
DD Death or disability
AL Absent with leave
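For illustration, the row-proportion calculation described above can be sketched in a few lines of code. The states and pooled counts below are hypothetical, not the study's data; this is only a sketch of the estimation step (Python with NumPy):

```python
import numpy as np

# Hypothetical pooled transition counts (summed over the observation period).
# Rows = FROM states, columns = TO states, in the same order.
states = ["A", "B", "C", "Quit"]        # illustrative labels, not the study's states
counts = np.array([
    [80.0,   5.0,   0.0,   5.0],   # from A
    [10.0, 120.0,  10.0,  10.0],   # from B
    [ 0.0,  20.0, 150.0,  30.0],   # from C
    [ 0.0,   0.0,   0.0,   0.0],   # Quit is absorbing: no observed departures
])

tpm = counts.copy()
for i, total in enumerate(counts.sum(axis=1)):
    if total > 0:
        tpm[i] = counts[i] / total   # row proportions = ML estimate of p_ij under stationarity
    else:
        tpm[i, i] = 1.0              # an absorbing state keeps itself

print(np.round(tpm, 4))
```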

MODEL TESTING AND VALIDATION

Statistical procedures for testing Markov chain assumptions include chi-square tests for verifying stationarity (constant TPM over time) and first order process (one period memory) in Markov chains.9 The chi-square test was applied here to test the stationarity of the extended (non-absorbing and absorbing) TPM and its individual elements. Let:

s: number of non-absorbing states;
m: total number of absorbing + non-absorbing states;
T: number of stages (time periods) observed;
n_{ij}(t): number of persons that moved from state i to j during period t,

so that

    n_i(t) = \sum_{j=1}^{m} n_{ij}(t)

gives the total number of people available in state i at the beginning of period t;

    \hat{p}_{ij}(t) = n_{ij}(t) / n_i(t)

gives the proportion of people that moved from state i to j during period t; and

    \hat{p}_{ij} = \sum_{t=1}^{T} n_{ij}(t) \Big/ \sum_{t=1}^{T} n_i(t)

is the hypothesized constant (over t) transition probability from state i to j at any single period (stage) t. This TPM is shown in Table 1.

Then, at the \alpha significance level:

(1) The (i, j) transition probability is constant over time if

    \sum_{t=1}^{T} n_i(t) \, [\hat{p}_{ij}(t) - \hat{p}_{ij}]^2 / \hat{p}_{ij}

does not exceed the critical value of \chi^2 with T - 1 degrees of freedom.
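A minimal sketch of test (1) for a single (i, j) element, assuming hypothetical yearly counts rather than the study's data (Python, with SciPy supplying the chi-square critical value):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical yearly data for one (i, j) cell over T = 4 periods:
# n_ij_t[t] = moves from i to j in year t; n_i_t[t] = people in i at the start of year t.
n_ij_t = np.array([12.0, 15.0, 9.0, 14.0])
n_i_t = np.array([200.0, 210.0, 190.0, 205.0])

T = len(n_ij_t)
p_ij_t = n_ij_t / n_i_t                # yearly estimates p_ij(t)
p_ij = n_ij_t.sum() / n_i_t.sum()      # pooled (hypothesized constant) p_ij

# Test statistic of condition (1): sum_t n_i(t) [p_ij(t) - p_ij]^2 / p_ij
stat = np.sum(n_i_t * (p_ij_t - p_ij) ** 2 / p_ij)
critical = chi2.ppf(0.95, df=T - 1)    # alpha = 0.05

print(f"statistic = {stat:.3f}, critical value = {critical:.3f}")
print("stationary" if stat <= critical else "non-stationary at alpha = 0.05")
```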



Table 3. Long-range predictions by level

* This is the row sum of elements in N. Analytical results for length of service distributions by state under a constant recruitment policy have recently been obtained by Glen.20 R is the left 8 x 8 part of the TPM (transitions to absorbing states). Q is the right 8 x 8 part of the TPM (transitions to non-absorbing states).

For the "average employee", the probability of remaining in the same grade increases with seniority, from 0.7988 for level C to 0.8928 for level AA.

The estimated TPM has 6 zero entries, because during the 4-year observation period they had no transitions. Two of them are known to be almost zero (AA to CA, AA to AL). The remaining 4 probabilities (AA to R65, AA to RE, A to DD and B to C) are small but not zero. More historical data and/or management input should be used to obtain better estimates for these transitions. For example, the retirement probabilities of AA's may be modified based on the age profiles of AA's and the dates they are eligible for regular or early retirement.

The major use of a Markov chain model is to predict future manpower distributions (by state). Different input (gain) vectors V can be used to evaluate alternative entrance policies. Long-term predictions are obtained by premultiplying the TPM by the input vector V = (AA, A, B, C; H, RH, TI, NE); the position counts in the input vector V are those of the end of the previous period, and the new entrants are policy dependent. For example, Table 2 shows such Markovian predictions aiming at a 3% annual growth rate of total population from 1976 to 1981.

A trial-and-error approach similar to that of Harden and Tcheng17 may be used to obtain the entry quotas if the end-of-year total population target is known. Alternatively, one may determine the exact number of entrants needed to achieve a desirable end-of-year distribution of people by state. However, this will require the inversion of a matrix which is often near singular, thus causing computational difficulties. Moreover, in reality there are additional goals and restrictions. Therefore a Markovian Goal Programming approach is suggested, which is discussed elsewhere.18

Long-range projections, shown in Table 3, are obtained by using established theory.19
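The established theory referred to is that of absorbing Markov chains: with Q the block of transitions among the non-absorbing grades and R the block of transitions into the losses, the fundamental matrix N = (I - Q)^{-1} gives the expected number of years spent in each grade, its row sums give the expected total service before leaving, and NR gives the probabilities of eventually leaving by each type of loss. A minimal sketch, using small hypothetical Q and R blocks rather than the paper's 8 x 8 partitions (Python/NumPy):

```python
import numpy as np

# Hypothetical blocks of a TPM partitioned as [R | Q]:
# Q = transitions among non-absorbing grades, R = transitions into losses.
grades = ["A", "B", "C"]
losses = ["Quit", "Retire"]
Q = np.array([[0.88, 0.02, 0.00],
              [0.07, 0.85, 0.01],
              [0.00, 0.10, 0.80]])
R = np.array([[0.06, 0.04],
              [0.05, 0.02],
              [0.09, 0.01]])

I = np.eye(len(grades))
N = np.linalg.inv(I - Q)            # fundamental matrix: expected years in each grade
expected_service = N.sum(axis=1)    # row sums: expected years before absorption
absorption_prob = N @ R             # probability of eventually leaving by each loss type

for g, yrs, probs in zip(grades, expected_service, absorption_prob):
    print(f"from {g}: {yrs:.2f} expected years; "
          + ", ".join(f"P({l}) = {p:.3f}" for l, p in zip(losses, probs)))

# One-step projection: premultiply the TPM blocks by the current counts.
current = np.array([120.0, 400.0, 600.0])   # people in grades A, B, C now (hypothetical)
next_year_grades = current @ Q               # survivors by grade next year (before new entrants)
next_year_losses = current @ R               # expected losses by type next year
print(np.round(next_year_grades, 1), np.round(next_year_losses, 1))
```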



For example, a new hire is expected to stay with the company for 12.52 years before leaving, most likely transferred out or quit (probabilities 0.428 or 0.262, respectively).

Markov chain results must be interpreted with caution because the estimated TPM will never be perfect (some non-stationary individual entries and unreasonable zero probabilities, as in this case). Even with such imperfections these results could be judiciously used by management. In practice, management would usually want to modify the transition probabilities that they control (e.g. promotions, entry transitions, early retirements, transfers out, etc.). Markov chain analysis could be used to provide insights into "what if" type questions.

MODEL REFINEMENT AND EXTENSIONS

In order to improve the performance of the previous Markov model, several modifications may be attempted. First, to satisfy better the assumptions of TPM stationarity and independence of states, one may, at the expense of useful detail, combine states in question such as: quits and company action; hires and rehires; regular and early retirement (which are often negatively correlated). The resultant 7 x 13 TPM also passed, at α = 0.05, the chi-square test for stationarity; now only two individual elements proved to be nonstationary, namely from B to Q + CA and from B to TO. The former is investigated below. The latter is within management's control and consequently can be manipulated accordingly. It is interesting to note that the new TPM produced essentially the same results as those given in Tables 2 and 3. Depending on the objectives of the analysis, the states could instead be broken down by college degree vs non-degree, age, length of service or minority status, but this was deemed unnecessary here.

Another model extension is to make the transition probabilities time dependent. Since the total (overall) TPM is stationary, there is no need to consider non-stationary TPM models, like Dent's,21 whose complexity would make them less appealing. As noted earlier, the reason the quits are non-stationary is the following: as the department grew there were less quits and more promotions; conversely, as the department population declined there were more quits and fewer promotions. Simple linear regression was employed to take advantage of this observation. Quit (from C and B) and promotion (C to B, B to A) probabilities were modelled as linear functions of growth. Population growth per period, with N(t) the total population, was expressed as

(i) Total gain or loss \Delta N(t) = N(t) - N(t-1);
(ii) Accelerated gain or loss \Delta^2 N(t) = \Delta N(t) - \Delta N(t-1); and
(iii) Cumulative acceleration G_t = \sum_{\tau=1}^{t} \Delta^2 N(\tau).

Table 4. Transition probabilities as a function of population growth
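As an illustration of how these growth measures feed a regression of the form P_t = a + b G_t, here is a short sketch with a hypothetical population series and quit-probability series, not the study's data (Python/NumPy):

```python
import numpy as np

# Hypothetical series: total department population N(t) and the observed
# yearly quit probability from one grade.
N_t = np.array([950.0, 1000.0, 1040.0, 1060.0, 1055.0, 1035.0])
quit_prob = np.array([0.045, 0.040, 0.036, 0.038, 0.047, 0.055])

dN = np.diff(N_t, prepend=N_t[0])    # (i)   total gain or loss, first entry set to 0 for alignment
d2N = np.diff(dN, prepend=dN[0])     # (ii)  accelerated gain or loss
G = np.cumsum(d2N)                   # (iii) cumulative acceleration G_t

# Least-squares fit of P_t = a + b * G_t (cf. Table 4).
b, a = np.polyfit(G, quit_prob, deg=1)
predicted = a + b * G
print(f"a = {a:.4f}, b = {b:.6f}")
print(np.round(predicted, 4))
```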

The last independent variable produced the best regression results, which are shown in Table 4. An attempt to introduce a one period time lag (P_t = a + b G_{t-1}) produced poor results. These findings agree with those of Vassiliou11 for manpower wastage in two British engineering firms. Noteworthy are the excellent promotion models and the improved predictions of quits for the last two troublesome years. Better regression models may be obtained as more historical data are accumulated. The regression models in Table 4 can be used to modify the corresponding probabilities in the TPM for a given goal in total population. Markov analysis can then evaluate the effect of different anticipated total employment on next year's profile counts.

CONCLUSIONS

Historical data of transitions to and from position levels, losses and gains in an organization can be used to develop a Markov chain manpower model. Longer periods of observation are not always advantageous; they yield better estimates of the TPM but, because of policy or organizational changes, increase the risk of nonstationarity, which leads to poor predictions. Regression models may be used to obtain better estimates of promotion and wastage probabilities in organizations experiencing growth or declining population trends. In any event, the transition probabilities should be tempered with management judgement to obtain reasonable results. Sensitivity analysis could be useful in that regard.22

The Markov chain model can provide valuable insight into predicting future organization manpower losses and position level distribution for different hiring quotas and total population growth targets. In addition, assuming that history will repeat itself, long-range projections can be obtained that may reveal a need to modify current personnel policies. The common problem of determining hiring quotas to satisfy different conflicting socio-econo-organizational goals that exist in reality is examined elsewhere.18

REFERENCES
1. K. M. Rowland and M. G. Sovereign (1969) Markov-chain analysis of internal manpower supply. Ind. Relat. 9, 88-99.
2. R. F. A. Hopes (1973) Some statistical aspects of manpower planning in the civil service. Omega 1, 165-180.
3. G. T. Milkovich, A. J. Annoni and T. A. Mahoney (1972) The use of the Delphi procedures in manpower forecasting. Mgmt Sci. 19, 381-388.
4. G. L. Nielson and A. R. Young (1973) Manpower planning: a Markov chain application. Publ. Pers. Mgmt, 133-143.
5. J. W. Merck (1970) A Markovian model for projecting movements of personnel through a system. In Personnel Management: A Management Science Approach (P. Greenlaw and R. Smith, Eds). International Textbook, Scranton, PA, U.S.A.
6. N. C. Churchill and J. K. Shank (1975) Accounting for affirmative action programs: a stochastic flow approach. Actg Rev. 50, 643-656.
7. G. W. Leeson (1979) Wastage in a hierarchical manpower system. J. Opl Res. Soc. 30, 341-348.
8. K. M. Uyar (1972) Markov chain forecasts of employee replacement needs. Ind. Relat. 11, 96-106.
9. T. W. Anderson and L. A. Goodman (1957) Statistical inference about Markov chains. Ann. Math. Stat. 28, 89-110.
10. P. Sales (1971) The validity of the Markov chain model for a class of the civil service. Statistician 20, 85-110.
11. P. C. G. Vassiliou (1976) A Markov chain model for wastage in manpower systems. Opl Res. Q. 27, 57-70.
12. S. H. Zanakis (1975) Parameter estimation via pattern search with transformations. In Contemporary Perspectives in Decision Sciences (T. F. Anthony and A. B. Carroll, Eds). SE AIDS, Columbia, SC, U.S.A.
13. A. Young (1971) Demographic and ecological models for manpower planning. In Aspects of Manpower Planning (D. J. Bartholomew and B. R. Morris, Eds). E.U.P., London.
14. D. J. Bartholomew (1973) A model of completed length of service. Omega 1, 235-240.


15. S. Pierson (1975) Determining sample sizes for estimating transition probability matrices of finite Markov chains. Paper presented at the ORSA/TIMS Joint National Meeting, Chicago, U.S.A.
16. W. Dent and R. Ballintine (1971) A review of the estimation of transition probabilities in Markov chains. Aust. J. Ag. Econ. 15, 69-81.
17. W. R. Harden and M. T. Tcheng (1971) Projection of enrollment distribution with enrollment ceilings by Markov processes. Socio-Econ. Plann. Sci. 5, 467-473.
18. S. H. Zanakis and M. W. Maret (1979) A Markovian goal programming approach to aggregate manpower planning. J. Opl Res. Soc. To be published in Vol. 32, No. 1.
19. J. G. Kemeny et al. (1959) Finite Mathematical Structures. Prentice-Hall, Englewood Cliffs, NJ, U.S.A.
20. J. J. Glen (1977) Length of service distributions in Markov manpower models. Opl Res. Q. 28, 975-982.
21. W. Dent (1973) The estimation of nonstationary transition probabilities. Mgmt Sci. 20, 308-312.
22. G. Worm (1972) Sensitivity analysis for absorbing Markov chains. Paper presented at the 8th Annual TIMS SE Conference, Knoxville, TN, U.S.A.
