Journal of Applied Intelligence 4, 31-52 (1994) © 1994 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

Fuzzy Systems and Neural Networks in Software Engineering Project Management

SATISH KUMAR, Department of Physics and Computer Science, Faculty of Science, Dayalbagh Educational Institute, Dayalbagh, Agra UP 282005, India

B. ANANDA KRISHNA M. Tech. student, Regional Engineering College, Surathkal, Karnataka, India

PREM S. SATSANGI, Professor, Department of Electrical Engineering, Indian Institute of Technology, Hauz Khas, New Delhi 110016, India. Received October 20, 1992; revised May 3, 1993

Abstract. To make reasonable estimates of resources, costs, and schedules, software project managers need to be provided with models that furnish the essential framework for software project planning and control by supplying important "management numbers" concerning the state and parameters of the project that are critical for resource allocation. Understanding that software development is not a "mechanistic" process brings about the realization that parameters that characterize the development of software possess an inherent "fuzziness," thus providing the rationale for the development of realistic models based on fuzzy set or neural theories. Fuzzy and neural approaches offer a key advantage over traditional modeling approaches in that they are model-free estimators. This article opens up the possibility of applying fuzzy estimation theory and neural networks for the purpose of software engineering project management and control, using Putnam's manpower buildup index (MBI) estimation model as an example. It is shown that the MBI selection process can be based upon 64 different fuzzy associative memory (FAM) rules. The same rules are used to generate 64 training patterns for a feedforward neural network. The fuzzy associative memory and neural network approaches are compared qualitatively through estimation surfaces. The FAM estimation surfaces are stepped, whereas those from the neural system are smooth. Also, the FAM system sets up much faster than the neural system. FAM rules obtained from logical antecedent-consequent pairs are maintained distinct, giving the user the ability to determine which FAM rule contributed how much membership activation to a "concluded" output. Key words: Fuzzy associative memory, feedforward neural network, software engineering, resource estimation, manpower buildup index.

1. Introduction

It is now well known that large-scale system projects, once started, get surreptitiously out of hand, and it is not unusual for the project to double its cost by delivery time [1]. Schedule slippages and escalating costs, coupled with the ever increasing demand for software, its poor reliability, and the eventual inability of managers to meet project goals or schedules are factors that

have led to the software crisis, where projects slip imperceptibly but inexorably out of control [2, 3]. For the manager, managing on quicksand has become a way of life; for the customer, getting progressively accustomed to an ever increasing budget is commonplace. It is therefore important to devise techniques to help the project manager keep the project under control. One of the conditions to make this possible is for the software development plan to be based on


realistic estimates. The software estimating process requires the basic understanding that software development is not a "mechanistic" process, in that the tasks are not all visible or measurable, as in the case of a deterministic quantity. Software development comprises a large number of tasks with strongly coupled interactions of considerable complexity, each possessing a degree of uncertainty. The relationships between controlling variables are best modeled empirically due to the inherent "fuzziness" in the estimation of these variables. Thus, these tasks are not all capable of being objectively measured, and at best they are assessed through group consensus of experts in the area. Software cost and development time prediction will remain no better than a probabilistic estimating method until we accept the inherent "fuzziness" of variables and develop more realistic models for estimation based on fuzzy set theory [4-6]. A very strong case for effective modeling of the dynamics inherent in the system development methodology, and for understanding factors that management can control and factors that are limited by the process itself, is thus made, and one can resort to powerful modeling methodologies such as system dynamics [7] for this purpose, with the objective of providing the manager with a framework to make reasonable estimates of resource, cost, and schedule [8-10]. Apart from this, sound measurement techniques must be devised and substantiated by available project data. Such estimation models not only do the job of estimating the schedule, effort, cost, etc., but also provide an essential framework for software project planning and control. They provide important "management numbers" concerning the state and parameters of the project, which are critical for resource allocation and which help progress towards a more organized software product. Software development estimation models are "fuzzy information systems" providing information to a decision making unit.

2. Software Measurement Techniques

Measurements on software can be broadly classified into two major categories: direct measures and indirect measures. Direct measures such as

cost, effort, source lines of code (SLOC), speed, memory utilization, and size are made objectively. Indirect measures are more subjective or fuzzy in nature in the sense that they are based on an individual's estimation or on compromised group consensus. Typical indirect measures would include functionality, quality, efficiency, and reliability. Other broad classifications of software metrics have been proposed in the literature [11]. For example, one could classify metrics into those pertaining to the technicality, productivity, or quality of software. Alternatively, one could have size-oriented metrics based on SLOC estimates, and metrics that bypass SLOC. Size-oriented metrics are controversial and not universally accepted as the best way to measure software development productivity [12]. Some of the more popular size-oriented models are macroscopic in nature: they deal with estimation techniques for software project resources such as cost and effort. These models are empirical and are dependent to a large extent on historical data. Empirical resource models could be static single-variable, static multi-variable, dynamic multi-variable, or theoretical [13]. More notable examples of empirical models are the Constructive Cost Model (COCOMO) [14] for project cost estimation, and the Putnam Estimation Model [15, 16] for project effort and development time estimation. The Putnam Estimation Model is a dynamic multi-variable model that assumes a specific distribution of effort over the development life cycle. The model has been derived from manpower distributions encountered on large projects (with efforts greater than 30 person-years and sizes of 1-10,000 KLOC (kilo lines of code)). Extrapolation to smaller projects is also possible. Assuming a Rayleigh-Norden [17] manpower loading over the development life cycle, a fundamental equation called the software equation has been derived, which relates the effort, development time (time to full operational capability (FOC) of the software), and size of the project, using an overall productivity index (PI) for the organization based on historical data. Estimates for the size of the current project are typically made using Delphi polling of experts in the application domain. The software equation is used in conjunction with another equation called the manpower buildup equation, which also relates size, effort, and development time, using a manpower buildup index (MBI) that is dependent upon the manpower loading profile that the project manager has selected. Using these equations, estimates for the effort and the minimum development time can be made with a degree of accuracy dependent on input variability (of parameters such as size, PI, and MBI).

3. Fuzzy and Neural Estimation Systems

One way of realistically modeling a complex system is to allow some degree of uncertainty or "fuzziness" in its description--which entails an appropriate aggregation or summary of various entities within the system. The statements obtained from this simplified system are less precise, but their relevance to the original system is fully maintained. This principle is, in fact, precisely the basic concept of the fuzzy set, an idea that is both simple and intuitively appealing and that forms, in essence, a generalization of the classical or crisp set. Crisp sets dichotomize the individuals in some given universe of discourse into disjoint groups. An unambiguous distinction exists between the members and non-members of the class or category represented by the crisp set. However, there are a number of variables encountered in practice that cannot be objectively assessed or measured. Consider the case of software engineering metrics for the quality of a software system. The quality of a software system can be assessed by looking at metrics for maintainability, correctness, integrity, and usability. How does one quantify the "maintainability" of a software system? Consider three concepts of maintainability as represented by three crisp sets: LOW, MEDIUM, and HIGH. Could the maintainability of a software system be unambiguously assigned membership in any of these three sets? As it turns out, no objective metric exists that can deterministically quantify the maintainability of a software system and place it unambiguously as a member of any one of these three sets. However, it may be more reasonable to say that the


maintainability of a software system may be MEDIUM to a small degree, and HIGH to a larger degree. We cannot objectively estimate the maintainability of a software system, but we can indicate the degree to which we believe it is a member of any specific set. Such a metric becomes an immediate candidate for fuzzy estimation [5]. We accept inherent uncertainty in order to develop more realistic models. A fuzzy set can be defined mathematically by assigning to each possible individual in the universe of discourse a value representing its grade of membership in the fuzzy set. This grade corresponds to the degree to which that individual is similar to or compatible with the concept represented by the fuzzy set. Thus, individuals may belong to the fuzzy set to a greater or lesser degree as indicated by a larger or smaller membership grade. These membership grades are very often represented by real number values ranging in the closed interval [0,1]. Thus, a fuzzy set representing the concept of HIGH correctness of a software system might assign a degree of membership of 1 to a software system with a post-installation annual defect rate of 0 defects/KLOC, 0.8 to an annual defect rate of 20 defects/KLOC, 0.4 to an annual defect rate of 30 defects/KLOC, and 0 to an annual defect rate of over 75 defects/KLOC. These grades signify the degree to which each defect rate approximates our subjective concept of HIGH correctness, and the set itself models the semantic flexibility inherent in such a common linguistic term. Because full membership and full non-membership can still be indicated by 1 and 0, respectively, we can consider the crisp set to be a restricted case of the more general fuzzy set for which only these two grades of membership are allowed. Fuzzy sets subsume crisp sets. Fuzziness is well described by Kosko's geometrical "sets as points" view [5]. He describes the set F(2^X), the set of all fuzzy subsets of X, as a cube, and a fuzzy set as a point in that cube. The set of all fuzzy subsets equals the unit hypercube I^n = [0,1]^n. A fuzzy set is any point in the cube I^n. Vertices of the cube I^n define crisp sets. So the ordinary power set 2^X is the set of all nonfuzzy subsets of X, and equals the Boolean n-cube B^n = {0,1}^n. Fuzzy sets fill in the lattice B^n to produce the solid cube I^n.
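The defect-rate example above can be turned into a small numeric sketch. The following Python fragment is an illustration only; the linear interpolation between the quoted grades is an assumption, not something specified in the text.

    # Membership grades for the fuzzy set "HIGH correctness" at the defect rates
    # quoted in the text; intermediate rates are interpolated linearly (an assumption).
    GRADES = [(0, 1.0), (20, 0.8), (30, 0.4), (75, 0.0)]   # (defects/KLOC per year, grade)

    def high_correctness(defect_rate):
        if defect_rate >= GRADES[-1][0]:
            return 0.0
        for (x0, m0), (x1, m1) in zip(GRADES, GRADES[1:]):
            if defect_rate <= x1:
                return m0 + (m1 - m0) * (defect_rate - x0) / (x1 - x0)

    print(high_correctness(25))   # a rate between 20 and 30 gets a grade between 0.8 and 0.4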


A fuzzy system defines a mapping between cubes. So if we consider a fuzzy system S, then S is a transformation S : I^n -> I^p, where I^n contains all the fuzzy subsets of the domain space (or input universe of discourse), X = {x_1, ..., x_n}, and I^p contains all the fuzzy subsets of the range space (or output universe of discourse), Z = {z_1, ..., z_p}. In general, a fuzzy system S maps families of fuzzy sets to families of fuzzy sets. Neural systems also estimate function maps f : X -> Y from several numerical point samples (x_i, y_i). In recent years, the backpropagation supervised learning algorithm for perceptron-like feedforward neural networks has been successfully applied to the solution of classification problems by effecting such maps [18, 19]. Neural network literature abounds with both theories and applications [20-24], claiming connectionism to be the panacea for future computational problems. Such systems are different from fuzzy systems that estimate the map f : X -> Y from a few fuzzy set samples or fuzzy associations (A_i, B_i).

Fuzzy and neural approaches offer a key advantage over traditional modeling approaches: they are model-free estimators. Neural networks generate smooth continuous maps from the input space to the output space, whereas fuzzy estimators generate stepped input-output surfaces. These surfaces provide a rationale or basis for comparison of the performance of both types of estimators. The neural network approach is good when there are only numerical data, whereas the fuzzy approach is excellent when the data are in the form of accurate structured knowledge (such as antecedent-consequent pairs). One can also generate hybrid fuzzy neural systems or adaptive fuzzy systems [25], where neural networks analyze numeric data to generate fuzzy associative memory rules, which can then be used to generate associations between fuzzy sets in the input space and fuzzy sets in the output space. The present article opens up the possibility of applying fuzzy estimation theory and neural networks for the purpose of software engineering project management and control, using the Putnam Estimation Model as an example. Sections 4, 5, and 6 discuss preliminary ideas of manpower distribution modeling, organizational productivity modeling, and selection of a planning point. Sections 7 and 8 discuss the fuzzy associative memory and neural network models. Section 9 compares their performance, and section 10 concludes the article.

4. Manpower Distribution Modeling

The effort distribution for a large-scale project generally takes on a classic shape first described by Lord Rayleigh, as shown in figure 1a. Norden later studied these curves and substantiated them with empirical data collected from a number of system development projects [17], as a result of which they are often referred to as Rayleigh-Norden curves.

Fig. 1. Manpower versus time plot (manpower in MY/YR, time in YR, with delivery time td) and cumulative effort versus time plot. The curves described by equation (1), originally applied by Lord Rayleigh to describe other scientific phenomena, have been found to fit the manpower distribution pattern of software development reasonably well, at least within the "noise" of data points.

The manpower distributions of large software projects have generally been found to follow a Rayleigh-Norden profile upon which is superimposed sufficient "noise" (as shown in figure 1b), the latter being present within the system for a variety of reasons such as "inadequate or imprecise specifications, changes to requirements, imperfect communication within the human chain, and lack of understanding by the management of how the system behaves" [15]. Based on an extensive historical software project database comprising projects ranging from 30 MY to 1000 MY, Putnam has noticed that in large-scale software development the manpower distribution reaches its maximum very near the delivery time t_d. Before this time, effort is spent on specification, design, coding, testing, and qualification, while after this time the manpower costs correspond to maintenance, modifications, and other on-site work. On the basis of empirical data and the Rayleigh-Norden model, it has been well established that

m(t) = (K / t_d^2) t e^{-t^2 / (2 t_d^2)}    [1]

where m(t) is the manpower in man-years per year (MY/YR), K is the total generic life-cycle effort in man-years (MY), and t is the project duration in years (YR). During the course of a software project, the cumulative cost, C, (measured in terms of the effort expended in man-years and found by integrating equation (1)), increases from zero, following an S-shaped curve of the form

C(t) = K (1 - e^{-t^2 / (2 t_d^2)})    [2]

From figure 1c, it can be seen that the manpower cost at delivery time is 39% of the total cost expended on the project, with the remaining cost being directed towards on-site maintenance and modification. At t_d years, the cumulative manpower growth rate reaches a maximum called the peak manning m_0:

m_0 = K / (t_d √e)    [3]

The slope of the manpower distribution equation at time t = 0 is defined as the difficulty, D, of the project:

D = K / t_d^2    [4]

which is measured in persons per year. The effort and time derivatives of the difficulty, D, indicate that a given software development is essentially time sensitive [26]. Putnam's observations [16, 26] suggest that if the project scale is increased, the development time also increases such that the quantity D_0 = K/t_d^3 clusters around six different values: 8, 15, 27, 55, 89, and 233. D_0 is LOW for entirely new software with many interfaces and interactions with other systems, MEDIUM for new stand-alone systems, and HIGH if the software is rebuilt from existing reusable code. D_0 is referred to as the manpower buildup parameter and is proportional to the time derivative of the difficulty, D, of the project. We thus have the manpower buildup parameter (MB parameter) defined as

MB parameter = D_0 = K / t_d^3    [5]

where the effort K is in MY and t_d is in years. The manpower buildup parameter varies slightly from one organization to another depending on the average skill of the analysts, programmers, and management involved. D_0 also has a strong influence on the shape of the manpower distribution: the larger its value, the steeper the curve and the manpower buildup. Since D_0 is empirically defined in accordance with the nature of the software system under development, it seems reasonable during the estimating process to use its maximum value, which provides the minimum development time, t_d(min). The manpower buildup parameter is thus a key management factor. It is a measure of the staffing style of the organization in question and is dependent on a number of factors, which include

• the task concurrency of a software project;
• the schedule pressures for delivery; and
• the complexity of the application in question.
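As a worked illustration of equations (1)-(5), the following minimal Python sketch computes the manpower loading, cumulative effort, peak manning, difficulty, and manpower buildup parameter for a hypothetical project; the values of K and t_d are arbitrary and are not taken from the article.

    import math

    def manpower(t, K, td):
        # Rayleigh-Norden manpower loading m(t) in MY/YR, equation (1).
        return (K / td**2) * t * math.exp(-t**2 / (2 * td**2))

    def cumulative_effort(t, K, td):
        # Cumulative effort C(t) in MY, equation (2).
        return K * (1 - math.exp(-t**2 / (2 * td**2)))

    K, td = 200.0, 2.5          # hypothetical life-cycle effort (MY) and delivery time (YR)

    peak_manning = K / (td * math.sqrt(math.e))   # m_0, equation (3)
    difficulty   = K / td**2                      # D, equation (4), in persons per year
    mb_parameter = K / td**3                      # D_0, equation (5)

    print(f"effort at delivery = {cumulative_effort(td, K, td) / K:.2%} of K")
    print(f"peak manning m_0   = {peak_manning:.1f} MY/YR")
    print(f"difficulty D       = {difficulty:.1f} persons/YR")
    print(f"MB parameter D_0   = {mb_parameter:.1f}")

Evaluating the cumulative effort at t = t_d reproduces the 39% figure quoted above.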


Table 1. MB parameter translation to MBI.

  MB parameter   MBI
  7.3            1
  14.7           2
  26.9           3
  55.0           4
  89.0           5
  233.0          6

The MB parameter can be translated into a data-determined scaled "management number" called the manpower buildup index (MBI) using the translation of table 1. Various MBI levels indicating different staffing profiles are shown in figure 2. Level 1 is a low, slow staff buildup, typical of sequential task execution often caused by much fundamental new design of the system, algorithms, and logic. Level 1 also takes the longest and costs the least. Level 6 is often referred to as a "Mongolian horde" staffing style, characterized by completely parallel task execution, with resource estimates known well in advance. Level 6 is the fastest and the most expensive. The manpower buildup index has considerable economic implications, and thus the final selection of the MBI for a specific project is a function of the resource constraints within the organization. For example, increasing the MBI from 1 to 3 in an effort to compress the schedule would more than double the total cost of the project. The reason for this is that the number of human communication paths for an MBI of 3 is about six times higher than that for an MBI of 1, something that would eventually manifest itself directly in terms of exponentially more defects, and therefore considerably reduced quality. Clearly, then, schedule compression is an expensive proposition, and the manpower buildup can be increased provided project budget ceilings are not violated.

Fig. 2. MBI illustration: staffing profiles (manpower versus time) for MBI levels 1 through 6.

5. Organizational Productivity and the Software Equation

Putnam [15, 16, 26] also found that an empirical relationship exists between the overall productivity of software projects and their respective difficulty. It was found that the projects were grouped along three parallel lines on a logarithmic plot of the productivity versus the difficulty. Also, projects run in the same environment were grouped along the same line. After conducting a statistical analysis on the data, this fundamental behavior was formalized by the equation

Pr = C_n D^{-2/3}    [6]

where D is the difficulty, C_n is a state-of-technology proportionality constant, and Pr is the productivity, defined by

Pr = (size of delivered source code) / (total manpower required to produce the code)    [7]

In the expression for productivity, the total effort to be considered is that which is expended up to the time t_d, i.e., 39% of the total life-cycle effort, K. With this consideration, from equation (7) we may immediately derive the software equation [26]:

S_s = E K^{1/3} t_d^{4/3}    [8]

where E is the technology environment factor, or the organizational productivity parameter. The software equation is a relation that links the functionality that has to be generated, to the time and resource expenditure (which are "management variables") required to create it. The productivity parameter defined in the software equation above embraces many factors of the software development process, such as the influence of management; methods used; tools,

techniques, and aids used; skills and experience of team members; machine service; and complexity of application type. The productivity parameter for systems on the Putnam database has been found to exhibit an exponential behavior. This is not surprising, given the very broad variability in time, effort, and errors. The range of values spans from a few hundred to hundreds of thousands, clustering around certain discrete values that follow a Fibonacci-like sequence. Because these numbers are not well understood by commercial managers, a simple integer scale is generally used to represent them, and these translated numbers are referred to as the productivity index (PI) of the organization. Table 2 shows this family of numbers. Values of PI from 1 to 25 are adequate to span the universe of all software projects seen so far, and table 2 can be extended whenever an organization becomes efficient enough to require it. It can be seen that not much information is required to calculate the PI: the total new and modified source lines of code, the total man-years of effort, and the total elapsed calendar years spent on a project are the only pieces of information required.

Table 2. Productivity parameter-PI translation.

  Productivity parameter   PI
  754                      1
  987                      2
  1220                     3
  1597                     4
  1974                     5
  2584                     6
  3194                     7
  4181                     8
  5186                     9
  6765                     10
  8362                     11
  10946                    12
  13530                    13
  17711                    14
  21892                    15
  28657                    16
  35422                    17
  46368                    18

The PI is a macro-measure of the total development environment. Low values of PI are associated with low productivity environments, poor tools, or high complexity of software. High values are associated with good environments, tools, and management, as well as well-understood projects of low complexity. Average PI values range from 2-3 for complex applications such as microcode design and real-time embedded systems to 12-15 for scientific and business application systems. The economic leverage of an organizational productivity improvement is significant because the PI represents an exponential family.
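The economic leverage mentioned above can be made concrete with a short sketch. Assuming, as in the article, that S is measured in source lines of code, K in man-years, and t_d in years, solving the software equation S = E K^{1/3} t_d^{4/3} for K shows that effort falls as the cube of the productivity parameter; the project size and development time below are illustrative values only.

    # Productivity parameter values for PI = 1..18, as listed in table 2.
    PI_TABLE = [754, 987, 1220, 1597, 1974, 2584, 3194, 4181, 5186, 6765,
                8362, 10946, 13530, 17711, 21892, 28657, 35422, 46368]

    def productivity_parameter(pi):
        return PI_TABLE[pi - 1]

    def effort_from_software_equation(size_sloc, E, td_years):
        # Solve S = E * K**(1/3) * td**(4/3) for the effort K (in MY).
        return (size_sloc / (E * td_years ** (4.0 / 3.0))) ** 3

    size, td = 50_000, 2.0          # hypothetical size (SLOC) and development time (YR)
    for pi in (10, 11, 12):
        E = productivity_parameter(pi)
        print(pi, round(effort_from_software_equation(size, E, td), 1), "MY")

Because K is proportional to E^{-3}, even a one-step improvement in PI cuts the estimated effort substantially.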

6. The Planning Zone

It is extremely important for software managers to understand that there is a minimum time required to complete any software development. This minimum time is a function of

• the number of lines of code required to implement the functionality of the system;
• the inherent complexity of the application type;
• the efficiency of the development group; and
• the selected manpower buildup index.

If one rearranges the software equation, one can plot the log of K and the log of t_d, as shown in figure 3. The software equation is defined by an estimate of the number of lines of code required to implement the system and the measured efficiency of the development organization (PI). Similarly, the MBI equation can also be plotted (once again on a log K-log t_d scale) as shown in figure 4. The MBI defines the maximum effective application of manpower for a given design complexity. Both these equations may be solved simultaneously (figure 5) for a given project size and a PI representative of an organization's capability, the region to the left of these lines representing the infeasible region. Hence, there is a minimum time, t_d(min), that also represents the maximum cost solution for the system, requiring the most people. It should be noted that t_d(min) is obtained for the maximum recommended value of the MBI for the application in question, whereas t_d could represent the delivery time for a more relaxed value of the MBI that could have been selected in order to avoid a violation of one

Fig. 3. Software equation plotted on a log-effort versus log-time scale: SLOC = E K^{1/3} t_d^{4/3} bounds the feasible region for development, with the minimum development time t_d(min) at its edge.

Fig. 5. Simultaneous solution of the software equation and the MBI equation.

or more of the constraints within the organization. The fundamental goal of the Putnam Modeling methodology is the construction of a planning zone from where the project manager can construct a manning profile and develop a project plan. Construction of a planning zone is preceded by a size estimate (or nominal size range), which is typically dependent upon the initial statement of user requirements and takes into account the case for reusable code (deleted, modified, or added). An estimate of the technology environmental factor, E, is made from similar past projects of the same organization using the software equation. In the absence of a historical database for the organization concerned, similar software projects developed in a similar environment of an equivalent organization are used as a reference for the first estimate. As has already been discussed, the manpower buildup is decided depending upon the applica-

Fig. 4. MBI equation plot (log K versus log time): D_0 = K / t_d^3. The manpower buildup index defines the maximum effective application of manpower for a given design complexity.

tion in question, by choosing values from those given by Putnam (table 1). No rationale for choosing a particular manpower buildup index exists, because the choice depends upon at least three variables, the estimation of each of which is subjective and uncertain, making the manpower buildup index an immediate candidate for fuzzy or neural system estimation. Existing MBI estimation techniques employ heuristic opinions of managers without giving due consideration to the aforesaid factors, which have a strong bearing on the quality of the software. For example, managers settle on the largest MBI value that does not violate the project budget and delivery date. The planned delivery date is set to be earlier than the actual delivery date demanded by the customer, by a period of time dictated by the acceptable risk of slippage. A Monte Carlo simulation is conducted to solve for the maximum effort/minimum time solution, taking into account the uncertainties in the input variables. This simulation generates effort and minimum time solution probability distributions, which are employed to finally settle a planned delivery date [16]. Such an estimation process does not explicitly take into account any of the aforementioned three parameters upon which the MBI intrinsically depends, or the fact that the MBI has strong implications on the quality of software finally produced (i.e., higher MBI values tend to yield poorer-quality software). In fact, sometimes it might be more appropriate to select a value of MBI that is not the highest allowable (decided by cost constraints), but a lower value that simultaneously considers the task concurrency, appli-

cation complexity, and schedule pressure, and does not violate the delivery time constraint. This would take into account the implications of the MBI on software quality. The proposed fuzzy associative memory (FAM) or neural network model would respond in conformance with embedded knowledge to suggest an optimal MBI value. Embedded knowledge in the form of fuzzy logical rules or neural network input-output training patterns can be carefully designed to take into account various factors on which the MBI depends, as well as managers' heuristics, as discussed in section 7. The FAM or neural network model would provide the manager with an estimate of MBI with which planning could proceed. The planning zone is built up from five constraints: cost constraints; time constraints; peak manning constraints; manpower buildup constraints; and difficulty constraints. Using the software equation, a suitable planning point P can finally be selected. As far as the scope of the cost estimation model is concerned, the user interacts with the management primarily to decide the application and to provide the system requirements and the earliest and latest delivery dates (t_min and t_max, respectively). The management then makes its decisions on critical project constraints. Such constraints necessitate the setting up of a specific manpower loading profile (manpower buildup index (MBI)). This involves establishing a cost ceiling and deciding on the overall feasibility of the project in terms of the available manpower, acceptable schedules, project difficulty, technical know-how for the application, monetary resource availability, and various human factors. Various management trade-off opportunities exist, which can be exploited whenever there is extreme market pressure to deliver the software product, and/or the schedule for the development of the system is unreasonably short. These trade-offs can be classified into two broad categories: short-term and long-term trade-offs. Short-term trade-offs include product feature-function trade-off and project staffing style trade-off. Long-term trade-offs encompass organizational productivity capability improvement strategies [16].
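Returning to the simultaneous solution of the software equation and the manpower buildup equation described earlier in this section, the following sketch solves the two equations for the minimum development time and the corresponding life-cycle effort. The input values (size, productivity parameter, and MB parameter) are illustrative assumptions, not figures from the article.

    def planning_point(size_sloc, E, D0):
        # Simultaneously solve the software equation S = E * K**(1/3) * td**(4/3)
        # and the manpower buildup equation K = D0 * td**3 for td(min) and K.
        td = (size_sloc / (E * D0 ** (1.0 / 3.0))) ** (3.0 / 7.0)
        K = D0 * td ** 3
        return td, K

    # Hypothetical inputs: 50 KSLOC, productivity parameter 6765, MB parameter 26.9 (MBI = 3).
    td_min, K = planning_point(50_000, 6765, 26.9)
    print(f"minimum development time = {td_min:.2f} YR, life-cycle effort = {K:.1f} MY")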

The above discussion leads us to believe that the MBI provides powerful leverage to the "manager-on-quicksand," and selection of a suitable index becomes critical when there are many constraints that require simultaneous satisfaction. The manpower buildup has strong implications for the system delivery date, project cost, manpower requirements, risk of slippage, and quality of software. It is therefore necessary that the estimation of this parameter be done correctly. The manpower buildup parameter is governed primarily by three factors:

1. the task concurrency of the project (a high task concurrency implies a high MBI);
2. the application complexity (real-time systems with many interactions would require a low MBI, and business applications built from reusable code modules could have a high MBI); and
3. schedule pressures of the organization (a shortage of time for a particular project for which the delivery dates are not extendable would require a high MBI).

One can easily see that these three factors are difficult to quantify objectively, and can be better described using "fuzzy" ranges of values such as "LOW," "MEDIUM," or "HIGH." Estimation of the MBI thus involves a somewhat intuitive decision-making process, something that makes it an immediate candidate for neural-network-based or fuzzy set theoretic-based estimation.

7. The MBI Estimation Fuzzy Associative Memory Model

The simplest fuzzy associative memory (FAM) encodes FAM rules or associations (A_i, B_i), which associate the p-dimensional fuzzy set B_i with the n-dimensional fuzzy set A_i. These are comparable to simple neural networks, but need no adaptive training, and structured knowledge can be encoded directly. In general, an FAM system F : I^n -> I^p encodes and processes in parallel an FAM bank of m FAM rules (A_1, B_1), ..., (A_m, B_m). Each input A to the FAM system activates each stored FAM rule to a different degree. The minimal FAM that stores (A_i, B_i) maps inputs A' to B', a partially activated version of B_i. The more A' resembles A_i, the more B' resembles B_i.

Kumar, Krishna and Satsangi

40

Fuzzy sets contain elements with degrees of membership. A fuzzy membership function m_F : Z -> [0,1] assigns a real number between 0 and 1 to every element z in the universe of discourse Z. This number, m_F(z), indicates the degree to which the object or data z belongs to the fuzzy set F. Equivalently, m_F(z) defines the fit (fuzzy unit) value of element z in F. FAM rules would be based on a definition of fuzzy set values of the input fuzzy variables (task concurrency, inverse application complexity, and schedule pressure) and the output fuzzy variable (the MBI). Tables 3a and 3b show the fuzzy set values defined for these fuzzy variables. Fuzzy membership functions can have different shapes: triangular, trapezoidal, or even Gaussian. In practice, fuzzy engineers have found that triangular and trapezoidal shapes help simplify calculations and help capture the modeler's sense of fuzzy numbers [25]. We have used triangular membership functions for both input and output fuzzy variables, as defined in figure 6.
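A triangular membership function of the kind used here can be implemented in a few lines. The breakpoints below are assumptions chosen only to illustrate the idea; the actual membership functions are those defined in figure 6.

    def triangular(z, a, b, c):
        # Triangular membership function with feet at a and c and peak at b.
        if z <= a or z >= c:
            return 0.0
        return (z - a) / (b - a) if z <= b else (c - z) / (c - b)

    # Illustrative fuzzy sets on the task concurrency universe of discourse [0, 20];
    # the breakpoints are assumptions, not the ones of figure 6.
    TC_SETS = {"L": (-7, 0, 7), "LM": (0, 7, 13), "MH": (7, 13, 20), "H": (13, 20, 27)}

    def memberships(value, sets):
        return {name: round(triangular(value, *abc), 2) for name, abc in sets.items()}

    print(memberships(16, TC_SETS))   # a TC of 16 belongs mostly to MH and partly to H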

Table 3a. Fuzzy set values of the input fuzzy variables TC, IAC, and SP.

Task Concurrency (TC)
  L  : Low task concurrency
  LM : Low to medium task concurrency
  MH : Medium to high task concurrency
  H  : High task concurrency

Inverse Application Complexity (IAC)
  L  : Entirely new software with many interfaces and interactions, with low MBI implications
  LM : New stand-alone systems with high functionality, with low-medium MBI implications
  MH : New stand-alone systems with low functionality, with medium-high MBI implications
  H  : Software rebuilt from reusable code, with high MBI implications

Schedule Pressure (SP)
  L  : Low schedule pressure
  LM : Low to medium schedule pressure
  MH : Medium to high schedule pressure
  H  : High schedule pressure

L: LOW; LM: LOW-MEDIUM; MH: MEDIUM-HIGH; H: HIGH.

Table 3b. Fuzzy set values of the output fuzzy variable MBI.

Manpower Buildup Index (MBI)
  L  : Low cost, low error rate, low manpower, fundamental design
  LM : Low-medium cost, increased error rates, higher manpower
  M  : Medium costs, error rates considerable
  MH : High costs, alarming error rates, high manpower
  H  : Mongolian horde projects--"get the project out on time," reuse technology

L: LOW; LM: LOW-MEDIUM; M: MEDIUM; MH: MEDIUM-HIGH; H: HIGH.

Fig. 6. Membership functions defined on input and output fuzzy variables: triangular membership functions for L, LM, MH, and H on the TC, IAC, and SP universes of discourse (0-20), and for L, LM, M, MH, and H on the MBI universe of discourse (1-6).

The input universe of discourse for each of the three variables was assumed to range from [0-20],

TC = {w_0, w_1, ..., w_20},
IAC = {x_0, x_1, ..., x_20},
SP = {y_0, y_1, ..., y_20},

and the output variable from [1-6],

MBI = {z_1, z_2, ..., z_6}.

Next we specify the fuzzy rule base. Based on heuristic estimates of managers, an appropriate set of FAM rules can be generated for the estimation of the MBI. Group consensus or a Delphi polling approach could be adopted for this purpose. Subsequent to the formulation of the FAM rules, a consistency and completeness check would have to be carried out (as is usually done in the case of expert system development) to yield a set of valid rules to be embedded in the fuzzy associative memory. For purposes of exposition, the rule base we designed comprises 64 FAM associations. The rules may have to be fine-tuned for a specific development team or application. The FAM associations can be listed as antecedent-consequent pairs, as shown in the FAM bank of figure 7. The FAM system, as shown in figure 8, consists of a bank of different FAM associations. Each association (A, B) corresponds to a different numerical FAM matrix. Individual numerical association matrices are generated using correlation-minimum encoding [27] and are referred to as fuzzy Hebb matrices:

m_ij = min (a_i, b_j)    [9]

where a_i is the ith element of A and b_j is the jth element of B. In max-min composition, we replace pairwise multiplication with pairwise minima, and replace column (row) sums with column (row) maxima. The composition operator is denoted by "o". The memory matrix M is generated by the fuzzy outer product

M = A^T o B    [10]

where A^T is the transpose of row vector A. Appendix 1 further clarifies the concept of max-min composition through an illustrative example.
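The correlation-minimum encoding of equations (9)-(10) and recall by max-min composition can be sketched as follows; the fit vectors used are arbitrary illustrative values, not sets from the article's rule base.

    def fuzzy_outer_min(A, D):
        # Correlation-minimum (fuzzy Hebb) encoding, equation (9): m_ij = min(a_i, d_j).
        return [[min(a, d) for d in D] for a in A]

    def max_min_compose(A, M):
        # Max-min composition A o M: b_j = max_i min(a_i, m_ij).
        return [max(min(a, row[j]) for a, row in zip(A, M)) for j in range(len(M[0]))]

    # Arbitrary illustrative fit vectors.
    A = [0.0, 0.3, 1.0, 0.3, 0.0]   # antecedent fuzzy set on a 5-point universe
    D = [1.0, 0.6, 0.2, 0.0]        # consequent fuzzy set on a 4-point universe

    M = fuzzy_outer_min(A, D)       # equation (10): M = A^T o D
    print(max_min_compose(A, M))    # recalls D exactly: [1.0, 0.6, 0.2, 0.0]

Because the antecedent fit vector is normal (its maximum fit value is 1), composing it with its own fuzzy Hebb matrix recalls the consequent exactly.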

Fig. 7. The MBI estimation FAM bank (associations are listed as antecedent-consequent pairs), arranged as four 4 x 4 grids of IAC versus SP, one for each TC fuzzy set value; each of the 64 antecedent combinations is assigned a consequent MBI fuzzy set (L, LM, M, MH, or H).

In our problem, since there are three antecedents (TC, IAC, and SP) and one consequent (MBI) in each FAM rule, we treat each rule as three individual antecedent-consequent pairs. Multi-antecedent rules are handled using decompositional inference [27]. Consider, for example, FAM rule 1:

(IF TC IS LOW AND IAC IS LOW AND SP IS LOW THEN MBI IS LOW),

which can be written more compactly as (A_1, B_1, C_1; D_1), where A_1 is the membership function of the fuzzy set "LOW" on the universe of discourse of TC; B_1 is the membership function of the fuzzy set "LOW" on the universe of discourse of IAC; C_1 is the membership function of the fuzzy set "LOW" on the universe of discourse of SP; and D_1 is the membership function of the fuzzy set "LOW" on the universe of discourse of the output variable MBI. This compound FAM rule can be decomposed into three fuzzy Hebb matrices:

M^1_TC,MBI = A_1^T o D_1    [11]

M^1_IAC,MBI = B_1^T o D_1    [12]

M^1_SP,MBI = C_1^T o D_1    [13]

Fig. 8. MBI estimation FAM system: the bank of FAM rules operates in parallel on the fit-vector inputs for TC, IAC, and SP, and the combined output is passed through centroidal defuzzification to produce the MBI estimate.

Fuzzy Hebb matrices split the compound FAM rule. Sixty-four compound FAM rules thus generate 192 individual numerical association matrices. The 64 FAM rules obtained from the above method are not condensed into one single matrix but are maintained distinct. This consumes a lot of memory but avoids "crosstalk" and gives the user the ability to determine which FAM rule contributed how much membership activation to a concluded output. It also provides knowledge-base modularity, i.e., the user can add or delete FAM-structured knowledge without disturbing stored knowledge. These are advantages over neural systems that encode the same associations. This separate storage of FAM rules also brings out another distinction between FAM systems and neural networks. A fit vector

input A activates all the FAM rules in parallel, but to different degrees. If A only partially "satisfies" the antecedent associant A_k, the consequent associant B_k only partially activates, and if it does not satisfy it at all, then B_k equals the null vector, whereas in the case of a feedback neural network the output vector would be a non-null vector, possibly a result of the dynamical system of the network falling into a spurious attractor in its state space. This is desirable for metrical classification applications but not for inferential problems [27]. Recall takes place by composing an input fit vector with each of the fuzzy Hebb matrices. Such inputs could be constructed based on the subjective estimates of managers for each of the three input fuzzy variables on a scale corresponding to the universe of discourse. Consider the following three binary input vectors (as in BIOFAMs (binary input-output FAMs) [27]) exciting each of the fuzzy Hebb matrices of the kth FAM rule:

I^5_TC = (0,0,0,0,1,0, ..., 0), with a 1 in the 5th position,
I^6_IAC = (0,0,0,0,0,1,0, ..., 0), with a 1 in the 6th position, and
I^1_SP = (1,0,0,0,0,0,0, ..., 0), with a 1 in the 1st position.

These inputs are recomposed with each of the numerical association matrices that split up the kth FAM rule (A_k, B_k, C_k; D_k):

D_k(TC, IAC, SP) = [I^5_TC o M^k_TC,MBI] ∧ [I^6_IAC o M^k_IAC,MBI] ∧ [I^1_SP o M^k_SP,MBI]
                 = min (a_5, b_6, c_1) ∧ m_Dk(z)  for all z in Z    [14]

The recalled fit-vector output D equals a weighted sum of the individual recalled vectors D_k:

D = Σ_k t_k D_k    [15]

where the t_k's are the non-negative weights that summarize the credibility or strength of the kth FAM rule (A_k, B_k, C_k; D_k). These weights may be varied to design an adaptive FAM system. In practice, the t_k's are chosen equal to 1. In principle, though not in practice, the recalled fit-vector output equals a normalized sum of the D_k vectors. This keeps the components of D unit-interval valued. We do not use normalization in practice because we invariably defuzzify the output distribution D to produce a single numerical value on the output universe of discourse Z [27]. The appropriate defuzzification scheme is the fuzzy centroid defuzzification scheme. A real-valued output, the fuzzy centroid z_c of the fit vector D with respect to the output space MBI, is directly computed as a (normalized) convex combination of fit values:

z_c = Σ_{j=1}^{p} z_j m_D(z_j) / Σ_{j=1}^{p} m_D(z_j)    [16]

The fuzzy centroid is unique and uses all the information in the output distribution D. Figure 9 graphically portrays the centroidal defuzzification scheme for two FAM rules, 38 and 55, with inputs of 16, 10, and 6 to the three input fuzzy variables. Membership grades in each of the three antecedent fuzzy sets are determined, and the minimum membership value of the three antecedents is selected to clip the fuzzy set of the consequent of that rule. The clipped output fuzzy sets are combined using the centroidal defuzzification scheme as explained above.

where tk'S are the non-negative weights that summarize the credibility or strength of the kth FAM rule (Ak, Bk, Ck; Dk). These weights may be varied to design an adaptive FAM system. In practice, tk'S are chosen equal to 1. In principle, though not in practice, the recalled fit-vector output equals a normalized sum of the Dk vectors. This keeps the components of the D unit interval

m (IAC),

m(TC) MH

m(SP) LM

[

~AM RULE38 ; llf TC is MH and[m ( M B I ) tlAC sLM and LMIsPisLMthen I I

! FA. ~ULE ~" m ( AI C )

rn (TC)

/

'

MH

:

TC-1

1

[ /

{ITM

IAC /

~

INPUT TC

o

INPUT IAC

o

.

[

SP

~

M

I

[ If TC rs H and IaCl ..I s MH and 5P. s mCMBI) "l LM then MBI ,SM,I

r n (5P )

H

[16]

(zi)

j=!

Dk:

D = ~ tk D k

43

IytH

~-T

/ m(MB)

INPUT SP

', i

!

I i

ii

i ~

i MBI '.

Ii

[

{{

o

NOTE: FAM RULE ANTECEDENTS COMBINED WITH ANO USE A MINIMUM FIT VALUE TO I ACTIVATE CONSEOUE NTS. I

t

CENTROID OUTPUT

Fig. 9. Correlation minimum inference with Centroid defuzzification for the case of rules 38 and 55.

MBt

44

Kumar,Krishna and Satsangi

8. Neural Network MBI Estimation Model

A feedforward neural network is suitable for solving inferential problems and accepts "intuitively" judged numeric inputs of task concurrency, schedule pressure, and inverse application complexity, to give the best estimate for the MBI that should be selected based on its learned knowledge. For this purpose, the inputs have been divided into four categories, as in the case of the fuzzy FAM model: low (L), low-medium (LM), medium-high (MH), and high (H). As in the case of the FAM, table 3 shows how such a categorization can be effected for each of the three inputs. For each possible combination, a specific desired output MBI is chosen, yielding 64 possible cubic decision regions, examples of which are shown in figure 10. The desired MBI for each of the cubes is decided by letting the cube closest to the origin have an MBI of 1, and the cube farthest from the origin have an MBI of 6. Elsewhere in the region, the MBIs are distributed as shown in table 4. In order to train the system effectively, it is necessary to first make a suitable selection of the number of nodes in the hidden layer of the neural network. In accordance with Mirchandani's theorem (which gives the maximum number M(h, d) of regions into which h hidden units can partition a d-dimensional input space), if h = 7, then M(h, d) = 64. This partition matches that which is desired and therefore suffices exactly in the present case. We finally selected eight hidden nodes to provide the network with further partitioning capabilities. Figure 11 shows the structure of the network selected. Sixty-four training patterns were used to train this system, although to achieve a clearer partitioning of the input space, a larger number of training patterns would be required. Appendix 2 further clarifies the backpropagation algorithm used to train the network. To generate the training patterns of table 5 from table 4, the center point of each micro-cube partition was considered as the training pattern for that specific micro-cube. Assuming that the macro-cube scale varies from 0 to 1, the four partitions along each of the three axes become 0-0.25, 0.26-0.5, 0.51-0.75, and 0.76-1.00. For example, for the cube closest to the origin, the desired MBI is 1, and the three inputs for the training pattern become IAC = 0.125, TC = 0.125, SP = 0.125. The desired MBI outputs for the training patterns are scaled down from the actual MBI values of table 4 using the scaling equation

MBI(desired) = (MBI - 1) / 5    [17]

Inputs to the network are thus provided on a scale of 0-1, and outputs are also obtained on the same scale. Translation to the actual MBI value is done using the inverse scaling equation

MBI = 5 * MBI(output) + 1    [18]

Fig. 10. MBI decision space partitioning: the three-dimensional input space (task concurrency, inverse application complexity, schedule pressure) is divided into 4 x 4 x 4 = 64 micro-cubes, with the cube at (L, L, L) nearest the origin assigned MBI = 1 and the cube at (H, H, H) farthest from the origin assigned MBI = 6.

Fig. 11. Structure of the feedforward neural network for MBI estimation: three input nodes (task concurrency, inverse application complexity, and schedule pressure), eight hidden nodes, and a single output node giving the MBI estimate.

Table 4. Partitioning the MBI decision space.

  IAC  TC  SP  MBI        IAC  TC  SP  MBI
  L    L   L   1          L    LM  L   2
  L    L   LM  2          L    L   MH  2
  LM   L   L   2          MH   L   L   2
  L    MH  L   2          LM   LM  L   2
  L    LM  LM  2          L    L   H   3
  LM   L   LM  2          H    L   L   3
  LM   LM  LM  3          L    MH  LM  3
  L    H   L   3          LM   MH  L   3
  L    LM  MH  3          MH   LM  L   3
  LM   L   MH  3          MH   L   MH  4
  MH   L   LM  3          LM   LM  MH  4
  L    MH  MH  4          MH   LM  LM  4
  MH   MH  L   4          L    H   LM  4
  LM   MH  LM  4          LM   H   L   4
  L    LM  H   4          H    LM  L   4
  LM   L   H   4          LM   H   LM  4
  H    L   LM  4          MH   MH  LM  4
  LM   LM  H   4          LM   MH  MH  4
  H    LM  LM  4          L    H   MH  4
  MH   LM  MH  4          MH   H   L   4
  L    MH  H   4          H    MH  L   4
  MH   L   H   4          L    H   H   5
  H    L   MH  4          H    H   L   5
  MH   MH  MH  5          LM   H   MH  5
  H    L   H   5          MH   H   LM  5
  LM   MH  H   5          H    MH  LM  5
  MH   LM  H   5          MH   H   MH  5
  H    LM  MH  5          H    H   LM  5
  MH   MH  H   5          LM   H   H   5
  H    MH  MH  5          H    MH  H   6
  H    LM  H   5          H    H   H   6
  H    H   MH  6
  MH   H   H   6

IAC: inverse application complexity; TC: task concurrency; SP: schedule pressure; MBI: manpower buildup index.

Table 5. Neural network training patterns.

  TC     IAC    SP     MBI       TC     IAC    SP     MBI
  0.125  0.125  0.125  0.0       0.125  0.125  0.375  0.2
  0.125  0.375  0.125  0.2       0.375  0.125  0.125  0.2
  0.125  0.125  0.625  0.2       0.125  0.625  0.125  0.2
  0.625  0.125  0.125  0.2       0.125  0.375  0.375  0.2
  0.375  0.375  0.125  0.2       0.375  0.125  0.375  0.2
  0.375  0.375  0.375  0.4       0.125  0.125  0.875  0.4
  0.125  0.875  0.125  0.4       0.875  0.125  0.125  0.4
  0.125  0.375  0.625  0.4       0.125  0.625  0.375  0.4
  0.375  0.125  0.625  0.4       0.375  0.625  0.125  0.4
  0.625  0.125  0.375  0.4       0.625  0.375  0.125  0.4
  0.125  0.625  0.625  0.6       0.625  0.125  0.625  0.6
  0.625  0.625  0.125  0.6       0.375  0.375  0.625  0.6
  0.375  0.625  0.375  0.6       0.625  0.375  0.375  0.6
  0.125  0.375  0.875  0.6       0.125  0.875  0.375  0.6
  0.375  0.125  0.875  0.6       0.375  0.875  0.125  0.6
  0.875  0.125  0.375  0.6       0.875  0.375  0.125  0.6
  0.375  0.375  0.875  0.6       0.375  0.875  0.375  0.6
  0.875  0.375  0.375  0.6       0.625  0.625  0.375  0.6
  0.625  0.375  0.625  0.6       0.375  0.625  0.625  0.6
  0.125  0.625  0.875  0.6       0.125  0.875  0.625  0.6
  0.625  0.125  0.875  0.6       0.625  0.875  0.125  0.6
  0.875  0.125  0.625  0.6       0.875  0.625  0.125  0.6
  0.625  0.625  0.625  0.8       0.125  0.875  0.875  0.8
  0.875  0.125  0.875  0.8       0.875  0.875  0.125  0.8
  0.375  0.625  0.875  0.8       0.375  0.875  0.625  0.8
  0.625  0.375  0.875  0.8       0.625  0.875  0.375  0.8
  0.875  0.375  0.625  0.8       0.875  0.625  0.375  0.8
  0.625  0.625  0.875  0.8       0.625  0.875  0.625  0.8
  0.875  0.625  0.625  0.8       0.875  0.875  0.375  0.8
  0.875  0.375  0.875  0.8       0.375  0.875  0.875  0.8
  0.875  0.875  0.625  1.0       0.875  0.625  0.875  1.0
  0.625  0.875  0.875  1.0       0.875  0.875  0.875  1.0
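A minimal sketch of the training procedure follows: a 3-8-1 sigmoid network trained by plain batch backpropagation on a handful of the patterns of table 5, with the output rescaled to an MBI value through equation (18). This is an illustration under assumed learning-rate and iteration settings, not the exact configuration used by the authors; in practice the full 64-pattern set and the algorithm of appendix 2 would be used.

    import numpy as np

    rng = np.random.default_rng(0)

    # A few of the (TC, IAC, SP) -> scaled MBI training patterns from table 5.
    X = np.array([[0.125, 0.125, 0.125],
                  [0.375, 0.375, 0.375],
                  [0.625, 0.625, 0.625],
                  [0.875, 0.875, 0.875],
                  [0.875, 0.125, 0.375],
                  [0.125, 0.875, 0.875]])
    T = np.array([[0.0], [0.4], [0.8], [1.0], [0.6], [0.8]])   # (MBI - 1)/5, equation (17)

    W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)  # 3 inputs -> 8 hidden nodes
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)  # 8 hidden -> 1 output node

    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

    eta = 0.5
    for _ in range(20000):                       # plain batch backpropagation
        H = sigmoid(X @ W1 + b1)                 # hidden activations
        Y = sigmoid(H @ W2 + b2)                 # network output in [0, 1]
        dY = (Y - T) * Y * (1 - Y)               # output delta (squared error, sigmoid)
        dH = (dY @ W2.T) * H * (1 - H)           # hidden delta
        W2 -= eta * H.T @ dY;  b2 -= eta * dY.sum(axis=0)
        W1 -= eta * X.T @ dH;  b1 -= eta * dH.sum(axis=0)

    test = np.array([[0.7, 0.5, 0.6]])           # an "intuitively judged" input triple
    mbi = 5 * sigmoid(sigmoid(test @ W1 + b1) @ W2 + b2) + 1   # equation (18)
    print(mbi.item())                            # smooth real-valued MBI estimate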

9. A Comparison of FAM and Neural Network Estimation Models using Estimation Surfaces

Figures 12, 14, 16, and 18 show various estimation surfaces for MBI estimation using the FAM. The data for these surfaces were generated using a software package for BIOFAM simulation. The software is capable of providing three membership functions, namely, triangular, Gaussian, and trapezoidal. The software is also capable of providing both types of encoding schemes, namely, correlation-product and correlation-minimum encoding.

Figs. 12-19. Fuzzy associative memory estimation surfaces (cases TC = LOW, LOW-MEDIUM, MEDIUM-HIGH, and HIGH) and the corresponding neural network estimation surfaces (TC = 0.05, 0.35, 0.7, and 0.95), showing the estimated MBI over the IAC-SP input plane.

As indicated, each graph corresponds to a specific fuzzy set defined on the input variable TC. Figures 13, 15, 17, and 19 show corresponding snapshots of the MBI neural system estimation surface. Each of the four surfaces has been generated for a specific value of TC, as indicated on the diagrams. We notice that outputs from the FAM system are stepped, whereas those from the neural system are smooth. The stepped response of the FAM models our innate judgment more realistically than the smooth output of the neural network, in the sense that software project managers tend to think in terms of integer MBI values and not in terms of real-valued numbers. Also, they do not change estimates of MBI values in a smooth fashion, but rather maintain the same values over a certain range of intuitively judged inputs before they "feel" it necessary that the MBI should switch to a new value. In this sense, the stepped FAM response represents the real-world situation better than that of the neural network. The FAM system also sets up much faster. On a PC-AT 386sx computer, the FAM system took 11 seconds to set up, whereas the neural network required thousands of iterations and


hours to train. In addition, as Kosko points out [25], the FAM system is "computationally lighter" than the neural system; the former requires only comparing and adding two real numbers, whereas the latter requires multiplication, addition, or the exponential of real numbers.


Also, as has already been mentioned, FAM rules obtained from logical antecedent-consequent pairs are not condensed into one single matrix (as in the bidirectional associative memory [27]), but are maintained distinct. Crosstalk, as commonly encountered in neural systems, is avoided at the cost of storage space, giving the user the ability to determine which FAM rule contributed how much membership activation to a "concluded" output, with the simultaneous provision of knowledge-base modularity. Thus the user can add or delete FAM-structured knowledge without disturbing stored knowledge. The above benefits are advantages over a pure neural network architecture for encoding the same associations.

10. Conclusions

The software engineering problem of estimating the manpower buildup index (MBI) given the three fuzzy parameters of inverse application complexity (IAC), task concurrency (TC), and schedule pressure (SP) arguably belongs to the class of problems that can be realistically modeled using fuzzy estimation theory. This article shows how fuzzy FAM's can be effectively applied to the domain of software project management and control for the estimation of the MBI.

This problem can also be solved using feedforward neural network classifiers trained using the backpropagation algorithm to give satisfactory results. Software engineers today must see their problems differently than they did in the past. They must accept the inherent fuzziness of variables in complex software systems in order to be able to develop accurate estimation models for software engineering project management and control. They can no longer continue to formulate such models based on purely deterministic engineering approaches. Problems must be modeled subjectively to realistically develop estimates for controlling factors that are based on ill-defined parameters. Fuzzy systems and neural networks best fit the bill to solve such problems, although they suffer minor drawbacks: memory requirements for fuzzy systems, and training time for neural systems. The reliability of either the FAM or the neural network models is dependent upon the rule base (for the FAM) and the training patterns (for the neural network) encoded within the system. Fuzzy rules need to be appropriately formulated through group consensus of experts and then subsequently validated for consistency (in terms of redundant rules, conflicting rules, subsumed rules, unnecessary premise clauses, and circular rules) and completeness (in terms of unreferenced attribute values, illegal attribute values, unachievable premises, unachievable final conclusions, and unusable intermediate conclusions). Similarly, training patterns can be derived based on group consensus to yield patterns that best match our intuitive judgment. Once set up, the models can be verified on real projects. Subsequent to the completion of such projects, the rule base/pattern set could be fine-tuned to reflect the organizational capability and project characteristics more realistically. Such iterations would yield increasingly reliable FAM/neural network models. With the universal explosion of knowledge, we need to know how to simultaneously simplify complexity, utilize information, and deal with uncertainty, in the form of general principles and representations that are valid in any given context. In this context, we strongly believe that a combination of neural and fuzzy representation of structured knowledge will be certainly one of

the best ways to deal with this explosion of knowledge and to give us a better means of using it to attain our desired objectives in diverse fields.

Appendix 1. Fuzzy Vector-Matrix (Max-Min) Composition "o" [4, 27]

For row fit vectors A and B, and a fuzzy n x p matrix M,

A o M = B

The recalled component b_j is obtained by taking the fuzzy inner product of fit vector A with the jth column of M:

b_j = max_{1<=i<=n} min (a_i, m_ij)
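The illustrative example referred to in section 7 is not reproduced in this extract; the following small sketch, with arbitrary numbers, shows the computation.

    A = [0.2, 1.0, 0.5]                  # a row fit vector (n = 3), arbitrary values
    M = [[0.3, 0.9, 0.4, 0.0],           # a fuzzy 3 x 4 matrix, arbitrary values
         [0.7, 0.2, 1.0, 0.6],
         [0.5, 0.5, 0.8, 0.1]]

    B = [max(min(a, M[i][j]) for i, a in enumerate(A))   # b_j = max_i min(a_i, m_ij)
         for j in range(4)]
    print(B)                             # [0.7, 0.5, 1.0, 0.6]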
