MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN FEDERATION
RUSSIAN ASSOCIATION OF ARTIFICIAL INTELLIGENCE
ULYANOVSK STATE TECHNICAL UNIVERSITY (RUSSIA)
DARMSTADT UNIVERSITY OF APPLIED SCIENCE (GERMANY)
KREFELD UNIVERSITY OF APPLIED SCIENCE (GERMANY)
TECHNICAL UNIVERSITY VARNA (BULGARIA)
BREST STATE TECHNICAL UNIVERSITY (BELARUS)
INTERACTIVE SYSTEMS: Problems of Human-Computer Interaction
Collection of scientific papers
ULYANOVSK 2013
UDC 681.518 (04)
LBC 32.96я43
И73

Editorial board:
Peter Sosnin, Prof. (responsible editor, Ulyanovsk State Technical University),
Vladimir Maklaev, PhD (Ulyanovsk State Technical University),
Ekaterina Sosnina, PhD (Ulyanovsk State Technical University),
Nick Voit, PhD (Ulyanovsk State Technical University)
И73
UDC 681.518 (04)
Interactive Systems: Problems of Human-Computer Interaction. – Collection of scientific papers. – Ulyanovsk: USTU, 2013. – 355 p.
The collection of scientific papers consists of reports presented at the 10th international conference «Interactive Systems: Problems of Human-Computer Interaction» (24–27 September 2013, Ulyanovsk, Russia). The main focus is on the problems, tasks, models, tools and technologies of human-computer interaction.
ISBN 978-5-9795-1136-8
© Composite authors, 2013
© Design, USTU, 2013
CONTENTS

P. Sosnin. EXPERIMENTING WITH UNITS OF A HUMAN BEHAVIOR IN COMPUTERIZED MEDIUMS .......... 7
A. Mathes, A. Schuette, M. Bron, V. Shishkin, D. Stenyushkin. SECURITY CONCEPT FOR A CLOUD IN A NON-DESTRUCTIVE TESTING ENVIRONMENT .......... 21
I.G. Perfilieva, N.G. Yarushkina, T.V. Afanasieva, A.A. Romanov. WEB-BASED TIME SERIES FUZZY MODELING FOR THE ANALYSIS OF ENTERPRISE ECONOMICS .......... 29
P. Sosnin, Y. Lapshov, V. Maklaev. PSEUDO-CODE PROGRAMMING OF WORKFLOWS IN CONCEPTUAL DESIGNING OF SOFTWARE INTENSIVE SYSTEM .......... 40
I. Arzamastseva. USING OF THE METHODS OF BUILDING OF TERM SYSTEM IN INTELLIGENT WEB-BASED REPOSITORY .......... 53
I. Arzamastseva, O. Gavrikova. THE USE OF LINGUISTIC INSTRUMENT IN CONCEPTUAL MODELING IN REQUIREMENTS DOCUMENT DEVELOPMENT PROCESS .......... 60
I. Arzamastseva, T. Kulakova. DEVELOPMENT OF LINGUISTIC PATTERN FOR INDUSTRY SPECIFICATION .......... 65
A.V. Konovalov, S.V. Arzamastsev, S.I. Kanyukov, O.Ju. Muizemnek. A CONCEPT OF CREATING INTEGRATED INTELLECTUAL CAPD FOR THE PROCESS OF FORGING .......... 69
A. Afanas'ev, R. Gainullin. METHODS AND TOOLS OF THE ANALYSIS OF GRAPHICAL BUSINESS PROCESS NOTATIONS IN DESIGN OF THE AUTOMATED SYSTEMS .......... 76
A. Afanas'ev, D. Bragin. ONTOLOGY-BASED APPROACH FOR CONTROL OF IDEF DIAGRAMS .......... 82
A. Afanas'ev, D. Bragin, R. Gainullin. CLIENT-SERVER MODEL FOR ANALYSIS OF DIAGRAM LANGUAGES .......... 85
A. Afanas'ev. DIAGRAMMATICA IN COMPUTER-AIDED DESIGN OF COMPLEX COMPUTER-BASED SYSTEMS .......... 89
M. Bron, G. Raffius, A. Schuette, V. Shishkin, D. Stenushkin. DISTRIBUTED SYSTEM FOR ULTRASONIC NON-DESTRUCTIVE TOMOGRAPHY .......... 91
G. Burdo, B. Palyukh, A. Sorokin. APPROACHES TO DEVELOPMENT OF INTERACTIVE SYSTEM OF THE PRODUCT QUALITY CONTROL IN MULTIPRODUCT MACHINERY MANUFACTURING .......... 98
A. Afanas'ev, N. Voit. THE DEVELOPMENT OF GRAPH APPROACH TO DESIGNING THE INTELLIGENT COMPUTER-BASED TRAINING SYSTEMS .......... 104
M. Guseva. IMPORT DATA MODULE OF THE "ULSTU CURRICULA SYSTEM" .......... 113
A. Danchenko. THE ONTOLOGY OF MONITORING THE DOCUMENTS FOR PLANNING LEARNING PROCESS .......... 116
R. Gainullin. ONTOLOGY METHOD OF SEMANTIC ANALYSIS AND CONTROL OF UML DIAGRAMS .......... 122
B. Paliukh, I. Egereva. MULTISTEP ALGORITHM OF ALTERNATIVES SEARCH IN AN INFORMATION CATALOGUE .......... 129
S. Elkin, K. Larin, V. Shishkin. A VIRTUAL TEST BENCH AS EMBEDDED EQUIPMENT EXECUTIVE MODEL .......... 133
A.S. Zuev. ABOUT THE TENDENCIES AND PROSPECTS OF DEVELOPMENT OF THE HUMAN-COMPUTER INTERACTION VIRTUAL ENVIRONMENTS .......... 141
D. Kanev. ANALYSIS OF AUTOMATED LEARNING SYSTEMS ARCHITECTURES AND ARCHITECTURE DEVELOPMENT OF INTELLIGENT TUTORING SYSTEM BASED ON MOBILE TECHNOLOGY .......... 152
S.I. Kanyukov, A.V. Konovalov. CONTROL OF CAPD FOR PRESS FORGING VIA USER ACTIONS .......... 163
P. Bibilo, N. Kirienko, V. Romanov. ARCHITECTURE OF A SYSTEM OF SYNTHESIS OF PIPELINED LOGIC CIRCUITS .......... 172
Z. Luchinin, I. Sidorkina. LSM TREE AS AN EFFECTIVE DATA STRUCTURE FOR DOCUMENT-ORIENTED DATABASE .......... 174
S. Malysheva. ABOUT THE METHOD OF TRAINING COST REDUCTION IN TASKS OF SYMMETRICAL OBJECTS DETECTION IN DIGITAL IMAGES .......... 177
A.S. Melnichenko, A.A. Kuprijanov. ARCHITECTURE OF VIRTUAL BUSINESS ENVIRONMENT OF GEOGRAPHICALLY DISTRIBUTED AUTOMATED SYSTEMS ON THE BASIS OF THE PROVISIONS OF THE DEMO METHODOLOGY .......... 184
A.A. Kuprijanov, A.S. Melnichenko, U.V. Suganova, A.V. Sandalova, N.A. Butakov. THE MAIN DESIGN DECISIONS OF THE INTELLECTUAL SYSTEM OF DESIGNING OF SCHEMES DIVISION .......... 193
I. Mikhaylov. A NEW METHOD OF AUTOMATIC DEMODULATION OF LINEAR BARCODES .......... 201
D. Mikhailov, G. Emelyanov. SENSE'S STANDARDS AND TRANSMISSION OF KNOWLEDGE FOR THEIR ESTIMATION APPLYING THE OPEN-FORM TESTS .......... 210
V. Nahapetyan. HUMAN-COMPUTER MULTI-TOUCH INTERACTION USING DEPTH SENSORS .......... 225
S.N. Ulazov, V.V. Rodionov. A RUNTIME IMPLEMENTATION OF MARKOV ALGORITHMS USING THE VISUAL C# LANGUAGE .......... 233
Yu.V. Safroshkin. ON THE PROBLEMS AND PROSPECTS FOR MIND-SENSES DIALOGUES .......... 238
K. Svyatov. DOCUMENT MANAGEMENT FEATURES IN WIQA.NET SYSTEM .......... 242
L.A. Sysoeva. PROCESS APPROACH TO MANAGEMENT OF THE E-LEARNING SERVICES .......... 247
A. Tece. A TEST-BED FOR 3D INTERACTION IN 3D DESKTOP VIRTUAL ENVIRONMENTS .......... 256
Y. Titov. CONSTRUCTION OF VIRTUAL ENTERPRISE USING CALS TECHNOLOGIES .......... 271
V. Khorodov. ANALYSIS OF LATENT DEFECTS IN TECHNICAL SYSTEMS .......... 275
I. Lyozina, V. Khokhlova. COMPARISON OF ALGORITHMS FOR THE INITIALIZATION OF NEURAL NETWORKS BY USING AN AUTOMATED SYSTEM FOR FORECASTING THE STOCK MARKET INDICES .......... 279
D. Tsygankov, I. Gorbachev, A. Pokhilko. FORMATION OF FUNCTIONALLY ADAPTED CAD SYSTEMS OF WAVEGUIDE SHF DEVICES .......... 284
Ju.L. Tchigirinsky. ABOUT TRUST LEVEL TO RESULTS OF COMPUTER-AIDED TECHNOLOGY DESIGN .......... 289
V. Shikorin, E. Sidiryakov. ABOUT SOME FEATURES OF INFORMATION SITES' DESIGN .......... 294
A. Yankovskaya, N. Krivdyuk. APPLICATION OF COGNITIVE GRAPHICS MEANS FOR DECISION-MAKING AND SUBSTANTIATION OF DECISIONS IN INTELLIGENT SYSTEM .......... 296
A. Yankovskaya, M. Semenov, D. Semenov. COGNITIVE TOOL FOR THE REPRESENTATION OF TEST RESULTS OF THE BLENDED EDUCATION .......... 305
A. Pertsev, K. Svyatov, P. Sosnin COMPUTER-AIDED GENERATION OF PERSONAL JOB DESCRIPTIONS FOR EMPLOYEES OF DESIGN ORGANIZATIONS........313
V. Maklaev, A. Podobry, P. Sosnin AN APPROACH TO INTEGRATION OF INFORMATION RESOURCES WHEN DESIGNING A FAMILY OF COMPUTER-AIDED SYSTEMS..........322
Y. Rudkovsky LABELLED PROTECTION USING AGENT FOR CONTROL AND CONVERSION OF KEY-DRIVEN USER OPERATIONS................................334
Y. Rudkovsky, S. Zhukov LABELED PROTECTION OF PROJECT TASKS AND LOGGING SYSTEM IN DEVELOPMENT OF COMPUTER-AIDED SYSTEMS.............340
A. Dimitriev THE PETRI NET FOR PLACING WORDS IN A TRANSLATED SENTENCE...............................................................................................................349
M.Y. Romodin LINGUISTIC MEANS FOR DESIGN DECISION PROTOTYPING WITH DATABASE QUERIES............................................................................................351
Experimenting with Units of a Human Behavior in Computerized Mediums

Petr Sosnin
Ulyanovsk State Technical University, Severny Venets, 32, 432027 Ulyanovsk, Russia
[email protected]
Abstract. This paper presents a scientific approach to experimenting with objects that are units of human behavior aimed at solving tasks in computerized mediums. The proposed approach is based on specifying the behavior units as precedents and on pseudo-code programming of plans for experiments. The reasoning used by the human in the experiments is registered in a question-answer form. Experimenting is supported by a specialized toolkit. Keywords: conceptual designing, designer's behavior, experience, question-answering, precedent, software intensive systems.
1 Introduction
A ubiquitous computerization of all spheres of human activity can be achieved only when a human interacts with computerized mediums in forms similar to the human's natural interactions with the natural environment. In a computerized medium, the human should be able to acquire new units of experience in accordance with human nature. Human nature is based on intellectual processing of conditioned reflexes, and such processing leads to behavior that is put into practice in the form of precedents and their complexes. According to the Cambridge dictionary, "precedents are actions or decisions that have already happened in the past and which can be referred to and justified as an example that can be followed when the similar situation arises" (http://dictionary.cambridge.org/dictionary/british/precedent). Human experience is oriented on precedents. Thus, in computerized mediums any human should have the possibility of intellectual processing (shortly, IP) of useful behavioral units for their reuse as new precedents. This will be achieved only when the applied IP allows creating precedent models that organically evolve the used experience. The named IP should be implemented by any human who applies appropriate computerized means in computerized mediums. In other words, the human should be involved in a rational way-of-working that supports interactions with the experience and its computerized models. In accordance with [1], "way-of-working is the tailored set of practices and tools used by a person or a team to guide and support their work." Consequently, a way-of-working for IP should be created on the basis of effective practices, the use of which should be supported by useful tools. Such a necessity opens a question about reliable sources of practices. In our deep belief, the experience of scientific experimenting is a very important source of practices for IP [2]. This paper presents an experiential approach to human activity that simulates scientific experimenting with behavioral units. Such simulating is supported by the specialized toolkit WIQA (Working In Questions and Answers).
2 Preliminary Bases
The specificity of the approach is defined by the following features:
1. The investigated units of behavior are interpreted as precedents, with which the human interacts to obtain helpful units of experience and its models.
2. Experimenting with the chosen unit of behavior, the human creates the corresponding precedent model, which fulfills the function of an "experimental setup".
3. Any such "experimental setup" is built to confirm the existence of a specific "cause-and-effect regularity (or regularities)" in the "naturally artificial world" of the corresponding computerized medium.
4. The existence of any investigated "cause-and-effect regularity" should be confirmed not only by the author of the experiment but also by other humans who are interested in the regularity embedded in the corresponding precedent.
5. Interactions of the human with the experience and its models (i.e., the accessible experience) are based on question-answer reasoning.
6. The approach is responsible for conceptual investigation of precedents, because any new regularity of behavior should be formed and checked as early as possible.
7. Initial models of precedents are programmed in a specialized pseudo-code language that is used by the human in coordination with question-answer reasoning. Only after that should the human rationalize the model, for example, by programming it in a professional language.
8. Any computerized action of the approach (explicitly or implicitly) applies a specialized memory with a question-answer structure that is used for storing a meta-description of the computerized medium.
Let us clarify some of these features from the side of the question-answer memory (QA-memory), the generalized scheme of which is presented in Fig. 1. The scheme indicates a case when QA-memory, with the means of its use, is included in the computerized medium.
QA-memory is a kernel of the toolkit WIQA that supports the use of question-answer reasoning (QA-reasoning) by the human in conceptual modeling of the tasks being solved. The memory of this type is specialized in registering questions and answers, any of which is repeatedly accessible to the human as a unique interactive object (of Q- or A-type). Such objects are models of real questions and answers. They can be combined in complexes (QA-objects), any of which simulates real QA-reasoning. QA-objects and their complexes can be used for modeling any objects and processes of any computerized medium [2]. In the suggested approach, this possibility is evolved up to its application to the human's experimenting with units of their own behavior in computerized mediums.
Fig. 1. Generalized scheme of experimenting
In the generalized scheme, the subject area of the computerized medium is presented by a set of tasks Z = {Zk} that are solved (and being solved) by the human. Therefore, the meta-description of the computerized medium should be specified from the side of this set of tasks. Let us notice that in QA-memory the human has the possibility of creating meta-models for QA-objects, for example, for their systematization or investigation. Thus, the human has possibilities for experimenting with their own behavioral units both in the domain of the computerized medium and in QA-memory.
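The combining of Q-, A- and Z-type interactive objects into QA-objects can be sketched in code. The following Python fragment is illustrative only: the class and field names are invented here and do not reflect the actual WIQA implementation; it merely shows units addressed by type name plus index and combined into a hierarchical QA-object that simulates a fragment of QA-reasoning about a task.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QAUnit:
    """One interactive object of QA-memory: a question (Q), an answer (A),
    or a task (Z), treated in the paper as a kind of question."""
    index: int      # unique index, appointed when the unit is created
    kind: str       # "Q", "A" or "Z"
    text: str       # textual description of the unit
    children: List["QAUnit"] = field(default_factory=list)  # subordinated units

    @property
    def address(self) -> str:
        # the unit's unique address: type name plus index, e.g. "Q2"
        return f"{self.kind}{self.index}"

# a tiny QA-object simulating a fragment of QA-reasoning about a task Z1
z = QAUnit(1, "Z", "Organize access control for the cloud storage")
q = QAUnit(2, "Q", "Which roles need write access?")
a = QAUnit(3, "A", "Only the 'operator' and 'admin' roles.")
q.children.append(a)
z.children.append(q)

print(z.address)  # -> Z1
```

Such nested units form the hierarchical QA-protocols discussed in the next section.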
3 Related Works
The version of experimentation described here is coordinated with the basic principles of the SEMAT Kernel, which is described in [3]. The version is oriented towards ways-of-working that focus on the real-time activity of developers of the programmed units of computerized mediums. "The process is what the team does. Such processes can be formed dynamically from appropriate practices in accordance with current situations. The informational cards and queue mechanisms are being used for managing of ways-of-working [3]." It is necessary to note that the offered approach is oriented on behavior units of task types for structuring the activity of humans, because solutions of tasks facilitate the enrichment of the accessible experience by scientific experimentation. By experimenting, scientists solve specific tasks creatively. In the offered approach, the scientific viewpoint correlates with the two faces of software engineering described in [4], where functional paradigms and scientific paradigms are discussed. In the context of this paper, the approach means are oriented towards the scientific paradigms used by software engineers. Therefore, an important group of related studies includes publications that present empirical viewpoints on software engineering. In this group, we note works [4] and [5], which present the domain of empirical software engineering, and papers [6] and [7], which define the Goal-Question-Metrics method (GQM-method) and the Experience Factory, which includes the Experience Base. All of the indicated studies were taken into account in the offered version of scientifically experimental ways-of-working. One more group of related publications concerns the use of question-answering in computerized mediums, for example, papers [8] and [9]. In this group, the closest research presents the experience-based methodology "BORE" [9], in which question-answering is applied as well, but for other aims; this methodology does not support programming of the creative human activity.
4 Operating Space for Experimenting

4.1 Question-Answer Memory
The offered approach is applied for experimenting with units of the human behavior in the computerized medium and in QA-memory. The basic reasons for such experimenting are the solution of a new task or an attempt to find a more effective solution for a task that was solved earlier. Thus, in any case the investigated behavior is bound with the solution of a task. The operating space of human actions is clarified generally in the scheme presented in Fig. 2.

Fig. 2. Generalized scheme of operating space
In solving the task, the human registers the used question-answer reasoning (QA-reasoning) in a specialized protocol (QA-protocol) so that this QA-protocol can be used as the task model (QA-model). Typical units of QA-reasoning are questions (Q) and answers (A) of different types. Tasks are a very important type of question; below, tasks will be designated with the symbol "Z". Models of this type can be used by the human for experimenting in real time with all of the tasks being solved. Units of the experiential behavior that are extracted from the solution processes are modeled on the basis of QA-models of tasks. The scheme also reflects that the investigated behavior model can be uploaded as the model of the corresponding precedent into the question-answer database (QA-base, QA-memory) and into the Experience Base of WIQA. After that, these models can be used by the human as units of the accessible experience. Experience models from other sources can also be uploaded into the Experience Base. A more detailed structure and content of QA-memory is presented in Fig. 3, where the following features of the human activity are reflected:
• QA-memory (as any other memory) consists of cells, any of which is intended for storing an object of a definite type;
• any human possesses a definite competence, useful units of which are applied by him/her in the processes of solving tasks;
• in the general case, the task being solved can include subordinated tasks.
Fig. 3. Detailed structure of QA-memory
Any cell has the following basic features:
1. A cell is specified by a set of normative attributes that reflect, for example, the textual description of the stored interactive object, its type ("Q", "A", "Z" or another kind) and unique name, the name of its creator, the time of the last modification and other characteristics.
2. Any cell has a unique address, the function of which is fulfilled by the type name of the stored unit and its unique index, which is appointed automatically when the unit is created. Empty cells are absent.
3. The human has a chance to appoint a number of additional attributes to the cell if it would be useful for the work involving the object stored in the cell.
4. Having chosen the necessary attributes, the human can adjust the cell for storing any question or any answer (of a definite type) in the form of an interactive object that is accessible by inquiries of the human or of programs.
Thus, any question and its answer are stored in QA-memory as a pair of related interactive objects, which is called a QA-unit. QA-units are stored in QA-memory as data; this abstract data type will be called QA-data. The use of this type of data helps to emulate other data types, including descriptions of operators. First of all, this capability is necessary for the use of QA-memory in pseudo-code programming. Thus, cells of QA-memory that are destined for storing QA-units can be adjusted to store other types of units, for example, units used in solving the tasks. In Fig. 3, the scheme of QA-memory demonstrates the storage of presentations for the following constructions:
• "Executed roles" as a hierarchical structure of a personal set P* of competence units Cq and their groups, which are specified as roles Rq;
• "Tree of tasks", which registers tasks with their subtasks;
• pseudo-code programs, the operators and data of which are emulated using QA-data.
Let us clarify these constructions. A role is a special version of a human behavior which satisfies a definite set of rules. Role specifications depend on its tasks and on the tools supporting their preliminary solving and the reuse of solutions. There are two lines of tasks' dependences. The first line is connected with hierarchical relations of the subordinated type.
The second line specifies workflows [10], when groups of tasks are executed in accordance with dynamic relations between them. In other words, executions of tasks are managed by conditions (events) arising in workflows. The program, its operators and the used data are designated as QA-program, QA-operators and QA-data, respectively, to underline that they inherit the features of QA-memory cells. Cells are used for writing units of the source (pseudo) code in forms that can be interpreted as "questions" and "answers". Objects uploaded to QA-memory are bound in hierarchical structures. In real-time work, the human interacts with such objects and processes them with the help of appropriate operations to find and test the solutions to the tasks.
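The cell model just described (normative attributes, automatic type-plus-index addressing, human-appointed additional attributes, no empty cells) can be sketched as follows. This Python fragment is a hedged illustration; all names are hypothetical and the sketch does not reproduce the actual WIQA data structures.

```python
import datetime

class QAMemory:
    """Sketch of QA-memory: cells addressed by type name plus index."""
    def __init__(self):
        self._cells = {}
        self._next_index = 1   # indices are appointed automatically

    def create(self, kind, text, creator):
        """Create a cell; its address is the type name plus a unique index."""
        address = f"{kind}{self._next_index}"
        self._cells[address] = {
            # normative attributes of any cell
            "type": kind,
            "text": text,
            "creator": creator,
            "modified": datetime.datetime.now(),
            # additional attributes appointed by the human as needed
            "extra": {},
        }
        self._next_index += 1
        return address

    def appoint(self, address, name, value):
        """Appoint an additional attribute to an existing cell."""
        self._cells[address]["extra"][name] = value

mem = QAMemory()
z = mem.create("Z", "Design the import module", creator="M. Guseva")
q = mem.create("Q", "Which file formats must be supported?", creator="M. Guseva")
mem.appoint(q, "priority", "high")
print(z, q)  # -> Z1 Q2
```

Because every stored object carries these attributes, data and operators of pseudo-code programs written into such cells inherit them, which is the "punch-card" effect discussed in Section 4.3.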
4.2 Question-Answer Modeling
One method for the conceptual solution of any task of the indicated types is based on creating its QA-model as a system of questions and answers that accompany the solution process. The generalized scheme of such a model is presented in Fig. 4.
Fig. 4. Interactions with the QA-model of a task
Question-answer and other models are created to enable the extraction of answers to the questions enclosed in the model. Moreover, the model is a very important form of representation for questions whose answers are generated during visual interactions of the human with the model. The essence of QA-modeling is the interaction of the human with the artifacts included in the QA-model in their current state. For this approach to interaction, the human can use a special set of QA-commands, their sequences and a set of WIQA plug-ins. The main subset of positive effects of QA-modeling includes the following:
• controlling and testing the reasoning of the human with the help of the "reasoning" and "understanding" included in the QA-models;
• correcting the current understanding of the human by comparing it with the "integrated understanding" embedded in the personal experience;
• combining the models of the external experience (from outside sources) with the individual experience for increasing the intellectual potential of the human.
As is shown in this scheme, any component of a QA-model is a source of answers that are accessible to the human as a result of interactions with the QA-model. At the same time, the potential of the QA-model is not limited by the questions planned when defining and creating the QA-model. Another source of useful effects of QA-modeling is the additional combinatorial "visual pressure" of questions and answers, caused by their influence on brain processes in contact with components of the QA-model, no matter who created the QA-model. There are different forms for building answers with the help of QA-modeling, which are not limited to only linguistic forms. However, the specificity of QA-modeling is defined by the inclusion of additional interactions with "question-answer objects" into the dynamics of consciousness and understanding (into the natural intellectual activity of the human).
The description of any behavioral unit composed of human interactions with the QA-model in accordance with a specific scenario can fulfill the role of a model of such a human activity. To distinguish this type of model from the other types of models used in our approach, they can be named "QA-models of the human activity". Any such scenario, as a specific program, reflects human interactions (actions) aimed at understanding the corresponding task and its solution. In the discussed case, the scenario is a text comprising instructions that indicate the human's actions, which should be executed when the behavioral unit is reused in the WIQA-medium.
Similar scenarios can be created for human actions that are not limited to the WIQA-medium. Their content, form and purpose are demonstrated by the following technique:
// Reset of Outlook Express
O1. Quit all programs.
O2. On the Start menu, click Run.
O3. In the Open box, type regedit, and then click OK.
O4. Move to and select the following key: HKEY_CURRENT_USER/Software/Microsoft/Office/9.0/Outlook
O5. In the Name list, select FirstRunDialog.
O6. If you want to enable only the Welcome to Microsoft Outlook greeting, on the Edit menu, click Modify, type True in the Value Data box, and then click OK.
O7. If you also want to re-create all of the sample welcome items, move to and select the following key: HKEY_CURRENT_USER/Software/Microsoft/Office/9.0/Outlook/Setup
O8. In the Name list, select and delete the following keys: CreateWelcome, FirstRun.
O9. In the Confirm Value Delete dialog box, click Yes for each entry.
O10. On the Registry menu, click Exit.
O11. End.
This technique was chosen to emphasize the following:
1. There are many behavior units that describe human activity in different computerized mediums.
2. Descriptions of similar typical activities help in the reuse of these precedents.
3. Descriptions of techniques have the forms of programs (N-programs) that are written in the natural language LN in its algorithmic usage.
4. Such N-programs are made of operators that are fulfilled by humans interacting with the specific computerized system.
In the example of the N-program, its operators are marked by the symbol "O" with the corresponding digital index. Thus, there are no obstacles to uploading N-programs into QA-memory. This method is used for uploading the techniques that support the human activity in the WIQA-medium. Thus, the other way of coding the human activity is connected with its programming in the context of the scientific research on the task.
All of the tasks indicated above are uploaded to QA-memory, with its rich system of operations on interactive objects of the Z-, Q- and A-types. The human has the opportunity to program the interactions with the necessary objects. Such programs are similar to plans of the experimental activity during the conceptual solution of tasks. Operators of programs are placed in Q-objects; the corresponding A-objects are used for registering the facts or features of the executed operations. Thus, in experimenting with units of their own behavior, the human has flexible means for specifying the QA-programs, QA-operators and QA-data that are used in simulating such behavioral units. Experimentation is fulfilled in the form of QA-modeling during the conceptual solution of tasks.
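The placement of operators in Q-objects with execution facts registered in the paired A-objects can be illustrated with a minimal sketch. This Python fragment is a simplified illustration of the idea, not the actual WIQA upload mechanism; the N-program steps are taken from the Outlook Express example above.

```python
# Operators of an N-program go into the Q-part of QA-units; the fact or
# result of executing each operator is registered in the paired A-part.
n_program = [
    "O1. Quit all programs.",
    "O2. On the Start menu, click Run.",
    "O3. In the Open box, type regedit, and then click OK.",
]

# build a QA-protocol: one Q/A pair per operator, answers empty until executed
qa_protocol = [{"Q": op, "A": None} for op in n_program]

# the human executes the first operator and registers the fact in its A-part
qa_protocol[0]["A"] = "done, 14:02"
```

After all A-parts are filled, the protocol records both the plan and the course of its execution, which is what makes the behavioral unit reusable as a precedent.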
4.3 Pseudo-Code Language LWIQA
QA-reasoning can be used by the human when creating different conceptual models of tasks, for example, in formulating the task statement, in cognitive analysis of the formulated statement or in (pseudo-code) programming of the solution plan of the task. The toolkit WIQA supports the creative work of the human with all of the indicated conceptual models and with conceptual models of other types. The specialized pseudo-code language LWIQA has been developed for the use of QA-reasoning in programming the solution plans. This language is oriented towards its use in experiential interactions of the human with the accessible experience when they create programs of their own activity and investigate them. Step by step, LWIQA has evolved to a state with the following components:
1. Traditional types of data, such as scalars, lists, records, sets, stacks, queues and other data types.
2. A data model of the relational type, describing the structure of the database.
3. Basic operators, including traditional pseudo-code operators, for example, Appoint, Input, Output, If-Then-Else, GOTO, Call, Interrupt, Finish and others.
4. SQL-operators in a simplified subset, including Create Database, Create Table, Drop Table, Select, Delete From, Insert Into, and Update.
5. Operators for managing workflows oriented towards collaborative designing (Seize, Interrupt, Wait, Cancel and Queue).
6. Operators for visualization, developed for the creation of dynamic views of cards presenting QA-units in the human's direct access to objects of QA-memory.
An important type of basic operator is an explicit or implicit command aimed at the execution of a specific action by the human. Explicit commands are written as imperative sentences in the natural language in its algorithmic usage.
When human interactions with descriptions of questions or answers are used as causes for human actions, such descriptions can be interpreted as implicit commands written in LWIQA. For example, textual forms of questions are a very important class of implicit commands. In the general case, a QA-program can include data and operators from the different enumerated subsets. However, the traditional meaning of such data and operators is only one aspect of their content. The other side is bound with the attributes of the QA-units into which data and operators are uploaded. As described above, QA-data and QA-operators inherit the attributes of the corresponding cells of QA-memory. They inherit not only the attributes of QA-units but also their understanding as "questions" and "answers". Originally, QA-data had been suggested and developed for real-time work with such interactive objects as "tasks", "questions" and "answers", which were stored in the QA-database and used by the human in the corporate network. It is necessary to recall that a "task" is a type of question and the "decision of the task" is an answer to a question. The second type of pseudo-code strings is intended for writing the commands (operators). The symbol string of the "question" can be used for writing the operator in the pseudo-code form; the fact or the result of the operator execution will be marked or registered in the symbol string of the corresponding "answer".
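To make the role of the basic operators concrete, the following Python sketch interprets a few of them (Appoint, Output, GOTO, Finish) over a list of operator strings. The textual syntax used here (e.g. "APPOINT x = 5") is invented for illustration; the paper does not fix LWIQA's surface syntax, and the real language is executed over QA-units rather than plain strings.

```python
def run(program):
    """Minimal interpreter for a few basic operators of an LWIQA-like
    pseudo-code: APPOINT, OUTPUT, GOTO, FINISH (syntax invented here)."""
    env, pc, out = {}, 0, []
    while pc < len(program):
        op, *rest = program[pc].split(maxsplit=1)
        arg = rest[0] if rest else ""
        if op == "APPOINT":                # APPOINT x = 5
            name, value = arg.split("=")
            env[name.strip()] = int(value)
        elif op == "OUTPUT":               # OUTPUT x
            out.append(env[arg.strip()])
        elif op == "GOTO":                 # GOTO 3  (operator index)
            pc = int(arg)
            continue
        elif op == "FINISH":
            break
        pc += 1
    return out

print(run(["APPOINT x = 5", "OUTPUT x", "FINISH"]))  # -> [5]
```

In the real toolkit, each such operator string would occupy the Q-part of a QA-unit, and the interpreter would register execution results in the paired A-part.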
The following remarks explain the specificity of QA-operators and their use:
1. Any sentence in any natural language includes an interrogative component, which can be indicated explicitly or implicitly. In QA-reasoning, this component is used explicitly, while in the pseudo-code operator, the question is presented implicitly.
2. The named interpretation opens the possibility of registering pseudo-code programs in QA-memory in the form of programmed QA-models of the corresponding tasks.
3. In this case, any pseudo-code operator, presented by a pair of coordinated interactive objects of Q- and A-types, is written on the "surface" of the corresponding QA-unit in QA-memory.
4. Thus, the used QA-unit can be interpreted as the "material for writing" of the corresponding operator of the source pseudo-code. This "material" comprises visualized forms for writing the symbol strings that were originally intended for registering texts in the field "textual description" of the corresponding QA-unit. The initial applicability and features of such a type of strings are inherited by the data and operators of pseudo-code programs. It is possible to assume that data and operators are written on "punch-cards", the features of which (basic and useful additional attributes of the corresponding QA-units) are accessible for processing together with the textual descriptions when necessary.
5 Simulating the Human's Behavior
5.1 Preparation of Experiments
The principal feature of the proposed approach is the experimental investigation by the human of the programmed behavior that has led to the conceptual solution of the appointed task. Any solution of this type should demonstrate that its reuse meets the necessary requirements when any member of the team acts in accordance with the QA-program of the investigated behavior. As described above, to achieve these goals, the human should work in a way similar to a scientist who prepares and conducts experiments with behavior units of the M- or P-types. In the discussed case, the human experiments in the environment of the toolkit WIQA. In this environment, to prove that the aim of an experiment has been achieved, the human can experiment with any QA-operator of an investigated QA-program, with any group of such QA-operators, or with the QA-program as a whole. Describing the experiment for reuse, the human should register it in a form understandable to the other members of the team. To begin a specific experiment, the initial text of the QA-program should be built. In the general case, this process includes the following steps:
1. Formulation of the initial statement of the task.
2. Cognitive analysis of the initial statement with the use of QA-reasoning, and its registration in QA-memory.
3. Logical description of the "cause-effect relation" reflected in the task.
4. Diagrammatic presentation of the analysis results (if necessary or useful).
5. Creation of the initial version of the QA-program.
The indicated steps are fulfilled by the human with the use of the accessible experience, including personal experience and useful units from the Experience Base of WIQA.
5.2 Experimenting with the QA-program
Only afterward can the human conduct the experiment, interacting with the QA-program in the context of the accessible experience. The specificity of interactions can be clarified with examples of QA-operators of any QA-program or its fragment, for example, the following fragment of a QA-program coding the well-known method of SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats):

Q 2.5 PROCEDURE &SWOT main&
Q 2.5.1 &t_str& := QA_GetQAText(&history_branch_qaid&)
Q 2.5.2 SETHISTORYENTRIES(&t_str&)
Q 2.5.3 CALL &ShowHistory&
Q 2.5.4 IF &LastHistoryFormResult& == -1 THEN RETURN
Q 2.5.5 IF &LastHistoryFormResult& == 0 THEN &current_action_qaid& := QA_CreateNode(&current_project&, &history_branch_qaid&, 3, "") ELSE &current_action_qaid& := &LastHistoryFormResult&
Q 2.5.6 &t_str& := QA_GetQAText(&current_action_qaid&)
Q 2.5.7 SWOT_DESERIALIZE(&t_str&)
Q 2.5.8 &t_int& := SWOT_SHOWMAINFORM()
...
Q 2.5.14 FINISH

This source code demonstrates an often-used syntax, but the features of the code are opened in the interactions of the human with it. The conditions and methods of experimenting are shown in Fig. 5, where one of the operators (with the address name Q2.5.2) is shown in the context of the previous and subsequent operators. Any QA-program is executed by the human step by step, where each step is aimed at the corresponding QA-operator. In this work, the human uses the plug-in "Interpreter" embedded into the toolkit. Interpreting the current operator (for example, Q2.5.2), the human can fulfill any actions before its activation (for example, to test the existing circumstances) and after its execution (for example, to estimate the results of the investigation), using any means of the toolkit WIQA. When the human decides to start work with a QA-operator, this work can include different interactive actions with it as with the corresponding QA-units or their elements. The human can analyze the values of their attributes and make useful decisions.
Moreover, the human can appoint the necessary attributes for any QA-operator and for any unit of QA-data at any time. In accordance with these appointments, the human can include changes in the source code of the QA-program being executed (investigated). Such work can be fulfilled in QA-memory with the help of the plug-in "Editor".
[Fig. 5. Experimenting of the designer with a QA-program: in the toolkit WIQA, the human works through the plug-ins "Editor", "Compiler" and "Interpreter" with the operators of a QA-program (e.g., Q2.5.1, Q2.5.2, Q2.5.3) and their answers stored in QA-memory.]
The current QA-program or its fragments can be executed step by step by the human or automatically as a whole with the help of the plug-in "Compiler". Therefore, all of the work described above for a QA-operator can be applied to any group of operators and to any QA-program as a whole. For this reason, the execution of a QA-operator by the human is, in essence, experimentation. Thus, the human has the flexible possibility of performing experimental research on any task that is solved conceptually. This is the principal feature that distinguishes pseudo-code QA-programs from programs written in pseudo-code languages of other types, including the class of Domain-Specific Languages. The specificity of the described type of human activity is work controlled by the QA-program and executed by the human interacting with the accessible experience. To underline this specificity, the specialized role of "intellectual processor" was constructively defined and is effectively supported in the use of WIQA [11]. This role is added to the other types of roles that are applied in conceptual design [12].
5.3 Description of Experiments
As described above, any conducted experiment should be presented by the human in an understandable and reusable form. In the offered version of experimentation, this function is fulfilled by the typical integrated model of the precedent, shown in Fig. 6.
[Fig. 6. Framework of the precedent model. The logical scheme at its center has the form: Name of precedent Pi: while [logical formulae (F) for motives M = {Mk}] as [F for aims C = {Cl}] if [F for preconditions U' = {U'n}] then [plan of reaction (program) rq] end so [F for post-conditions U'' = {U''m}]; there are alternatives {Pj(rp)}. The scheme is surrounded by the projections PT (task), PQA, PL, PG, PI and PE, which connect the model with the human.]
The scheme, which fulfills the function of a framework F(P) for models of precedents, allows the integration of the useful information that accompanies the experimental process, as indicated above. The central position in this model is occupied by the logical scheme of the precedent, which explicitly formulates the "cause-effect regularity" of the simulated behavior of the human. The framework F(P) includes the following components:
• the textual model PT of the solved task;
• its model PQA in the form of registered QA-reasoning;
• the logical formulae PL of the modeled regularity;
• the graphical (diagram) representation PG of the precedent;
• the pseudo-code model PI in QA-program form;
• the executable code PE.
Any component or any group of components can be interpreted as projections of F(P), the use of which allows us to build the precedent model in accordance with the precedent's specificity. However, in any case, the precedent model should be understandable to its users.
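As a rough illustration (the names below are ours, not WIQA's), the framework F(P) with its six projections can be modeled as a record in which any subset of filled projections forms a concrete precedent model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrecedentModel:
    """Framework F(P): six projections of a precedent; any of them may be absent."""
    PT: Optional[str] = None   # textual model of the solved task
    PQA: Optional[str] = None  # registered QA-reasoning
    PL: Optional[str] = None   # logical formulae of the modeled regularity
    PG: Optional[str] = None   # graphical (diagram) representation
    PI: Optional[str] = None   # pseudo-code model in QA-program form
    PE: Optional[str] = None   # executable code

    def projections(self):
        """Return the names of the projections actually filled in."""
        return [name for name, value in vars(self).items() if value is not None]

p = PrecedentModel(PT="SWOT analysis task", PI="Q 2.5 PROCEDURE &SWOT main& ...")
print(p.projections())  # -> ['PT', 'PI']
```

The design intent mirrored here is that the precedent model is built from whichever projections fit the precedent's specificity, rather than always requiring all six.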
6 Conclusions
The approach described in this paper suggests a system of methods that support the experimenting of the human with units of their own behavior in computerized mediums. The human simulates such behavior units with the help of pseudo-code programs that describe the plans for experimentation. Thus, humans investigate the programmed plans of the experiments that they prepare and conduct, and they describe the results in understandable and checkable forms for later reuse. In the experiments, the investigated behavioral units are modeled as precedents. Such a form of human activity is natural because the intellectual processing of precedents comprises the base of human experience. The toolkit WIQA opens the possibility of the separate execution of any operator by the human. Before and after the execution of any operator of any QA-program, the human can check or investigate its preconditions and post-conditions. Moreover, the investigated operator can be changed and evolved syntactically as well as semantically, for example, with the help of additional attributes. The possibility of experimenting is supported by a special library of QA-programs destined for cognitive task analysis, problem solving and decision making, included in the named toolkit. The suggested means are used in a project organization in creating a family of SISs.
7 References
[1] I. Jacobson, P.-W. Ng and I. Spence, "Enough of processes: let's do practices," JOT, vol. 6, no. 6, pp. 41-67, (2007).
[2] P. Sosnin, "Experiential human-computer interaction in collaborative designing of software intensive systems," in Proc. of the 11th International Conference on Software Methodology and Techniques, Genova, Italy, pp. 180-197, (2012).
[3] I. Jacobson, P.-W. Ng, P. McMahon, I. Spence and S. Lidman, "The essence of software engineering: the SEMAT kernel," Queue, vol. 10, no. 10, pp. 1-12, (2012).
[4] C. Cares, X. Franch and E. Mayol, "Perspectives about paradigms in software engineering," in Proc. of the 2nd International Workshop on Philosophical Foundations of Information Systems Engineering, pp. 737-744, (2006).
[5] D.R. Jeffery and L. Scott, "Has twenty-five years of empirical software engineering made a difference?" in Proc. of the 2nd Asia-Pacific Software Engineering Conference, pp. 539-549, (2002).
[6] D.I.K. Sjoberg, T. Dyba and M. Jorgensen, "The future of empirical methods in software engineering research," in Proc. of the Workshop on the Future of Software Engineering, IEEE, Minneapolis, USA, pp. 358-378, (2007).
[7] F. Karray, M. Alemzadeh, J.A. Saleh and M.N. Arab, "Human-computer interaction: overview on state of the art," Smart Sensing and Intelligent Systems, vol. 1, no. 1, pp. 138-159, (2008).
[8] B. Webber and N. Webb, "Question answering," in Clark, Fox and Lappin (eds.): Handbook of Computational Linguistics and Natural Language Processing, Wiley-Blackwell, Oxford, UK, pp. 630-655, (2010).
[9] S. Henninger, "Tool support for experience-based software development methodologies," Advances in Computers, vol. 59, Elsevier, pp. 29-82, (2003).
[10] M. Held and W. Blochinger, "Structured collaborative workflow design," Future Generation Computer Systems, vol. 25, no. 6, pp. 638-653, (2009).
[11] P. Sosnin, "Pseudo-code programming of designer activity in development of software intensive systems," in Proc. of the 25th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems (IEA/AIE 2012), Dalian, China, pp. 457-466, (2012).
[12] P. Borges, R.J. Machado and P. Ribeiro, "Mapping RUP roles to small software development teams," in Proc. ICSSP, Zurich, Switzerland, pp. 190-199, (2012).
Security Concept for a Cloud in a Non-Destructive Testing Environment Alexander Mathes, Alois Schuette, Michael Bron, Vadim Shishkin, Denis Stenyushkin University of Applied Sciences Darmstadt, Department of Computer Science, Schoefferstr. 8a, 64295 Darmstadt, Germany
[email protected] [email protected] http://www.fbi.h-da.de ScanMaster Systems Ltd. 23 Hamelacha St., Afek Park P.O.Box 11431, Rosh Ha’ayin 48091, Israel
[email protected] http://www.scanmaster-irt.com Ulyanovsk State Technical University 32, Severny Venetz str. 432027 Ulyanovsk, Russia
[email protected] http://www.ulstu.ru Cyber Systems Development, Ltd 20, Borodina str. 432012 Ulyanovsk, Russia
[email protected] http://ritg.ru/
Abstract. The present paper emerged in the context of a project called DSUNDT (Distributed System for Ultrasonic Non-Destructive Tomography) between the German University of Applied Sciences Darmstadt and the companies ScanMaster Systems from Israel and Cyber Systems Development from Russia. An essential part of this project is a cloud system that serves for processing and distributing data obtained from external non-destructive testing equipment. The data processed inside the cloud is the private property of ScanMaster's customers, which makes the security of the data and of the cloud itself an important aspect of the whole project. This paper discusses our security measures for the cloud, including authentication procedures, the protection of customer data at rest as well as of interaction with the cloud, and the overall separation of customers on the whole system.

Keywords: Cloud, Security, Authentication, Encryption, X.509
1 Introduction
This paper describes the security measures for a cloud we built within an international project called DSUNDT (Distributed System for Ultrasonic Non-Destructive Tomography). The participants of this project are ScanMaster Systems from Israel, Cyber Systems Development from Russia and the University of Applied Sciences Darmstadt from Germany. As the title suggests, the background of our project is the area of non-destructive testing. The aim is to upgrade the ordinary 2D representation of test data from a test object into a 3D model of the whole item, including the acquired test data, for a better representation of the structural defects. This should lead to an enhancement in the quality of fault recognition. Because processing the test data and building the 3D model are very resource-consuming tasks, those operations take place in our cloud, so that the hardware costs of the testing equipment for the customers can be kept low. Figure 1 shows an abstract sequence of the necessary steps for a testing process within DSUNDT. First, test data is obtained via ultrasonic with specific testing equipment from ScanMaster (1). Then, the data is sent into the cloud (2) for further processing (3). The generated 3D models are transmitted to the customer's workstation (4), where the actual examination of the test item for structural defects is done by the customer (5).

[Fig. 1. Abstract sequence of steps for a testing process: (1) generating test data via ultrasonic testing equipment, (2) transmitting the test data into the cloud, (3) processing the data and generating the 3D model of the test object in the cloud, (4) transmitting the 3D model, (5) analysing the test object via the 3D model at the customer workstation.]
The cloud is the central component of the system, where the data of different customers is stored and processed. In the following, we will discuss the security measures taken for the cloud and for the customer data stored inside the system. The paper proceeds as follows: in section 2, we provide an overview of the cloud's architecture and the different actors of the system. Section 3 describes the authentication mechanism for the users of our cloud. The different ways to interact with the cloud are discussed in section 4. How we secure customer data at rest is described in section 5, and in section 6 we explain how we secure communication with the cloud. Finally, our conclusions are presented in section 7.
2 The Cloud
The purpose of the cloud is to store, process and distribute customer data. We distinguish between three user roles for the cloud. The first role comprises the customers of ScanMaster, who are in fact the main users of the system; however, they do not interact with it directly. The second user role is ScanMaster itself, which is responsible for user management and maintenance. The third role is the administrator of the whole system, who takes care of the underlying hardware infrastructure as well as the software components. Each role has different ways to interact with the cloud; we go into detail on this in section 4. At first, we take a look at the cloud itself. The cloud is based on the open-source cloud framework OpenNebula [1]. The popular alternative OpenStack [2] was evaluated but considered oversized for this project. As shown in figure 2, the cloud consists of three main components: a front-end as a centralized point for managing and interacting with the cloud, one or more host systems for executing the virtual machines (VMs) of the customers, and an image repository where the VM images are stored. Every customer gets his own VM image on the cloud. This VM serves for storing and processing this customer's data. Therefore, data has to be transmitted to this VM from the customer's non-destructive testing device and transferred from the VM to the customer's workstation for further analysis. However, this part of data transmission inside the system is not covered in this paper. Access to a customer's VM is restricted to this specific customer and to ScanMaster (or the administrator) for maintenance work. The image on which the VM is based, and which therefore contains the customer's data, is stored in the image repository.
3 User Authentication
Users of the cloud, whatever their role, need to authenticate themselves against the cloud in order to get access to it. OpenNebula provides an authentication method based on X.509 certificates. We make use of this method, which requires every user of the cloud to be in possession of a valid certificate. Therefore, we provide users with a self-signed X.509 certificate, which is sufficient for our purpose. For creating and signing certificates, we make use of an open-source
software called OpenSSL [3]. Users have to install their certificate in the browser through which they access the cloud in order to get properly authenticated. In addition to the certificate, customers as well as ScanMaster obtain a password for the web applications provided by us (see the next section). Thus, we have a two-factor authentication with a possession factor (the X.509 certificate) and a knowledge factor (the password). While the password is validated by our applications, the validity of the certificate is checked by the Apache server which runs our applications (see figure 3). Furthermore, the authentication process on the inspection system is done through services running on the inspection system and the customer VM using secure tunnel protocols.

[Fig. 2. The cloud with its three main components: the OpenNebula front-end, the host system executing the customer VMs (each holding one customer's data), and the image repository.]
4 User Interaction
Figure 3 shows the different users of the cloud and their ways to interact with the system. In the following subsections, we will go into more detail on this.
4.1 Customer
Customers' access to the cloud is restricted to their own VM. But because customers are only interested in the data stored inside their VM, they do not need to access the VM directly. Instead, we provide a web application where customers are able to log in via their password and certificate and can perform some basic actions on their data. This includes listing and displaying all of their data currently stored in the VM and deleting selected data objects. In this way, we give a customer the smallest possible access to the cloud in the form of the interface of our application. Depending on the correctness of the provided password and certificate, both of which are needed for a successful authentication, a customer gets access to his and only his VM. Therefore, no customer should be able to get access to another customer's VM as long as he is not in possession of that customer's password and certificate.
4.2 ScanMaster
We also provide a web application for ScanMaster. This application serves for user administration tasks such as adding a new customer to the system, listing and displaying all customers, deactivating and reactivating customers and, finally, deleting customers. Just like a customer, a ScanMaster user needs to provide a password and a certificate in order to authenticate himself before he gets access to the cloud via our application. ScanMaster is also able to access the cloud directly via the front-end. This can be done in two ways: either via the command line interface (CLI) or via a web interface called Sunstone. In both cases the authentication against the cloud relies only on the X.509 certificate; no password is needed. In addition, ScanMaster is able to access the customer VMs for maintenance purposes (e.g., system updates). Because the customer VMs run Microsoft's Windows 7, the connection to these VMs is established via the Remote Desktop Protocol (RDP), the default mechanism for remote connections to a Windows operating system.
4.3 Administrator
The administration of the system can be done either by ScanMaster or by a third party. In either case, the administrator has direct access to the cloud through the front-end, via the CLI and Sunstone. Again, only a certificate is needed for authentication.
5 Secure Storage of Data
The data of a customer is stored on a partition inside this customer's VM image. The VM images and, therefore, all customer data are stored on the hard drives of the image repository, which is realized as a Network Attached Storage (NAS) and connected to all host systems of the cloud. To secure the customer data inside the image, we make use of encryption. The whole data partition inside the VM image is encrypted with the help of the open-source tool TrueCrypt [4]. The data is encrypted via the Advanced Encryption Standard (AES) algorithm with a 256-bit key, which provides a high security level. Because there is a need to automatically mount the encrypted partition inside the VM, we make use of a key file instead of a password to encrypt and decrypt the partition and, therefore, the data stored on it. Since it is important to be able to automatically
mount the partition every time a VM boots, and in order to simplify this step, the key file is the same for all customer VM images. Nevertheless, no customer has access to it; it remains stored on the cloud and is only accessible to ScanMaster or the administrator. However, there might be a problem with encrypted data when it comes to runtime speed while working with such data, because the data needs to be decrypted before it can be processed by any other program. In this case, we can change to a smaller key with only 128-bit length, which accelerates the process of encryption and decryption while still providing a sufficient security level.

[Fig. 3. The various ways of interacting with the cloud: the customer and ScanMaster applications run on an Apache server reached via web browsers over TLS; ScanMaster and the administrator also access the OpenNebula front-end via the CLI (SSH) and Sunstone; ScanMaster additionally reaches the customer VMs on the host system via RDP (TLS).]
6 Transmission of Data
In section 4 we described the various ways of interaction between the users and the cloud. There are the two web applications for the customers and ScanMaster, which run on an Apache server as shown in figure 3. The connection to this server is secured via Transport Layer Security (TLS). In order to establish a TLS connection, the server is in possession of its own X.509 certificate, just as every cloud user is. The other web interface, Sunstone, has its own server. Access to Sunstone is only granted with a valid certificate, which has to be integrated into the user's browser. However, Sunstone does not validate the user's certificate by itself; it needs an additional TLS proxy server for this job, and our Apache server takes on this role. Therefore, the connection with Sunstone is secured via TLS, too. ScanMaster or the administrator is able to access the cloud via the command line interface. If the user has no physical access to the machine which runs OpenNebula, a remote connection to the system is needed. Because the machine on which the cloud is installed runs a Linux distribution, we make use of the Secure Shell (SSH) server. SSH provides encrypted communication based on the AES algorithm with a key length of 128 bits, but other cryptographic algorithms such as 3DES or Twofish are also supported. Though SSH has its origin in the Linux/Unix world, there is also a popular SSH client for Windows systems called PuTTY [5]. At last, there is the RDP connection, which allows ScanMaster to access the customer VMs. RDP supports encrypted connections via TLS, so we make use of it.
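The role Apache plays here, terminating TLS and demanding a valid client certificate before traffic reaches the applications or Sunstone, can be sketched with Python's standard ssl module. This is an illustration of the same TLS configuration, not the project's Apache setup, and the file paths are placeholders:

```python
import ssl

def make_proxy_context(server_cert: str, server_key: str, ca_file: str) -> ssl.SSLContext:
    """Build a TLS server context that, like the Apache proxy in the paper,
    requires every client to present a certificate signed by a trusted CA."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)  # server's own X.509
    ctx.load_verify_locations(cafile=ca_file)  # CA that signed the users' certificates
    ctx.verify_mode = ssl.CERT_REQUIRED        # reject clients without a valid certificate
    return ctx

# A socket wrapped with this context only completes the TLS handshake if the
# browser presents a certificate the trusted CA has signed; otherwise the
# connection is refused before any application code runs.
```

Setting `verify_mode = ssl.CERT_REQUIRED` is what turns an ordinary TLS server into one that enforces the possession factor at the transport layer.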
7 Conclusion
In the previous sections we showed that every customer of ScanMaster gets a separate space on the cloud in the form of his own VM image. Access to the VM image for the customers is controlled by us via an additional web interface. Thus, customers do not interact with the cloud directly, and we are able to separate them from each other in a sound way. Through a two-factor authentication mechanism based on an individual password and an X.509 certificate, we make sure that no unauthorized person is able to access a customer's VM or the cloud and, especially, that no customer gets access to the VM of another customer and, therefore, to that customer's private data. To prevent unauthorized persons with physical access to the cloud from obtaining customer data, this data is encrypted via AES before it is written to any hard disk. Finally, communication with the cloud is secured via TLS to prevent the eavesdropping of potentially sensitive communication data.
8 References
1. OpenNebula Project, C12G Labs, www.opennebula.org
2. OpenStack Project, http://www.openstack.org
3. OpenSSL Project, www.openssl.org
4. TrueCrypt, TrueCrypt Foundation, www.truecrypt.org
5. PuTTY, www.putty.org
Web-Based time series fuzzy modeling for the analysis of enterprise economics I.G. Perfilieva Centre of Excellence IT4Innovations, division OU, Institute for Research and Applications of Fuzzy Modeling, University of Ostrava, Ostrava 70103, Czech Republic,
[email protected]
N.G. Yarushkina, T.V. Afanasieva, A.A. Romanov Information Systems Department Ulyanovsk State Technical University, Ulyanovsk 432027, Russia,
[email protected],
[email protected],
[email protected]
Abstract. Intelligent methods and fuzzy models for the analysis and linguistic interpretation of non-linear dependences in source data are used in an Internet service for the express analysis of time series of enterprise economic indicators.
1 Introduction
The regular express analysis of economic situation dynamics, expressed in time series, allows managers of any level to uncover problems of enterprise performance, quality changes in the financial structure and the trends of those changes. Commonly, managers are incapable of conducting a considerable analysis due to a lack of time and skills. In addition, applying statistical methods may be difficult because economic time series are too short. One solution to this problem is to use a procedure for the analysis and prediction of time series of key enterprise indicators that can generate results in linguistic terms understandable to any manager. The objects of enterprise express analysis possess an objective vagueness, which can be processed with the help of fuzzy methods. For instance, economic indicators can be described on the basis of fuzzy quality characteristics, such as Satisfactory, Good, and Bad, and their changes can be described by fuzzy sets, such as Growth, Fall and Stability. The last decade has seen research on economic data conducted by data mining, where time series analysis combined with the processing of linguistic terms is called fuzzy time series data mining. There are various models of vagueness that are used for the processing of time series with vague observations. Depending on the chosen model, the following
techniques were proposed for time series analysis and forecasting: fuzzy IF-THEN rules over fuzzy sets [3], [2]; decision rules over rough sets [11]; and the measuring of fuzzy roughness in order to establish a similarity between a part and the whole of a time series [9]. This branch is developed in the works of a number of scientists, including the considerable works of Q. Song and B. Chissom [2], [3], V. Novak and others [5], I. Perfilieva [6], and N. Yarushkina [11]. In 1993, Q. Song and B. Chissom [3] proposed models of stationary and non-stationary first-order fuzzy time series. They applied these models to predict the number of registered students at the University of Alabama. Crisp time series were fuzzified beforehand. In 2004, N. Yarushkina [12] defined the concept of the fuzzy tendency for fuzzy time series and determined the research tasks. In 2006, I. Perfilieva [6] applied the fuzzy modeling of time series trends based on the F-transform. After the proposed models were comprehensively examined, expansions were developed and problems were revealed. The aim of this contribution is to show that the integration of intelligence-oriented techniques (the F-transform for the decomposition of a time series into a vector of fuzzy local tendencies and residuals, fuzzy IF-THEN rules for the fuzzy local tendency forecast, and autoregressive models for the residuals forecast) can be successfully used to forecast time series, especially the short time series of enterprise economics. The integration means that the techniques are applied simultaneously to the same time series in order to obtain better forecasting results compared to those obtained by each technique solely. The structure of this contribution is as follows: the first part briefly describes the methods for the analysis of enterprise economic indicators using fuzzy modeling and fuzzy tendency forecasting; the second part demonstrates the structure of a Web-based support system for managers; and the third part shows the test results.
2 Procedure for the analysis of enterprise economics
Among the new tools that have arisen in the Internet service market, enterprise performance evaluation is considered important for maintaining positions in business. An important problem is the analysis of changes in the performance indicators of an enterprise, expressed in time series, in comparison with the market sector trends. The authors developed an Internet service for the express analysis of economic indicators based on the integral method of fuzzy modeling and fuzzy tendency forecasting (http://tsas.ulstu.ru). This software is designed to support managers in decision making. It uses the numerical data of public reports and outputs recommendations in a linguistic form. The source data for the analysis are public financial statements (balance sheets and income statements) arranged into groups: liquidity and solvency ratios, profitability ratios, and activity ratios. It is assumed that the analyzed statements are available for the defined period, which can be expressed as a time series: a sequence of values stated at equally spaced time moments (the total number of values must be
Web-Based time series fuzzy modeling for the analysis of enterprise economics 31
more than seven). The method for enterprise performance analysis according to economic indicators includes three steps. Step 1: calculation of indicators. Step 2: indicator forecasting. Step 3: linguistic summarization of the economic indicator forecasting results.

We focus on time series of economic indicators, so we assume that an analyzed time series is non-stationary and can be decomposed as follows:

x_t = f(t) + y_t.   (1)

In (1), f(t) is a deterministic part, usually called a trend, and y_t is a random part, which is additionally assumed to be stationary with zero mean and constant variance. A general model of a stationary time series y_t can be represented in the form of autoregression [10]:

y_t = α_1 y_{t−1} + · · · + α_p y_{t−p} + ε_t,   (2)

where the coefficients satisfy some additional requirements and ε_t is white noise. The form (2) is denoted by AR(p); it is the simplest representation of the time series y_t that can be used for its forecast. Any approach to time series modeling and forecasting is based on a certain approximate representation of the time series by a formal model. Therefore, an a priori chosen quality of approximation is used to estimate the unknown parameters of a model and to choose the best model among the available ones. If statistics is chosen as the underlying theory, then the quality of approximation is usually the least-squares distance.

We assume that a time series {x_t, t ∈ [1, T]} can be decomposed in accordance with (1). Therefore, our first task is to extract its trend. For this purpose, we propose the technique called the F-transform ("F" stands for fuzzy), because it has many successful applications in data and image processing [5], [6], [7]. Generally, the (discrete) F-transform of a function f : P → ℝ is a vector whose components can be considered weighted local mean values of f. Throughout this paper, ℝ denotes the set of real numbers, [a, b] ⊆ ℝ, and P = {p_1, . . .
, p_l}, n < l, denotes a finite set of points such that P ⊆ [a, b]. A function f : P → ℝ defined on the set P is called discrete. The first step in defining the F-transform of f is selecting a fuzzy partition of the interval [a, b] using a finite number n ≥ 3 of fuzzy sets A_1, . . . , A_n. In [6], we used five axioms to characterize a fuzzy partition. In [8], the number of axioms was reduced to four, and such a fuzzy partition was called relaxed. Below, we repeat the latter definition.

Definition 3.1. Let [a, b] be an interval on ℝ, n ≥ 3, and x_0, x_1, . . . , x_n, x_{n+1} nodes such that a = x_0 ≤ x_1 < · · · < x_n ≤ x_{n+1} = b. Let P ⊆ [a, b] be a finite set of points such that P = {p_1, . . . , p_l}, l > n + 2. We say that the fuzzy sets A_1, . . . , A_n : [a, b] → [0, 1], identified with their membership functions, constitute a fuzzy partition of both sets [a, b] and P if the following conditions are satisfied:
(1) (locality) for every k = 1, . . . , n, A_k(x) = 0 if x ∈ [a, b] \ (x_{k−1}, x_{k+1});
(2) (continuity) for every k = 1, . . . , n, A_k is continuous on [x_{k−1}, x_{k+1}];
(3) (density) for every k = 1, . . . , n, Σ_{j=1}^{l} A_k(p_j) > 0;
32 N.G. Yarushkina, T.V. Afanasieva, A.A. Romanov
(4) (covering) for every j = 1, . . . , l, Σ_{k=1}^{n} A_k(p_j) > 0.

A fuzzy partition is called uniform if the fuzzy set A_1 is symmetrical (with respect to the axis x = x_1) and A_2, . . . , A_n are shifted copies of A_1, i.e. A_k(x) = A_1(x − x_k), k = 2, . . . , n (the associated details can be found in [8]). The membership functions A_1, . . . , A_n in the fuzzy partition are called basic functions. We say that the basic function A_k covers a point p_j if A_k(p_j) > 0. The formal expression of A_k, k = 1, . . . , n, is given below, where h = (b − a)/(n + 1):

A_k(x) = 1 − |x − x_k|/h, if x ∈ [x_{k−1}, x_{k+1}]; 0, otherwise.   (3)

In what follows, we fix the interval [a, b], a finite set of points P ⊆ [a, b] and a fuzzy partition A_1, . . . , A_n of [a, b]. Denote a_kj = A_k(p_j) and consider the n × l matrix A with elements a_kj. We say that A = (a_kj) is a partition matrix of P. Once the basic functions A_1, . . . , A_n (of a relaxed fuzzy partition of [a, b]) are selected, we define [6] the (direct) F-transform of a discrete function f : P → ℝ as the vector F_n(f) = (F_1, . . . , F_n)^T, where the k-th component F_k equals

F_k = Σ_{j=1}^{l} f(p_j)·A_k(p_j) / Σ_{j=1}^{l} A_k(p_j),   k = 1, . . . , n.   (4)
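As an illustration (our sketch, not the authors' code), the discrete F-transform (3)-(4) takes only a few lines. The partition parameters and the toy series below are assumptions made for the example.

```python
import numpy as np

def f_transform(f_vals, p, a, b, n):
    """Direct F-transform of a discrete function given on points p of [a, b],
    with respect to a uniform triangular fuzzy partition A_1..A_n (eqs. (3)-(4))."""
    h = (b - a) / (n + 1)
    nodes = a + h * np.arange(1, n + 1)          # nodes x_1, ..., x_n
    # partition matrix A with a_kj = A_k(p_j): triangular basic functions (3)
    A = np.maximum(0.0, 1.0 - np.abs(p[None, :] - nodes[:, None]) / h)
    return (A @ f_vals) / A.sum(axis=1)          # components F_k, eq. (4)

# assumed toy series: 12 observations at time moments 1..12
t = np.arange(1, 13, dtype=float)
x = 0.5 * t + np.sin(t)
F = f_transform(x, t, 1.0, 12.0, 4)              # four weighted local means of x
```

The matrix form (5) is visible in the last line of the function: `A @ f_vals` computes the vector (Af)_1, ..., (Af)_n, and the row sums of A give a_1, ..., a_n.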
To stress that the F-transform components F_1, . . . , F_n depend on A_1, . . . , A_n, we say that the F-transform is taken with respect to A_1, . . . , A_n. Let us identify the function f : P → ℝ with the column vector f = (f_1, . . . , f_l)^T of its values on P, so that f_j = f(p_j), j = 1, . . . , l. Moreover, let the partition A_1, . . . , A_n be represented by the n × l matrix A. Then, the vector

F_n(f) = (F_1, . . . , F_n)^T = ((Af)_1/a_1, . . . , (Af)_n/a_n)^T   (5)

is the F-transform of f with respect to A_1, . . . , A_n, where (Af)_k is the k-th component of the vector Af and a_k = Σ_{j=1}^{l} a_kj, k = 1, . . . , n. Expression (5) is a matrix form of the F-transform of f. Because (5) involves linear algebra operations, computation on its basis is easier than that based on (4).

It is easy to see that, when the basic functions A_1, . . . , A_n are fixed, the trend f in the form of (4) is determined by the F-transform components F_1, . . . , F_n. The sequence {F_k, k ∈ [1, n]} is considered to be a new time series (see Fig. 1) with observations F_k. We call it the time series of F-transform components and use it to forecast the original time series {x_t, t ∈ [1, T]}. Below, we see that, to forecast {x_t, t ∈ [1, T]}, it is sufficient to forecast two corresponding time series: the series of F-transform components and the series of residual vectors. The forecast of the time series of F-transform components is based on the assumption that the series {F_k, k ∈ [1, n]} is autoregressive and thus obeys

F_k = G(F_{k−1}, . . . , F_{k−p}),   k = p + 1, . . . , n,   (6)

where p is the order of regression and G : ℝ^p → ℝ is some function. Three models for G are used in our experiment: linear autoregression

F_k = α_1 F_{k−1} + · · · + α_p F_{k−p},   (7)
an artificial neural network and a fuzzy relation model. The first two models are not discussed in our paper, because they are well known and described in the literature.
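The linear autoregression (7) for the F-transform components can be estimated by ordinary least squares. The sketch below is our illustrative implementation under that assumption; the function names are ours, not the authors'.

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares estimate of the coefficients of linear autoregression (7):
    F_k ≈ alpha_1 * F_{k-1} + ... + alpha_p * F_{k-p}."""
    y = series[p:]
    # design matrix: column i holds the lag-i values aligned with y
    X = np.column_stack([series[p - i: len(series) - i] for i in range(1, p + 1)])
    alpha, *_ = np.linalg.lstsq(X, y, rcond=None)
    return alpha

def forecast_ar(series, alpha):
    """One-step-ahead forecast from the last p observed values."""
    p = len(alpha)
    lags = series[::-1][:p]          # F_n, F_{n-1}, ..., F_{n-p+1}
    return float(alpha @ lags)

# sanity check on an exact AR(1) series F_k = 0.8 * F_{k-1}
F = 0.8 ** np.arange(15)
alpha = fit_ar(F, p=1)
```

On an exactly autoregressive series, as above, the estimate recovers the true coefficient up to floating-point error.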
3 Time series decomposition

In this section, we show how a time series can be decomposed into two new series: one series with the F-transform components and the other with residual vectors. This decomposition is further used for forecasting. Assume that {x_t, t ∈ [1, T]}, T ≥ 3, is a time series in which the observations x_t are real numbers. We consider x_t to be a value of the discrete function x defined on the set P_T = {1, . . . , T} of time moments. In what follows, we do not distinguish between the function x and the time series {x_t, t ∈ [1, T]}, and we use the latter notation in both meanings. Let A_1, . . . , A_n, 3 ≤ n < T, be basic functions that constitute a fuzzy partition of the interval [1, T]. Denote by P_k, k = 1, . . . , n, the subset of P_T that consists of the points covered by A_k. Because of the density condition, P_k is not empty, and because of the covering condition,

⋃_{k=1}^{n} P_k = P_T.
Let the vector (X_1, . . . , X_n) be the F-transform of a time series {x_t, t ∈ [1, T]} with respect to A_1, . . . , A_n. We say that r_k = {x_t − X_k | t ∈ P_k} is the |P_k|-dimensional residual vector of {x_t, t ∈ [1, T]} with respect to A_k, k = 1, . . . , n. All vectors r_k, k = 1, . . . , n, have the same dimension, provided that the fuzzy partition A_1, . . . , A_n is uniform. Let us extend r_k to the full-dimensional vector r̃_k = (r̃_1k, . . . , r̃_Tk) by

r̃_tk = x_t − X_k, if t ∈ P_k; −∞, otherwise.

In the following proposition, we show that the original time series can be reconstructed from the vector of its F-transform components and the series {r̃_k, k = 1, . . . , n} of extended residual vectors.

Proposition 1. Let {x_t, t ∈ [1, T]} be a time series and A_1, . . . , A_n a fuzzy partition of [1, T]. Assume that (X_1, . . . , X_n) is the F-transform of {x_t, t ∈ [1, T]} with respect to A_1, . . . , A_n, and {r̃_k, k = 1, . . . , n} are the extended residual vectors. Then every element x_t, t = 1, . . . , T, can be represented as follows:

x_t = ⋁_{k=1}^{n} (X_k + r̃_tk),   (8)

where ∨ denotes the operation of maximum.
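Proposition 1 is easy to check numerically. The sketch below is our illustration with an assumed toy series; the partition places its nodes at the endpoints of [1, T], a common choice that guarantees the covering condition, so the reconstruction by (8) is exact.

```python
import numpy as np

T, n = 12, 4
t = np.arange(1, T + 1, dtype=float)
x = 0.5 * t + np.sin(t)                       # assumed abstract time series

# uniform triangular partition of [1, T]; nodes at the endpoints guarantee
# that every time moment t has positive membership in some A_k (covering)
nodes = np.linspace(1, T, n)
h = (T - 1) / (n - 1)
A = np.maximum(0.0, 1.0 - np.abs(t[None, :] - nodes[:, None]) / h)

X = (A @ x) / A.sum(axis=1)                   # F-transform components X_1..X_n

# extended residual vectors: x_t - X_k where A_k covers t, -inf elsewhere
R = np.where(A > 0, x[None, :] - X[:, None], -np.inf)

x_rec = np.max(X[:, None] + R, axis=0)        # eq. (8): x_t = max_k (X_k + r~_tk)
```

Since X_k + r̃_tk equals x_t wherever A_k covers t and −∞ elsewhere, the maximum over k restores x_t exactly.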
Figure 1. An abstract time series (−□−) and its trend (−∗−). The latter is given by the inverse F-transform with respect to the partition depicted below the time axis t.
4 Fuzzy local tendencies forecasting

In this section, a new method for time series modeling and forecasting is introduced. The method is based on the notion of the local tendency of a time series [12]. Assume that {x_t, t ∈ [1, T]}, T ≥ 3, is a time series with real observations x_t and that the set of these observations lies inside an interval [c, d] ⊆ ℝ. We adopt the approach of Song and Chissom [3] and associate with the time series {x_t, t ∈ [1, T]} a dynamic process with fuzzy values. In practice, this means that we partition [c, d] using fuzzy sets Y_1, . . . , Y_l (Definition 3.1), such that the partition is uniform and Y_1, . . . , Y_l have triangular shapes (3). Moreover, the distance between any two neighboring nodes of the partition is chosen in accordance with the required accuracy, e.g. δ > 0. With each observation x_t, we associate the fuzzy set Y_{i_t} such that

Y_{i_t}(x_t) = max{Y_i(x_t) | Y_i(x_t) > 0, i = 1, . . . , l}.   (9)

In a certain sense, Y_{i_t} gives the best characterization of x_t. Thus, the initial time series {x_t, t ∈ [1, T]} can be replaced by the time series {Y_{i_t}, t ∈ [1, T]} with fuzzy observations. If {Y_{i_t}, t ∈ [1, T]} is time-invariant, then it can be represented in the form of autoregression, as shown in [3]. Below, we propose weaker conditions for a similar claim. They are based on another type of decomposition of {Y_{i_t}, t ∈ [1, T]} compared to that in [3]. Let us explain the details.

Let x_{t−1}, x_t be two neighboring values of the time series {x_t, t ∈ [1, T]}, and let Y_{i_{t−1}} and Y_{i_t} be the respective fuzzy sets that satisfy (9). We denote d_t = |i_t − i_{t−1}| and v_t = sgn(i_t − i_{t−1}), where sgn(x) = −1 if x < 0, 0 if x = 0, and +1 if x > 0. Then {d_t, t ∈ [2, T]} and {v_t, t ∈ [2, T]} are two new time series that correspond to {Y_{i_t}, t ∈ [1, T]}. Their values lie inside the respective intervals [0, l − 1] and [−1, 1] of real numbers. Let the interval [0, l − 1] be partitioned by fuzzy sets D_1, . . . , D_m, m ≥ 2, and let the interval [−1, 1] be partitioned by fuzzy sets V_1, V_2, V_3,
where V_1 represents "increase", V_2 represents "stabilization" and V_3 represents "decrease". Similarly to (9), the values d_t and v_t uniquely determine the respective fuzzy sets D_t and V_t, t = 2, . . . , T, where

D_t(d_t) = max{D_i(d_t) | D_i(d_t) > 0, i = 1, . . . , m},   (10)
V_t(v_t) = max{V_i(v_t) | V_i(v_t) > 0, i = 1, 2, 3},   (11)

such that the fuzzy time series {D_t, t ∈ [2, T]} and {V_t, t ∈ [2, T]} are associated with the original time series {x_t, t ∈ [1, T]}. The fuzzy set V_t characterizes the "type" of dynamic behavior of x_t at moment t − 1 in terms of "increase", "decrease" and "stabilization". The fuzzy set D_t characterizes the "intensity" of the dynamic behavior of x_t at moment t − 1, which is measured using fuzzy numbers. The pair (V_t, D_t) characterizes a "local tendency" of a time series at moment t [12].

Below, we describe the proposed technique, which focuses on the representation of autoregression models for {D_t, t ∈ [2, T]} and {V_t, t ∈ [2, T]} in the form of first-order fuzzy relational models. We start with

D_t = D_{t−1} ∘ R_D(t − 1, t),   (12)
V_t = V_{t−1} ∘ R_V(t − 1, t),   (13)

where R_D(t − 1, t) and R_V(t − 1, t) are the respective (unknown) fuzzy relations on [0, l − 1] and [−1, 1]. If every D_t (V_t) belongs to some fixed collection of fuzzy sets (that does not depend on t), then the fuzzy time series {D_t, t ∈ [2, T]} ({V_t, t ∈ [2, T]}) is time-invariant and fulfills R_D(t − 1, t) = R_D (R_V(t − 1, t) = R_V) [3]. In Proposition 2 below, we give conditions that guarantee that both series {D_t, t ∈ [2, T]} and {V_t, t ∈ [2, T]} are time-invariant. As a consequence, we represent the unknown fuzzy relations R_D(t − 1, t) and R_V(t − 1, t) by (14) and (15).
Proposition 2. Assume that {x_t, t ∈ [1, T]} ⊆ [c, d] is a time series whose first-order differences are bounded, i.e. there exists M > 0 such that |x_t − x_{t−1}| ≤ M for all t = 2, . . . , T. Then there exists a partition Y_1, . . . , Y_l of [c, d] such that the corresponding fuzzy time series {D_t, t ∈ [2, T]} and {V_t, t ∈ [2, T]} are time-invariant.

According to Proposition 2, the fuzzy relations R_D and R_V in (12) and (13) can be taken as

R_D = ⋁_{t=2}^{T} (D_{t−1} ∧ D_t),   (14)
R_V = ⋁_{t=2}^{T} (V_{t−1} ∧ V_t),   (15)

where ∧ denotes the min operation. It is easy to see that, once we know the fuzzy sets D_{t−1} and V_{t−1} and the fuzzy relations R_D and R_V, we can apply (12) and (13) and obtain the fuzzy sets D_t and V_t for any t = 2, . . . , T. In summary, the original time series {x_t, t ∈ [1, T]} satisfies the following autoregression scheme:

x_t = Def(X_{i_{t−1}} + Def(V_t) · Def(D_t)),   (16)

where Def(·) is the defuzzified value of the respective fuzzy set. In our approach, we use center-of-gravity defuzzification, i.e. for an abstract fuzzy set A on a finite set X, the defuzzified value Def(A) ∈ X is

Def(A) = Σ_{x∈X} x·A(x) / Σ_{x∈X} A(x).

Below, the scheme (16) is used to forecast a time series.
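A minimal sketch of the tendency-type forecast (eqs. (11), (13), (15) with centre-of-gravity defuzzification) is given below. It is our simplified illustration, not the authors' system: only the V-series is handled, v_t is computed from the signs of the raw differences for simplicity, and the triangular sets on [−1, 1] and the sample series are assumptions.

```python
import numpy as np

def fuzzify(value, sets):
    """Memberships of `value` in triangular fuzzy sets given as (center, width) pairs."""
    return np.array([max(0.0, 1.0 - abs(value - c) / w) for c, w in sets])

# three tendency types on [-1, 1]: V1 "increase", V2 "stabilization", V3 "decrease"
V_SETS = [(1.0, 1.0), (0.0, 1.0), (-1.0, 1.0)]
CENTERS = np.array([c for c, _ in V_SETS])

x = np.array([3.0, 4.0, 5.0, 5.0, 4.0, 5.0, 6.0])     # assumed sample series
v = np.sign(np.diff(x))                               # tendency-type values v_t

memb = np.array([fuzzify(vt, V_SETS) for vt in v])    # fuzzified v_t, cf. eq. (11)

# eq. (15): R_V = max over t of (V_{t-1} AND V_t), a 3x3 fuzzy relation
R_V = np.max(np.minimum(memb[:-1, :, None], memb[1:, None, :]), axis=0)

# eq. (13): next V_t by max-min composition, then centre-of-gravity defuzzification
V_next = np.max(np.minimum(memb[-1][:, None], R_V), axis=0)
v_forecast = float((CENTERS * V_next).sum() / V_next.sum())
```

For this sample series, the last observed tendency is an increase, so the defuzzified forecast lands between "increase" and "stabilization".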
5 Structure of the Internet service for analysis of enterprise economics

The development of the new software product, an Internet service implementing the integral method of fuzzy modeling of time series and fuzzy tendency analysis, employed a design concept based on the system approach and a component-based architecture. The developed Internet service is a Web-based decision support system for managers consisting of the following structural components: a control system, two specialized Internet services (web services) and a database. The first specialized Internet service is implemented on the basis of the theory of F-transforms [5]. It includes fuzzy modeling, trend extraction and prediction of time series, all based on numerical values. The basis of the second specialized Internet service is time series estimation according to a fuzzy scale and fuzzy tendency analysis. From the viewpoint of a user, the developed Internet service is a multi-page web site with authorized access. The structure of the Internet system is shown in Fig. 2.

Figure 2. Internet system structure (components: control system, database, F-transformation service, fuzzy tendency service)
The Internet system is implemented with JSP (Java Server Pages) technology, a technology for the dynamic generation of HTML, XML and other web pages. JSP is a component of the Java EE platform for creating business applications, which is used in the project; however, JSP can be applied separately, and Java EE in turn can be applied without JSP. The technology allows Java code and EL (expression language) constructs to be embedded into static web-page content. In addition, JSP tag libraries can be embedded into JSP pages. The pages are compiled by a JSP compiler into servlets, Java classes executed on the server. Servlets can also be written without applying JSP pages; the two technologies complement each other. The case for JSP is that it is one of the highest-performance technologies, because the entire page code is assembled into servlet Java code by the JSP page compiler Jasper and afterward is compiled into the byte code of the Java virtual
machine (JVM). Servlet containers applicable for JSP pages (e.g. Tomcat) are written in Java and run on various operating systems and platforms, which is also an advantage. The results of the express analysis are forecast values and tendencies of economic indicators. These values are presented in graphical form (see Fig. 3) and are explained in natural language as recommendations (see Fig. 4). At present, more than 20 registered enterprises and more than 100 time series are actively being processed.
Figure 3. Fuzzy tendency forecasting in the Internet service
The obtained result, an Internet service for the express analysis of enterprises on the basis of the analysis and prediction of time series of economic indicators, is available to all users.
Figure 4. Linguistic summarization of economic indicator forecasting results
6 Conclusion

The integration of two soft computing techniques, the F-transform and fuzzy tendency modeling, was applied to analyze and forecast time series. In this contribution, we described a new software system that was elaborated using the proposed theory. Aside from the F-transform, the system includes an analysis of time series and their tendencies, which are characterized in terms of natural language. The results demonstrate that fuzzy modeling of fuzzy time series is efficient for non-stationary economic time series of short length (typically 7-40 observations), for which stochastic ARIMA models are frequently inadequate.

ACKNOWLEDGMENTS

The authors acknowledge that this paper was partially supported by the European Regional Development Fund in the IT4Innovations Centre of Excellence project (CZ.1.05/1.1.00/02.0070), the project MSMT-7026/2012-36 (KONTAKT II, project LH 12229 "Research and development of methods and means of intelligent analysis of time series for the strategic planning problems") and the Russian Foundation for Basic Research (grant RFFI-10-01-00183).

References:
1. Box, G. and Jenkins, G. (1970), Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco.
2. Hwang, J. R., Chen, S. M. and Lee, C. H. (1998), "Handling forecasting problems using fuzzy time series." Fuzzy Sets and Systems, 100, pp. 217-228.
3. Song, Q. and Chissom, B. (1993), "Fuzzy time series and its models." Fuzzy Sets and Systems, 54, pp. 269-277.
4. Song, Q. and Chissom, B. (1993), "Forecasting enrollments with fuzzy time series. Part I." Fuzzy Sets and Systems, 54, pp. 1-9.
5. Novak, V., Stepnicka, M., Dvorak, A., Perfilieva, I. and Pavliska, V. (2010), "Analysis of seasonal time series using fuzzy approach." International Journal of General Systems, 39, pp. 305-328.
6. Perfilieva, I. (2006), "Fuzzy transforms: Theory and applications." Fuzzy Sets and Systems, 157, pp. 993-1023.
7. Perfilieva, I. (2007), "Fuzzy transforms: A challenge to conventional transforms." In P. W. Hawkes (ed.), Advances in Imaging and Electron Physics, 147, Elsevier Academic Press, San Diego, pp. 137-196.
8. Perfilieva, I., Dankova, M. and Bede, B. (2011), "Towards a higher degree F-transform." Fuzzy Sets and Systems, 180, pp. 3-19.
9. Sarkar, M. (2006), "Ruggedness measures of medical time series using fuzzy-rough sets and fractals." Pattern Recognition Letters, 27, pp. 447-454.
10. Wold, H. (1938), A Study in the Analysis of Stationary Time Series, Almqvist and Wiksell, Stockholm.
11. Yao, J. and Herbert, J. P. (2009), "Financial time-series analysis with rough sets." Applied Soft Computing, 9, pp. 1000-1007.
12. Yarushkina, N. G. (2004), Principles of the Theory of Fuzzy and Hybrid Systems, Finances and Statistics, Moscow.
40 P. Sosnin, Y. Lapshov, V. Maklaev
Pseudo-Code Programming of Workflows in Conceptual Designing of Software Intensive System

P. Sosnin, Y. Lapshov, V. Maklaev

Computer Department, Ulyanovsk State Technical University (UlGTU), 32 Severny Venetc, 432027 Ulyanovsk, Russia
[email protected]
Abstract. This paper presents an experiential approach to the programmable management of collaborative activity in the conceptual designing of Software Intensive Systems. The suggested version of programming the collaborative activity is oriented toward the model of the designer interacting with accessible experience.

Keywords: Conceptual Designing; Precedent; Programming; Workflows.
1 INTRODUCTION
Nowadays, collaborative activity in computerized mediums is a subject area evolving very intensively. A very important part of this area is connected with the development of Software Intensive Systems (SIS), any of which is a system "where software represents a significant segment in any of the following points: system functionality, system cost, system development risk, development time" [1]. Creating an SIS is estimated to be a very problematic type of collaborative activity. Numerous studies, including the statistical reports of the Standish Group [2], indicate the extremely low degree of effectiveness (about 35%) of such activity. This is explained by several reasons, including problems with the human factor and the high complexity of modern computer applications in the processes of their creation and use. Therefore, the indicated factors of collaborative activities can be investigated on the example of the activity aimed at creating an SIS; later, the results of such research can be used in other types of collaborative activities.

To a sufficient measure, the specificity of SIS creation is reflected by the features of the corresponding technologies, for example, technologies using the Rational Unified Process [3]. The specificity of the Rational Unified Process (RUP) includes the following features:
• the collective activity of designers who fulfill different actions by playing corresponding roles (architect, system analyst, programmer and many others) within definite scenarios;
• the normative modeling of such activity in the form of workflows, the typical units of which are tasks with guides for solving them repeatedly;
• the necessity of solving several hundred typical tasks (for example, in RUP about 500 such tasks for conceptual designing alone) by a team of designers;
• the necessity of formulating and solving new tasks (not only typical tasks), the creative work on which evolves the personal and collective experience of designers.

The list of RUP features could be continued, but those named are sufficient to explain the suggestions described below. First of all, we emphasize the complexity of designer interactions with a used technology of the RUP type. This paper presents an experiential approach to the programmable management of collaborative activity in the conceptual designing of SIS. The conceptual stage is chosen because the price of misunderstanding at this stage is very high. Programming the collaborative activity helps to simplify the complexity both of the design processes and of the interactions with the SIS in its current design state. The programmable management is supported by the specialized toolkit WIQA (Working In Questions and Answers) [4]. This toolkit has several interpretations, one of which is "an integrated environment for pseudo-code programming of the collaborative activity of designers".
2 Preliminary Statements
In a general sense, the complexity (or simplicity) of an SIS reflects the degree of difficulty for designers in their interactions with definite models of the SIS (or its components) while solving definite tasks. A system or any of its components is complex if the designer (interacting with the system) does not have sufficient resources for achieving the necessary level of understanding or the other planned aims. Often, various interpretations of the Kolmogorov measure [5] are applied to estimate the degree of system complexity. This measure is connected with "the minimal length of the program P providing the construction of the system S from its initial description D". In creating an SIS, objects of the P-type are built step by step into the design process using a certain "method of programming M". The reality of such work demonstrates that the complexity of the P-object is no less than the complexity of the SIS in any of its used states. Moreover, the M-program providing the construction of the P-object is built on the basis of the same initial description D as the system S. This can be presented by the chain D → M → P → S.

The named relations between D, M and P can be used by designers for dividing the design process into stages [D(t0) → M1 → P1 → S(t1)], [D(t1) → M2 → P2 → S(t2)], …, [D(ti) → Mi+1 → Pi+1 → S(ti+1)], …, [D(tn−1) → Mn → Pn → S(tn)], where the set {S(ti)} is created using "programs" of the M- and P-types. We suggest a constructive way of using the means of programming for units of the M- and P-types. But first of all, we shall explain our understanding of these units, which have direct relations to the experiential activity of designers solving the tasks of conceptual designing.
42 P. Sosnin, Y. Lapshov, V. Maklaev
Designers who will test the solutions should have possibilities for experimenting with them. In this sense, P-programs correspond to plans of experiments fulfilled by designers. M-programs are intended for creating the P-programs in forms that are effective for their use by designers. Programs of this type are responsible for the coordinated collaborative solving of tasks. Below we shall come back to the diversity and similarity of M- and P-programs, but first we shall indicate the means used for the named purposes in the modern technologies supporting the development of SIS. For this, we shall analyze the means embedded in the Rational Unified Process.
3 Workflows in Conceptual Designing
Conformity to requirements and understandability are achieved in the RUP with the help of "block and line" diagrams expressed in the Unified Modeling Language (UML). The content of the diagrams built by designers is clarified by the necessary textual descriptions. But UML is not an executable language, and therefore the diagrams are not suitable for experimenting with them as with programs of the P-type. For the coordinated collaborative solving of tasks, the RUP suggests the means of normative workflows [6], the relations between which are regulated by a set of rules. One domain of workflows is presented in Fig. 1 (without explanation), only to demonstrate the structure and complexity of the executed processes.
Fig. 1. Workflows "Business modeling" of RUP

In accordance with a common definition, "a workflow is a reliably repeatable pattern of activity enabled by a systematic organization of resources, defined roles and mass, energy and information flows, into a work process that can be documented and learned" (Wikipedia). An abstract example of a workflow is presented in Fig. 2.
Pseudo-Code Programming of Workflows in Conceptual Designing of Software Intensive System 43
Fig. 2. Typical structure of a workflow
Thus, in designing, any workflow Wm combines a set of definite tasks {Zni} that are distributed among designers {Dn} in accordance with their competences. The set of competences needed for conceptual designing should include:
1. Certain competencies that can help designers in their attempts to solve the necessary tasks ZS = {ZSi} of the SIS subject area.
2. A system of competencies necessary for solving the normative tasks ZN = {ZNj} of the technology used by designers.

The above emphasizes that designers should be highly qualified specialists in the technology domain, but this is not sufficient for successful designing. Normative tasks are invariant to the SIS domain, and therefore designers should gain certain experience needed for solving the definite tasks of the SIS subject area. Most of this additional experience is acquired by designers through experiential learning while tasks of the ZS-type are being solved. Solving any task ZSi is similar to expanding it into a series over the basis of normative tasks. The series can be interpreted as the expression ZSi(t) = Σj aj(t) ZNj, where each aj(t) is a "coefficient" reflecting the specificity of the use of the task ZNj in the decomposition. Such a decomposition is no more than a point of view helping to detach a very important designer activity connected with the adaptation of the chosen normative task ZNj to the content of the task ZSi(t). It is rational to consider that defining any "coefficient" requires solving a corresponding adaptation task ZAj. Any task of the ZA-type should be determined, analyzed and solved by the corresponding designer on the basis of the accessible experience in real time. The functions of the accessible experience can be fulfilled by personal and collective experience and also by useful experience models.

Forms of experiential behavior are used by designers not only in their real work with tasks of the ZA-type but also in the solutions of tasks:
• from the set {ZWm}, providing the works with tasks of the ZS-type in the corresponding workflows {Wm} in the SIS;
• from the set {ZWn}, providing the works with tasks of the ZN-type in the corresponding workflows {Wn} in the used technology;
• from the sets {ZGp} and {ZGr}, each of which corresponds to a definite group of workflows in the SIS or the technology.

In the suggested approach, designers interact with all the indicated tasks in the WIQA environment in accordance with the scheme presented in Fig. 3.
Fig. 3. Experiential interactions with tasks being solved
For any task of a definite normative workflow, the RUP has an interactive diagrammatic model with a set of components whose use can help in solving the task. However, forms of programming are not used in any of these means. A similar state of affairs with conceptual designing exists in the other known technologies supporting the development of SIS. It is necessary to note that programming the designer activity leads to its automation to a certain extent.

While solving any appointed task, the designer registers the used question-answer reasoning (QA-reasoning) in a specialized protocol (QA-protocol) so that this QA-protocol can be used as the task model (QA-model). Models of this type can be used by designers for real-time experimenting with the tasks being solved. Units of experiential behavior extracted from solution processes are modeled on the basis of the QA-models of tasks. The created behavioral models are loaded into the question-answer database (QA-base) and the Experience Base [7] of WIQA. After that, they can be used by designers as units of experience. Experience models from other sources can be uploaded into the Experience Base as well.

If the designers of an SIS use the toolkit WIQA, they have the opportunity of conceptually modeling the tasks of the different indicated types. In this case, the current state of the tasks being solved collaboratively is registered in the QA-base of the toolkit, and this state is visually accessible in the forms of the tree of tasks and the QA-models of the corresponding tasks. This opportunity is presented figuratively in Fig. 4, where the QA-base is interpreted as a specialized QA-memory. In the toolkit, the QA-base is implemented using the database management system MS SQL Server 2008 Standard Edition. But for its users, the QA-base is opened as a memory keeping interactive objects presenting "tasks" Z, "questions" Q and "answers" A. Moreover, these objects are bound into hierarchical structures.
In their real-time work the designers interact with such objects, processing them with the help of appropriate operations that help to find and test the solutions of tasks. All of that has led to the idea of programming the designer's activity, with the created programs kept in QA-memory.
Fig. 4. Embedding of tasks in the WIQA environment
So all the indicated tasks are uploaded into QA-memory, which offers a rich system of operations on interactive objects of the Z-, Q- and A-types. Designers have the possibility to program the interactions with the necessary objects. Such programs are similar to plans of the experimental activity in the conceptual designing of SIS. Operators of the programs are placed in Q-objects, while the corresponding A-objects are used for registering the facts or features of executed operations. Thus any QA-operator is registered in a pair of corresponding Q- and A-objects, or shortly in a QA-unit. The programs for experimenting with conceptual models of tasks must be created by designers whose competence can be far from the experience of professional programmers. This was the main reason for embedding the means of pseudo-code programming into the WIQA toolkit [8]. The similarity of the (pseudo-code) programming language to the natural language in its algorithmic usage was the second and no less important reason.
4 Pseudo-code Language LWIQA
The created language is oriented on QA-memory and is named after "WIQA"; below this language will be designated LWIQA. The current versions have evolved LWIQA to the state presented in Fig. 5.

Fig. 5. Structure of LWIQA (QA-data: traditional types, data model; QA-operators: basic operators, SQL-operators, operators for workflows)
The system of structural units of LWIQA includes:
• traditional types of data, such as scalars, lists, records, sets, stacks, queues and other data types;
• a data model of the relational type describing the structure of a database;
• basic operators, including the traditional pseudo-code operators, for example Appoint, Input, Output, If-Then-Else, GOTO, Call, Interrupt, Finish and others;
• SQL-operators in a simplified subset, including Create Database, Create Table, Drop Table, Select, Delete From, Insert Into, Update;
• operators for managing workflows oriented on collaborative designing (Seize, Interrupt, Wait, Cancel, Queue).

An important type of basic operator is the explicit or implicit command aimed at the execution of a definite action by the designer. The explicit commands are written in the natural language in its algorithmic usage. But the traditional meaning of the named data and operators is only one side of their content; the other side is bound with the attributes of the QA-units into which the data and operators are uploaded. A definite data item or operator inherits the attributes of the corresponding QA-unit as a cell of QA-memory. They inherit not only the attributes of the QA-units but also their understanding as "questions" and "answers". Thus, data and operators of LWIQA inherit a very useful set of attributes of the QA-units in which they are registered. For example, such a set includes "type or sub-type of the QA-unit", "name of creator", "time of last saving" and any set of additional attributes appointed by designers. One of the basic attributes is the "unique unit name", which can be used as the unit's address in QA-memory. The unique name includes the name of the unit type (for example "Q") and a compound unique index appointed automatically; thus the unit address reflects the hierarchy relations between related QA-units. The attributes of QA-units essentially influence the data (QA-data) and operators (QA-operators) of pseudo-code programs.
Such influence can be used effectively in the execution of pseudo-code programs. The named specificity of QA-data and QA-operators emphasizes the essential difference between LWIQA and other known pseudo-code languages. Therefore, pseudo-code programs written in LWIQA have received the name "QA-programs".
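As an illustration, the hierarchical addressing of QA-units described above (a unit-type prefix plus an automatically appointed compound index) can be sketched roughly as follows; the class and method names are invented for this sketch and are not WIQA's actual API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QAUnit:
    """A cell of QA-memory holding a 'task' (Z), 'question' (Q) or 'answer' (A)."""
    kind: str
    text: str
    parent: Optional["QAUnit"] = None
    index: int = 1                     # position among siblings, appointed automatically
    children: List["QAUnit"] = field(default_factory=list)

    def add(self, kind: str, text: str) -> "QAUnit":
        child = QAUnit(kind, text, parent=self, index=len(self.children) + 1)
        self.children.append(child)
        return child

    def address(self) -> str:
        """Unique unit name: type prefix plus compound hierarchical index."""
        if self.parent is None:
            return f"{self.kind}{self.index}"
        return f"{self.parent.address()}.{self.index}"

root = QAUnit("Z", "Task: program the workflow")
q1 = root.add("Q", "Which tasks must run in parallel?")
a1 = q1.add("A", "Task4 and Task5")
print(a1.address())  # prints Z1.1.1, reflecting the hierarchy
```

Any data item or operator stored in such a unit would then carry the unit's attributes (creator, time of last saving, address) along with it.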
5 QA-programming of Workflows
The language LWIQA, like any other algorithmic language, is not separable from the means supporting its use for the planned purposes. For this language the set of purposes includes:
• rational management of workflow execution in the human-computer medium;
• pseudo-code programming of tasks (in the workflow) oriented on their solution by a definite designer.

Let us notice that in the described approach QA-programming is used both for tasks of the sets {ZSi}, {ZNj} and {ZAk} and for tasks of the sets {ZWm}, {ZWn}, {ZGp} and {ZGr}. Pseudo-code programming of the tasks of ZS-, ZN- and ZA-types is described in detail in the paper [8]; this paper concentrates on the pseudo-code programming of workflows.

Programming of a definite workflow task ZWn is implemented by the designer who is responsible for this workflow. For example, the designer should program a workflow with the following dynamics between the tasks embedded in it:
• after the performance of Task1 it is possible to begin the performance of Task2 or Task3;
• after the completion of Task2 or Task3 it is required to begin the simultaneous performance of Task4 and Task5 by persons P1 and P2;
• Task4 and Task5 cannot start until Task2 or Task3 is completed.

These relations are described by the following QA-program:

    //Initiation. Declaration of the workflow tasks
    SET &task[1]&, 1; &task[2]&, 2; &task[3]&, 3; &task[4]&, 4; &task[5]&, 5; &cnt&, 0
    //Appointing the tasks to the first executor (designer P1)
    SEIZE 1, &task[1]&
    SEIZE 1, &task[2]&
    SEIZE 1, &task[3]&
    SEIZE 1, &task[4]&
    //Appointing the task (Task5) to the designer P2
    SEIZE 2, &task[5]&
    //Including Task1 in the queue
    QUEUE &task[1]&
    //Including Task2 and Task3 in the queue with conditions
    QUEUE &task[2]&, &condition& AND &task[1]&.state == DONE
    QUEUE &task[3]&, !&condition& AND &task[1]&.state == DONE
    QUEUE &task[4]&, (&task[2]&.state == DONE) OR (&task[3]&.state == DONE)
    QUEUE &task[5]&, (&task[2]&.state == DONE) OR (&task[3]&.state == DONE)
    FINISH
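The queuing semantics used in the QA-program above (a queued task may start only when its guard over the states of other tasks holds) can be modeled by a toy interpreter. This is an illustrative re-implementation, not the WIQA executor, and it models only the branch in which Task2 (not Task3) is chosen:

```python
# Toy model of SEIZE/QUEUE semantics; all names are invented for the sketch.
tasks = {i: {"owner": None, "state": "NEW"} for i in range(1, 6)}

def seize(executor, task):
    """SEIZE: appoint a task to an executor."""
    tasks[task]["owner"] = executor

def done(task):
    return tasks[task]["state"] == "DONE"

# QUEUE entries: (task, guard). Task3 is omitted because &condition& is
# taken to be true, so only the Task2 branch fires.
queue = [
    (1, lambda: True),
    (2, lambda: done(1)),
    (4, lambda: done(2) or done(3)),
    (5, lambda: done(2) or done(3)),
]

def run():
    """Start and immediately complete every task whose guard holds."""
    order, progress = [], True
    while progress:
        progress = False
        for task, guard in queue:
            if tasks[task]["state"] == "NEW" and guard():
                tasks[task]["state"] = "DONE"
                order.append(task)
                progress = True
    return order

for executor, task in [(1, 1), (1, 2), (1, 3), (1, 4), (2, 5)]:
    seize(executor, task)
order = run()
print(order)  # Task4 and Task5 become ready only after Task2 completes
```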
The example indicates that the implementation of any workflow should include the following actions:
• appointing tasks to their executors in accordance with the necessary competences;
• appointing the planned time characteristics for each task;
• programming the dynamic conditions between tasks;
• parallel and pseudo-parallel execution of tasks;
• controlling the assignments and executions of tasks.

The example of the workflow indicates that a definite set of tasks is appointed to each designer, who should fulfill the rules of joint work modeled by the workflow. Any set of rules finds its materialization in dynamic relations between tasks. Moreover, any task is executed as a reuse of the corresponding precedent (a normative composition of operations). In the WIQA toolkit the execution of workflows is carried out by a complex of special subsystems presented in Fig. 6.

Fig. 6. Components supporting the execution of workflows (workflows management subsystem, subsystem of tasks management, compiler, interpreter, subsystem of interruptions, editor, tasks tree and QA-models, QA-database)
The structure of this complex includes the following subsystems:
1. The interpreter of pseudo-code programs, providing the activity of the intellectual processor (step-by-step execution of QA-programs by their executor).
2. The compiler of pseudo-code programs, providing their automatic execution by the K-processor.
3. A control subsystem of workflows, including:
3.1. the subsystem of interruptions of pseudo-code programs executed by the intellectual processor;
3.2. the subsystem of tasks management, supporting the parallel solution of workflow tasks.

The reality of workflows is parallel work with many tasks at the same time. In WIQA it is supported by the means presented in Fig. 7, where "Organizational structure", "Controlling of assignments" and "Kanban" [9] are components of the subsystem of tasks management.
Fig. 7. Controlling of workflows (I-processors, the organizational structure of the team with its groups, designers and tasks, controlling of assignments, the subsystem of interruptions, and Kanban queues of tasks per step)
The means included in "Organizational structure" support the real-time generation of workflow tasks and the appointment of their executors (designers). The main screenshot of this plug-in is presented in Fig. 8.
Fig. 8. Operating space for the assignment of tasks

Thus, the plug-in "Organizational structure" supports the real-time appointment of tasks to members of the designer team. A copy of the team model (T*, {Gv}, {Dvs}) can be uploaded into QA-memory in the form presented in Fig. 4. The use of "Controlling of assignments" binds any assignment with the planned time of its fulfillment by the responsible designer. The interface form that supports this work is presented in Fig. 9.
Fig. 9. Controlling of an assignment (reason, report, characteristics)
The functions of "Kanban" visually reflect the state of the workflow execution in the form of "queues of cards", which help to manage the process of designing. An example of visualized queues is presented in Fig. 10.
Fig. 10. Queues of cards for tasks (rows of the first and second designers, with groups of cards per step)
The subsystem of interruptions gives the possibility to interrupt any executed task or QA-program (if it is necessary) in order to work with other tasks or QA-programs. The interruption system supports returning to any interrupted task or QA-program at its point of interruption.
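A minimal sketch of such an interruption mechanism, assuming that suspending a program simply saves its instruction pointer on a stack (all names here are illustrative only, not WIQA's):

```python
class Program:
    def __init__(self, name, operators):
        self.name = name
        self.operators = operators
        self.pc = 0                          # next operator to execute

    def step(self):
        op = self.operators[self.pc]
        self.pc += 1
        return op

    def finished(self):
        return self.pc >= len(self.operators)

suspended = []                               # stack of interrupted programs

def interrupt(current, next_program):
    """Suspend the current program and switch to another one."""
    suspended.append(current)
    return next_program

def resume():
    """Return to the most recently interrupted program."""
    return suspended.pop()

p1 = Program("ZW1", ["SET", "SEIZE", "QUEUE", "FINISH"])
p2 = Program("ZW2", ["SET", "FINISH"])

p1.step(); p1.step()                         # designer works on p1
active = interrupt(p1, p2)                   # an urgent task arrives
while not active.finished():
    active.step()
active = resume()                            # back to p1 at its saved point
print(active.name, active.operators[active.pc])  # prints: ZW1 QUEUE
```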
6 Conclusion
The approach described in this paper suggests a system of means supporting the programmable management of workflows in the designing of SIS. The approach is materialized for the conceptual stage of designing in the question-answer instrumental medium WIQA. The central place in the WIQA toolkit is occupied by QA-memory, which provides the storage of the following constructions: the organizational structure of the designers' team; the tree of the project tasks in the current state of its development; and QA-models for the corresponding tasks. The most useful kind of such models is QA-programs, each of which is a pseudo-code description of the corresponding task. Such descriptions are specified in the specialized language LWIQA. In the offered approach QA-programming is used for any task that arises in the process of real-time designing. Programming the workflows has features that lead to generating a set of task queues for the members of the team. Any task in any queue is included in the dynamic relations among the tasks executed in parallel; this can be interpreted as the execution of a set of specialized programs of M-type. The creation of M-programs facilitates simplifying the complexity of collaborative designing of SIS.
References
1. Software Intensive Systems in the Future. Final Report // ITEA 2 Symposium, http://symposium.itea2.org/symposium2006/main/publications/TNO_IDATE_study_ITEA_SIS_in_the_future_Final_Report.pdf (2006)
2. Reports of the Standish Group. http://www.standishgroup.com
3. Borges, P., Machado, R.J., Ribeiro, P.: Mapping RUP Roles to Small Software Development Teams. International Conference on Software and System Process, 190-199 (2012)
4. Sosnin, P.: Pseudo-code Programming of Designer Activity in Development of Software Intensive Systems. In: Proc. of the 25th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems (IEA/AIE 2012), Dalian, China, LNCS vol. 7345, pp. 457-466, Springer (2012)
5. Li, M., Vitányi, P.M.B.: An Introduction to Kolmogorov Complexity and Its Applications. Series: Texts in Computer Science, 3rd ed., Springer (2008)
6. Van der Aalst, W.M.P., ter Hofstede, A.H.M.: Workflow Patterns Put Into Context. Software and Systems Modeling, vol. 11(3), pp. 319-323 (2012)
7. Henninger, S.: Tool Support for Experience-based Software Development Methodologies. Advances in Computers, vol. 59, 29-82 (2003)
8. Sosnin, P.: Role "Intellectual Processor" in Conceptual Designing of Software Intensive Systems. In: B. Murgante et al. (eds.): Proc. of the International Conference ICCSA 2013, Ho Chi Minh City, Vietnam, LNCS vol. 7973, Part III, pp. 116, Springer, Heidelberg (2013)
9. Wang, J.X.: Kanban: Align Manufacturing Flow with Demand Pull. Chapter in: Lean Manufacturing: Business Bottom-Line Based, pp. 185-204, CRC Press (2010)
Using of the Methods of Building of Term System in Intelligent Web-based Repository

I. Arzamastseva
Ulyanovsk State Technical University, Ulyanovsk, Russia
e-mail: [email protected]
Abstract. This paper describes the building of a thesaurus for use as a tool in an intelligent project repository, an effective means for the computer-aided design of software and hardware systems.
1 Introduction
With the increasing role of information technology in the processing of large volumes of information, the mathematical modeling of linguistic structures becomes particularly important, since it extends the automation of information processing in natural language. Many problems in computer-aided design can be solved only on the basis of mathematical methods interacting with the classical methods of linguistic analysis. Currently, computer-aided design systems lack methods and techniques for creating thesauri, while such methods have been accumulated in linguistics. The use of linguistic methods in computer-aided design systems therefore extends the capabilities of CAD.

The Scientific and Production Association "MARS" develops hardware and software systems for complex technical products. The vocabulary of these subject areas is developing dynamically, and dictionaries lag behind. At first it was necessary to extract the terms specific to a particular subject area from a set of project documents. Then, on the basis of the constructed thesaurus, one can identify the subject area of documents and increase the relevance of queries. For the exact indexing of texts in natural language it is necessary to create a mathematical model based on the statistics of the vocabulary. The variety of linguistic methods of forming a thesaurus can be generalized on the basis of a mathematical terminological model of industrial products. Such mathematical models should reflect the structure and attributes of the terminological system to an extent that allows solving the task of identifying the subject area of a project document. To achieve adequacy and applicability to computer-aided design, such mathematical models should generalize not an arbitrary corpus of texts, but the corpus of project documents.
2 Using the Methods of Generation of Terminological Project in Intelligent Web-based Archive
When the first CAD systems appeared, engineers had the idea of using automatically generated dictionaries as indexes of such systems, or of connecting prearranged dictionary arrays equipped with a number of additional attributes, i.e. thesauri. At the Scientific and Production Association "MARS" a proprietary software tool for the automation of an archive of electronic information resources (EIR) is already in use. However, the functionality of the tool is not wide enough; it is necessary to refine the system in order to automate the functions of archivists and to make the information management process intellectual. An extension of the functionality of this system is the developed intelligent network archive of electronic information resources. An MS SQL 2000 database is used as the basis of the data archive.

The indexer is a separate module of the intelligent network repository of EIR, intended for the preliminary analysis of electronic information resources (formats: MS Word, RTF, plain text, etc.) in order to form data for the clustering and information retrieval processes. The indexer allows one to select resources interactively and to conduct the indexing (statistical analysis) of:
• electronic documents;
• directories containing electronic documents;
• composite electronic documents (files contained in a directory, named and indexed as a single electronic document).
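The statistical analysis performed by such an indexer can be sketched as stop-word removal followed by term-frequency counting; the stop-word list and the tokenizer below are placeholder assumptions, not those of the MARS tool:

```python
# Illustrative sketch of document indexing: tokenize, drop stop words,
# count term frequencies.
import re
from collections import Counter

STOP_WORDS = {"the", "of", "and", "a", "for", "is", "in", "to"}

def index_document(text: str) -> Counter:
    tokens = re.findall(r"[a-zA-Z]+", text.lower())
    return Counter(t for t in tokens if t not in STOP_WORDS)

doc = "The design of the database and the design of the interface"
print(index_document(doc).most_common(2))
```

The resulting frequency vectors would then serve as input for the clustering and retrieval subsystems.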
3 Methodology of Expert Classification of Technical Documentation Used at the Scientific and Production Association "MARS"
Our method and algorithm for forming the structure of the terminology systems of design objects have been transferred to the Scientific and Production Association "MARS", where they have been applied for the expert classification of the design documents contained in the intelligent network archive. At the first stage, a sample of 63 documents with mainly regulatory-organizational content was taken from the archive of technical documentation. An archive worker conducted an expert classification of the documents, producing four types of classification: classes of documents; types of documents; sections of the documentation; and subjects of the works. In the first classification 3 classes were selected; in the classification "Types of documentation", 16 classes were identified; in the classification "Sections of the documentation", 22 classes were allocated; and in the classification "Subjects of works", 21 classes were selected. Then these 63 files were indexed automatically. The results can be seen in Table 1.
Table 1

Experiment   Clusters   Expert weight   Iterations
001          4          1.3             90
002          4          1.4             100
003          4          1.5             46
004          4          1.6             31
005          16         1.3             100
006          16         1.4             66
007          16         1.5             47
008          16         1.6             100
009          21         1.3             100
010          21         1.4             59
At the second stage, a sample of 265 documents of regulatory and organizational content was taken from the archive of technical documentation; three types of classification were obtained. At the third stage, a sample of 5035 documents was taken from the archive.
4 Indexing on the Basis of the Thesaurus
One of the subsystems of the intelligent design repository is the indexer. It removes stop words from the text and, on the basis of the remaining terms, partly determines the subject area of the document. We replaced the stop-word dictionary in the indexer with the thesaurus formed on the basis of the analysis of the term system (Figure 1). A thesaurus is a terminological resource implemented as a dictionary of terms and concepts with links between them. In computer science, a thesaurus refers to a normalized dictionary of concepts and their names, predominantly in natural language, used in documentation for indexing, storing and retrieval [1].

The main purpose of the thesaurus in our system is the definition of the subject area: from the relationships of the thesaurus one can build a term system, and navigation along the links of the thesaurus helps to produce, on the basis of the term system, a precise identification of the subject area of the document. The first step in the analysis of a text is the search for the terms (words or phrases) described in the thesaurus. On the basis of the thesaurus relationships the terms are grouped by semantic proximity into frames and subframes. Each term in the text obtains a relevance evaluation concerning the content of the document, depending on which term system it relates to. The terms of the terminological system that occurs most frequently get the maximal weight, while rarely mentioned terms get the minimal one. Sometimes a text includes a minimal number of terms, but they are so significant that the text must be related to their subject area. In these cases the program uses a term significance factor, which can be changed manually.
The concepts with such relevance evaluations form a terminological search image of a document, or a thematic representation of the contents of the document. The thematic representation is the basis for rubrication and annotation.

Figure 1. Using the method of thesaurus generation at the Scientific and Production Association "MARS" (transition from a linear index based on a flat dictionary to an intellectual index based on a hierarchical thesaurus and logical-conceptual schemes; methodology: selection of terms from the corpus, correlation of terms with classes of conceptual schemes, definition of the hierarchical structure; sources: dictionaries, glossaries, state standard specifications, enterprise standards)
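The relevance evaluation described above can be sketched as follows; the thesaurus content and the significance coefficients are invented for illustration:

```python
# Terms found in a document are grouped by their term system (frame),
# weighted by frequency; a manually adjustable significance factor can
# boost rare but decisive terms.
from collections import defaultdict

THESAURUS = {                        # term -> term system (frame)
    "fuzzy set": "fuzzy systems",
    "membership": "fuzzy systems",
    "compiler": "software",
}
SIGNIFICANCE = {"fuzzy set": 3.0}    # manually raised coefficient

def subject_area_scores(terms):
    scores = defaultdict(float)
    for term in terms:
        system = THESAURUS.get(term)
        if system is not None:
            scores[system] += SIGNIFICANCE.get(term, 1.0)
    return dict(scores)

doc_terms = ["fuzzy set", "membership", "membership", "compiler"]
print(subject_area_scores(doc_terms))  # "fuzzy systems" dominates
```

The subject area with the highest accumulated score would be taken as the thematic representation of the document.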
5 Comparison of Results
In Figure 2 one can compare the results of different methods of determining the subject area.
Figure 2. Comparison of different methods (Fuzzy, Logik, Mathematik) for determining the subject area
The effectiveness of training is shown by the ratio of the accuracy of determination at a new phase relative to the previous one. The graph shows that with training the system's efficiency increases almost linearly, but at a certain point, with the growth of the vocabulary, the effectiveness of training falls.
Figure 3. Dependence of the definition of the data domain of texts on the number of terms in the vocabularies (training, tuning, working, extended) and the number of studied texts
Table 2

Vocabulary   Number of texts   400 terms   600 terms   800 terms   1048 terms
training     20                97          98          100         100
tuning       50                79          89          96          98
working      112               74          83.3        88.5        92
advanced     200               63          76          82.8        86.5
Experiments have been conducted to improve the weight of the basic terms of a sub-vocabulary by changing the coefficients. This improved the quality of the definition of the given subject area by 40%, but reduced the quality of determination for the other subject areas. Relevant documents are already available at the enterprise. Further research will be based on a combination of different algorithms that increase the goodness of classification. For this purpose we analyze, for example, whether the preprocessing can be modified for an extensive set of training data, or whether inverse term frequency can be used instead of entropy, so as to reduce the classification error.
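One of the alternatives mentioned, inverse term frequency, can be sketched as classic IDF weighting; the four-document corpus below is hypothetical:

```python
# A term occurring in fewer documents gets a higher weight.
import math

CORPUS = [
    {"design", "database"},
    {"design", "interface"},
    {"design", "protocol"},
    {"protocol", "database"},
]

def idf(term: str) -> float:
    df = sum(1 for doc in CORPUS if term in doc)
    return math.log(len(CORPUS) / df)

print(round(idf("design"), 3), round(idf("interface"), 3))
```

A ubiquitous term such as "design" thus receives a low weight, while a rarer term like "interface" is weighted more heavily.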
Figure 4. Structure of the intelligent project repository using a tool to generate the thesaurus (indexer; workstation of the archive worker and the designer of project documentation at the Scientific and Production Association; clusterizators based on neural networks, FCM and genetic algorithms; linguistic support of the project repository; thesaurus in InterBase; search subsystem of EIR; tools for building the thesaurus: the Fuzzy-Base program for building dictionaries and the statistical analysis of documents, and a MatLab subsystem for the search of the data domain and terms)
6 Conclusion
A further experiment was designed to study the influence of the completeness of the dictionary on the quality of identification of the subject area. To assess the quality, the methodology of expert classification of technical documentation used at the Scientific and Production Association "MARS" was studied. The evaluation was based on comparing the identification of the subject area obtained from the experiment with the identification based on peer review. The overall conclusion of the series of experiments: the developed tools for thesaurus building in the linguistic software of CAD systems are an effective means for the computer-aided design of software and hardware systems.

References:
1. Burkhart, M.: Thesaurus. In: Grundlagen der praktischen Information und Dokumentation (Bd. 1). Kuhlen, R., Seeger, Th., Strauch, D. (Hrsg.). München: K.G. Saur, 2004, pp. 141-154.
The Use of Linguistic Instrument in Conceptual Modeling in Requirements Document Development Process

I. Arzamastseva, O. Gavrikova
Ulyanovsk State Technical University, Ulyanovsk, Russia
e-mail: [email protected], [email protected]
Abstract. The paper considers the main aspects of conceptual modeling as well as the use of linguistic knowledge in the common conceptual modeling methods. The main methods of conceptual modeling are compared and the most appropriate method is chosen. Linguistic knowledge, as a linguistic instrument, is applied to the chosen method.
1 Introduction
Nowadays there are few spheres where one can do without modern, fast-developing program systems, which means that more and more non-professionals have to deal with such systems. For them it is therefore very important to be able to get easy access to these systems. Thus, we need to clearly identify in advance all the requirements to the software that will satisfy the end user. There are different methods to do this. A natural-language description of requirements provides quite a precise representation of the modeled system and its elements; it is easy to perceive (natural language is understandable for everyone) but, unfortunately, often leads to polysemy. A formal representation allows one to precisely identify all the system elements and the relationships between them; its main disadvantage is the difficult syntax of formal languages. But a formal representation significantly eases the transition from the modeling process to software product development. This transition could hardly be made without a precise description of the semantics of the modeled elements. Thus, we need to use linguistic knowledge.
2 Requirements Engineering
Requirements Engineering (RE) includes the identification, validation and verification of requirements (including problems, possible solutions and the problem sphere) [1]. There are different methods of modeling and analyzing requirements in RE, among them formal (formal languages), semi-formal (diagram notations) and informal (text) ones. Each problem is first described in natural language; a transition from the informal representation to a formal one is therefore needed, but it is not easy to make.
However, one method was developed specifically for RE: COLOR-X (Conceptual Linguistically based Object oriented Representation Language for Information and Communication Systems, ICS; -X for short), created by the Dutch scientists J.F.M. Burg and R.P. van de Riet in 1995-1998. The efficiency of this method was proved because it is based on such well-developed techniques as OMT (Object Modeling Technique) by James E. Rumbaugh, CPL (Conceptual Prototyping Language) by F. Dignum and STDs (State Transition Diagrams). COLOR-X is well structured but rather difficult and outdated (the last publications appeared in 1998). Another method is UML (Unified Modeling Language), which was not developed for RE but is widespread in this sphere and quite easy to use even for non-professionals. The main disadvantage of this language is the lack of a linguistic basis that would allow all the modeled elements to be understood unambiguously and, as a result, would reduce the modeling process itself.
3 Semantic Description
Thus, it is reasonable to add a semantic layer to UML. This semantic description includes the description of the basic elements of any modeled system: agents, entities, associations, events and states. Below is an example of such a description.

Table 1. Objects of the modeled system

Objects include object-oriented system concepts. Depending on their nature, objects can be represented as entities, associations, agents, or events. The object model therefore is a combination of variable conditions with the help of which the rest of the system elements are described [5], [7]. An object in conceptual modeling is a discrete set of main concepts controlled by the modeled system. These concepts have the following characteristics:
• they are clearly distinct;
• they are numerable in any system condition;
• despite the differences they have some similarities;
• they may differ from each other in some states [5].

How to identify an object type: a conceptual object should be represented as an event if its instances exist only in one state; an agent if its instances can control the behavior of other objects; an entity if its instances are passive and independent; an association if its instances are passive and depend on the objects which they associate [5].
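The identification rules above can be expressed as a small decision function; the boolean feature names are invented here to mirror the rules and are not taken from [5]:

```python
def object_type(single_state: bool, controls_others: bool,
                passive: bool, depends_on_associated: bool) -> str:
    if single_state:
        return "event"        # instances exist only in one state
    if controls_others:
        return "agent"        # instances control the behavior of other objects
    if passive and depends_on_associated:
        return "association"  # passive and dependent on associated objects
    if passive:
        return "entity"       # passive and independent
    return "unknown"

print(object_type(False, True, False, False))  # prints: agent
```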
4 Example
To demonstrate the use of linguistic knowledge, we, together with Professor Gerhard Raffius from Darmstadt University of Applied Sciences, developed an example of a lingua-semantic description for an abstract model of a drink machine. Such a machine system consists of at least the following elements: a barkeeper who turns on the machine; a turning element with containers for future drinks; a push-button; a mode key; a platform for the glass with a ready drink; and a status LED. An example of the description of this system is presented below.

Table 2. Objects of the modeled machine system

Objects: barkeeper, user, turning element, containers with drink components, drink components, filling element, push-button, mode keys, platform for a glass with a ready drink, status LED, glass, glass proximity sensor, chosen drink LED.

5 UML Class Diagram
As we have the description of all the system elements, it is much easier to build a UML class diagram (representing the static part of the modeled system), a fragment of which is shown in Figure 1.

Figure 1. UML Class Diagram (agent "Barkeeper" with attribute animacy = animate and agent "Machine", linked by the association "to serve")
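For illustration, the Figure 1 fragment can be rendered as code, carrying the linguistic attribute animacy along; the class names and the "inanimate" value for the machine are assumptions of this sketch, not part of the paper's model:

```python
class Agent:
    def __init__(self, name: str, animacy: str):
        self.name = name
        self.animacy = animacy

class Serves:
    """Association 'to serve' between two agents."""
    def __init__(self, subject: Agent, obj: Agent):
        self.subject, self.obj = subject, obj

barkeeper = Agent("Barkeeper", animacy="animate")
machine = Agent("Machine", animacy="inanimate")
link = Serves(barkeeper, machine)
print(link.subject.name, "serves", link.obj.name)
```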
6 UML State Diagram
To demonstrate the dynamic part of the modeled system we built a UML state diagram, a fragment of which is shown in Figure 2.
Figure 2. (Part of) UML State Diagram (from the initial state, the guard [Barkeeper turns on the machine] leads to the state "Machine is being turned on"; the state "Machine is broken; impossible to turn on" is also shown)
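The state-diagram fragment can be encoded as a guarded transition table; the states and the first trigger follow Figure 2, while the "startup finished" event is an assumption added to complete the sketch:

```python
TRANSITIONS = {
    ("initial", "barkeeper turns on the machine"): "machine is being turned on",
    ("machine is being turned on", "startup finished"): "machine is on",
}

def fire(state: str, event: str) -> str:
    """Follow a transition if one matches; otherwise stay in the state."""
    return TRANSITIONS.get((state, event), state)

s = fire("initial", "barkeeper turns on the machine")
s = fire(s, "startup finished")
print(s)  # prints: machine is on
```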
7 Conclusion
This paper shows a part of a work devoted to the use of a linguistic instrument in conceptual modeling in the requirements document development process, where we compared two methods of conceptual modeling: COLOR-X and UML. The first one was developed for RE, whereas the latter was not. But having compared these two approaches, we can conclude that UML is more appropriate for requirements documenting because it is well developed, modern and quite easy to use. COLOR-X was proved to be efficient thanks to linguistic knowledge (Burg and van de Riet used WordNet as a lexicon for their model, which contained information about all the linguistic units and the rules of their usage) and extremely useful for RE, but it is outdated and quite difficult to use. Thus, we added a linguistic basis (a lingua-semantic description) to the modeled system, which allowed us to build the conceptual model in three steps: a lingua-semantic description of the system elements; a UML Class Diagram (representing the static part of the system); and a UML State Diagram (representing the dynamic part of the system). Consequently, the use of linguistic knowledge as an instrument in the requirements document development process is proved to be efficient thanks to the preliminary lingua-semantic description of the modeled system.

References:
1. Burg, J.F.M.: Linguistic Instruments in Requirements Engineering. Tokyo: IOS Press, 1997. 296 p.
2. Burg, J.F.M., van de Riet, R.P.: COLOR-X: Linguistically-based event modeling: A general approach to dynamic modeling. Vrije Universiteit, Amsterdam, 1995. 14 p.
3. Burg, J.F.M., van de Riet, R.P.: COLOR-X: Object modeling profits from linguistics. Vrije Universiteit, Amsterdam, 1994. 53 p.
4. Dignum, F., van de Riet, R.P.: Knowledge base modelling based on linguistics and founded in logic. Data & Knowledge Engineering, 1991. 37 p.
5. Lamsweerde, A.: Requirements Engineering: From System Goals to UML Models to Software Specifications. John Wiley and Sons, Ltd, 2009. 683 p.
6. Pohl K. Requirements Engineering: Fundamentals, Principles, and Techniques. – New York: Springer, 2010. – 813 p.
7. Rupp C. & die SOPHISTen. Requirements-Engineering und Management. Professionelle, iterative Anforderungsanalyse für die Praxis. 5. Auflage. – HANSER, 2009. – 555 p.
Development of Linguistic Pattern for Industry Specification
I. Arzamastseva, T. Kulakova
Ulyanovsk State Technical University, Ulyanovsk, Russia
e-mail:
[email protected],
[email protected]
Abstract. This study focuses on the development of technical specifications in accordance with the rules and requirements presented in Requirements Engineering.
1 Introduction
A specification is the basic document of every project and of the entire relationship between the customer and the developer. A correct specification, written and agreed upon by all stakeholders and decision-makers, is a key to successful implementation of the project. The difficulties of developing technical specifications gave rise to the field of Requirements Engineering, dedicated to the design, management and documentation of requirements. The specification is the source document for a variety of research and design work on new products and structures; it is the main document defining the technical and performance requirements for the research or design object. Typically, a specification presents the stages of the work, the technical documentation to be developed, quality indicators, and technical and economic requirements [1]. Together with Professor Gerhard Raffius of Darmstadt University of Applied Sciences, we developed a technical specification in Russian for the automated system «Caipirinhaautomat». To write the specification, we selected the state standard GOST 34.602-89 for the creation of an automated system and IEEE Standard 830, since they complement each other, which is important for our research.
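Since GOST 34.602-89 and IEEE 830 each prescribe mandatory content, combining them can be sketched as a simple section checklist. The section names below follow this paper's own headings and are purely illustrative, not the normative wording of either standard.

```python
# Illustrative sketch: checking a draft specification against a combined
# section checklist. Section names follow this paper's headings, not the
# normative text of GOST 34.602-89 or IEEE 830.
REQUIRED_SECTIONS = [
    "General information",
    "Purpose and goals of the system",
    "Scope",
    "Brief description",
    "Description of the system",
    "Customers and their requirements",
]

def missing_sections(document_sections):
    """Return the required sections absent from a draft specification."""
    present = {s.strip().lower() for s in document_sections}
    return [s for s in REQUIRED_SECTIONS if s.lower() not in present]

draft = ["General information", "Scope", "Brief description"]
print(missing_sections(draft))
```

Such a check only verifies the skeleton of the document; the content of each section must still satisfy the linguistic rules discussed below.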
2 General Information
The item "System name" must include the full name of the developed system; each word in the title should reflect what is needed from the system, which helps avoid the "erasing" defect:
− Name of the system: "Automat for the preparation of alcoholic cocktails called «Caipirinha» and soft drinks under the name «Caipirinha»".
It was necessary to pay attention to the verb used in the title: "preparation". While developing this specification we at first used the word "manufacturing", but this verb is more applicable to systems or mechanisms, not to a food product, in this case a cocktail. Thus, we eliminated "inexactly formulated words".
− Developer: students of Darmstadt University of Applied Sciences;
− Customer: Professor Gerhard Raffius of Darmstadt University of Applied Sciences;
− Planned start and end dates for creating the system: 15.09.2012 – 15.12.2012.
3 The purpose and goals of creating the system
This item is an integral part of any specification. When describing the purpose and goals of the system, authors often use nouns, while in the paragraphs concerning the persons who will use the system other parts of speech may be used.
Purpose of the system:
− automatic preparing and receiving of alcoholic drinks called «Caipirinha»;
− automatic preparing and receiving of soft drinks under the name «Caipirinha».
In this case, to avoid the "erasing" defect we used two verbs, "preparing" and "receiving".
The goals of creating the system:
− transition to automated preparation of the alcoholic cocktail called «Caipirinha»;
− transition to automated preparation of the non-alcoholic cocktail called «Caipirinha»;
− reducing the time to prepare the alcoholic cocktail called «Caipirinha»;
− reducing the time to prepare the non-alcoholic cocktail called «Caipirinha»;
− achieving the best possible result for mixed drinks of any complexity.
To avoid the "generalization" defect, a detailed description of each type of cocktail, alcoholic and non-alcoholic, was used. To avoid redundancy, we used the word "any".
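The wording "defects" invoked above (erasing, generalization, inexactly formulated words) can be hunted for mechanically. The sketch below is a minimal lexical checker; the trigger-word lists are illustrative assumptions, not taken from the cited Requirements Engineering literature.

```python
# Sketch of a lexical "defect" checker for requirement sentences.
# The trigger-word lists are illustrative; a real checker would be
# derived from the rules worked out in Requirements Engineering.
# Note: naive substring matching is used, so short words may
# over-match inside longer words.
DEFECT_WORDS = {
    "generalization": {"etc", "and so on", "various", "some"},
    "inexact wording": {"fast", "easy", "user-friendly", "appropriate"},
}

def find_defects(requirement: str):
    """Return (defect_class, word) pairs found in one requirement sentence."""
    text = requirement.lower()
    hits = []
    for defect, words in DEFECT_WORDS.items():
        for word in sorted(words):
            if word in text:
                hits.append((defect, word))
    return hits

print(find_defects("The system shall be easy to use, support various drinks, etc."))
```

A sentence flagged by such a check would then be reworded, as was done with "manufacturing" above.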
4 Scope
This item has a standard formulation that begins with the words "The system (machine, program, etc.) ..."; this formalization helps to avoid ambiguity: The system is intended for preparing cocktails at restaurants, bars, cafés and other public catering establishments.
5 Brief description
To avoid the "generalization" defect, it is necessary to describe separately each of the actions for which the system is designed: The system is designed to simplify the work of preparing cocktails of different complexity, as well as to reduce the time needed to prepare them.
6 Description of the system
This section gives a more detailed description: The system is designed to simplify the work of preparing cocktails of different complexity, as well as to reduce the time needed to prepare them. This paragraph describes how the system works: if the glass is installed correctly, the system performs all the steps necessary to prepare the cocktail. If the glass is not installed correctly, the system takes no action and the display shows a red signal indicating an error. If a required component is missing, the cocktail preparation is interrupted and a light comes on, reporting the missing ingredient.
7 Customers and their requirements
The Russian standards do not have this item, but it is very important because it allows us to describe the subsequent items of the specification:

Table 1. Stakeholders and their interests

Customer: 1. Execution of the task. 2. High performance of the system. 3. Security of the system. 4. Stability of the system. 5. Ease of use. 6. Ease of cleaning. 7. Easy replenishment of ingredients. 8. Easy diagnostics and troubleshooting. 9. High quality of the drink the system prepares.
Producer: 1. Execution of the task. 2. High performance of the system. 3. Security of the system. 4. Stability of the system. 5. Ease of use.
Owner: 1. Low operating cost of the system. 2. High performance of the system. 3. Ease of maintenance. 4. Security of the system. 5. Ease of cleaning. 6. Easy replenishment of ingredients. 7. Easy diagnostics and troubleshooting.
Client: 1. High performance of the system. 2. Reliability of the system. 3. Security of the system. 4. High quality of the drink the system prepares.
Wait staff (staff serving clients): 1. High performance of the system. 2. Security of the system. 3. Reliability of the system. 4. High quality of the drink the system prepares. 5. Ease of maintenance. 6. Ease of cleaning. 7. Easy replenishment of ingredients.
Wait staff (technical staff serving the system): 1. Security of the system. 2. Reliability of the system. 3. Easy diagnostics and troubleshooting.
Legislator: 1. Compliance with labor laws. 2. Compliance with laws and rules of hygiene. 3. Security of the system. 4. Reliability of the system.

8 Conclusions
For the linguistic engineering of industry specifications it is necessary to create a unified structure for the specification text, that is, linguistic software that will present the contents of the specification as completely as possible. The development of the field of Requirements Engineering has made a significant contribution to the stages, methods and tools of specification design. Linguistics has made a special contribution to this field: language "defects" were identified, and rules were worked out for writing sentences, describing requirements and using words. We have developed a technical specification that meets the requirements of IEEE 830-1998. This technical specification is understandable, since it contains all requirements for functionality, performance, associated requirements and additional requirements, and the reaction of the system to all possible input data in a variety of situations is known. All information in this specification is described consistently: all requirements agree with each other, and there are no contradictions between them. The developed specification can be easily changed, as all the information in it is interconnected, and the specification contains no redundant information.
References:
1. Cockburn A. Modern Methods for Describing Functional Requirements to Systems. – Moscow: Lori, 2002. – 263 p. (in Russian)
A concept of creating integrated intellectual CAPD for the process of forging
A.V. Konovalov, S.V. Arzamastsev, S.I. Kanyukov, O.Ju. Muizemnek
Institute of Engineering Science, Ural Branch of the Russian Academy of Sciences, Ekaterinburg, Russia
e-mail:
[email protected]
Abstract. The design of forging processes is a difficult intellectual problem. With an insufficient number of specialists in this field, the role and importance of computer-aided design increase. This paper aims to develop a concept for the future development and creation of CAPD for forging processes on the basis of ongoing research.
1 Introduction
Heavy engineering, power machine building and transport engineering are characterized by low-quantity production; therefore smith forging processes remain relevant in these industries to this day. In these processes, the forming of forgings is either not constrained on all sides by a deforming tool, or is restricted only on a small part of the surface adjacent to a simple-shape tool. However, the free deformation of metal follows its own laws of plastic flow, which must be taken into account by engineers when developing a forging process and by smiths during the manufacture of forgings. Despite the seeming simplicity of forging geometries, the forging process is calculated by experienced, specially educated and trained professionals. Developing the technological process requires extensive knowledge of the transformations of the metal structure occurring during forging, the properties obtained under various forging schemes, the plastic flow of the metal at different points of the surface, and the conditions to be used to avoid faulty forgings, since it is hardly possible to repair a faulty forging. Thus, the design of a forging process is a difficult intellectual task, and its correct solution depends on the knowledge of the experts involved. The shortage of certified expert technologists encourages the creation and industrial implementation of computer-aided design systems for forging technology.
2 "SHAFT CAPD" – CAPD of the forging process
An object-oriented approach underlies the creation of "SHAFT CAPD". In their work, designer-technologists use three types of objects: workpieces, forgings and cards of technological processes. Objects are provided with properties and processing methods that allow them to be represented not merely as a set of data, but also to undergo actions, for example, transformation of one object type into another, or transformation of an object within the same type in order to obtain a specimen with the required properties. For the forging process, transformations are performed from parent objects to child objects according to the chain WORKPIECE – FORGING – PROCESS. The object approach allows one to structure the design and programming, thereby facilitating and accelerating the development and operation of a design system with a huge number of data and parameters, which under an ordinary procedural approach would be extraordinarily cumbersome and complex both to maintain and to develop. One of the main principles of the system is modularity, implemented on the multi-agent conception [1], with agents responsible for the design of individual types of forgings. The standards GOST 7829-70 "Forgings made of carbon and alloyed steels manufactured by smith forging on hammers" and GOST 7062-90 "Forgings made of carbon and alloyed steels manufactured by forging on presses" provide 17 types of forgings depending on the geometry elements and dimension relationships (Figure 1). For CAPD, the first seven elongated types of forgings are conventionally combined into the "circular shafts" type (Fig. 1a). It includes various combinations of forgings consisting of coaxial cylindrical and conical steps. The second group of forgings is disks (Fig. 1b). The necessity of dividing forgings into various types is dictated by their different manufacturing technologies. The main forging operation for shafts is elongation, which decreases the cross-section of a forging, whereas the main operation for disks is reduction of the workpiece from the ends, which decreases the workpiece length and increases its cross-section. Shafts and disks make up the overwhelming bulk of forged products.
Apart from these, there are such types as rolled rings (Figure 1c), bushes and cylinders (Figure 1d), stepped bushes (Figure 1e), and forgings with a rectangular cross-section (Figure 1f). A separate design subsystem is created for each type of forging. Through a common interface, all local subsystems are combined into a single CAPD for forging processes (Figure 2).
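The WORKPIECE – FORGING – PROCESS transformation chain described above can be sketched in an object-oriented style. The class names, attributes and the allowance rule below are illustrative assumptions for this sketch, not taken from the actual SHAFT CAPD code.

```python
# Illustrative sketch of the parent-to-child object chain
# WORKPIECE -> FORGING -> PROCESS used in SHAFT CAPD.
# All names and the allowance rule are assumptions of this sketch.
from dataclasses import dataclass, field

@dataclass
class Workpiece:
    diameter: float  # mm
    length: float    # mm

    def to_forging(self, allowance: float) -> "Forging":
        # An extra metal layer (allowance) is assigned to the workpiece
        # surface for subsequent finishing machining (applied on both
        # sides here, as a simplification).
        return Forging(self.diameter + 2 * allowance,
                       self.length + 2 * allowance)

@dataclass
class Forging:
    diameter: float
    length: float

    def to_process(self) -> "Process":
        # A detailed process card would list every forging operation and
        # transformation; a single placeholder operation is recorded here.
        return Process(operations=["elongation"])

@dataclass
class Process:
    operations: list = field(default_factory=list)

card = Workpiece(200.0, 1000.0).to_forging(allowance=5.0).to_process()
print(card.operations)
```

The point of the chain is that each child object is derived from its parent by a method of the parent, mirroring the transformations the paper describes.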
Figure 1. Types of forgings: a) circular shafts; b) disks; c) rolled rings; d) bushes and cylinders; e) stepped bushes; f) forgings with a rectangular cross-section
Figure 2. Functional diagram of SHAFT CAPD: A) workpiece drawing; B) graphics editor; C) designing a forging; D) database of reference information; E) database of work information; F) designing a technological process; G) process card editor; H) process card; I) updating and archiving
The information about a workpiece, stored either as a paper drawing (Figure 2A) or as a model on an electronic medium, comes to a specially designed graphics editor for workpieces and forgings (Figure 2B). The graphics editor changes the geometry of the input workpiece and the forging during the design process. Then the forging is constructed from the workpiece (Figure 2C), i.e. an additional metal layer is assigned to the workpiece surface. It serves for the subsequent finishing machining of the workpiece or for closing small elements of the workpiece geometry that cannot be produced during forging because of technological limitations. At this stage the database of reference information (Figure 2D) and the database of work information (Figure 2E) are actively connected to the system. The database of reference information (DRI) provides information about grades of steel and alloys, standards for allowances and tolerances for the forgings being designed, rolled-stock dimension types, temperature conditions of forging and heat treatment, etc. After the forging is designed, the stage of detailed process design begins (Figure 2F). Note that, in contrast to the few known CAD systems for forging, the system discussed deals not with a flowchart but with a detailed card of a single process containing all the forging operations and transformations, with a sketch of the workpiece at each transformation and all the necessary dimensions. The process design is fully automated, but at any stage the user can easily correct the automated solution if necessary. A comfortable and ergonomic interface is provided, enabling the user to exploit all the capabilities of the system. The process card is finalized in the editor (Figure 2G) and then
it is passed to the forging shop and to storage, either in paper form or as an image file (Figure 2H), for later use in the enterprise PDM/PLM. To integrate the design results with lifecycle management systems in general and document circulation systems in particular, an object updating and archiving unit has been developed (Figure 2I). It enables change notifications to be taken into account and an unlimited number of variants of workpieces, forgings and process cards to be created for the same notation but with different internal parameters. For example, if the dimension type of a billet designed in the basic process is temporarily unavailable at the enterprise, the system makes it possible to create an alternative process for another billet, temporarily moving the basic process to archive status. When the desired billet arrives, the statuses of the two developed technological processes are exchanged: the archived process becomes current and the current one is archived. The described scheme of CAPD functioning is both the core and the shell for all the individual modules of the system. Together with the design organization of the system [2], it provides a powerful intellectual tool for designing forging processes for a wide range of products. The experience of implementing and operating CAPD for forging stepped shafts at the Ural Turbine Works [2] has demonstrated the correctness of the selected line of research and system development. Until now, the weak point in developing such systems has been adaptation to specific production conditions. In comparison with metal machining (turning, milling, grinding and the like), which has a rigid kinematics of the production process and typical equipment, forging depends more on specific production conditions.
A nonrigid deformation scheme depending largely on the blacksmith's skill, as well as heat, light and other conditions in the production sector, the presence of various basic and auxiliary tools at different enterprises, and peculiarities in the calculation of process transformations made by production engineers make the replication of a developed CAPD across a variety of enterprises practically impossible. In this regard, the problem arises of applying the experience and traditions of specialists and increasing adaptability to the conditions of a particular company. It is proposed to solve this problem by creating a hybrid system [3, 4] that uses the information stored in the database for retrieval and later application of knowledge. The essence of the hybrid approach is that the most similar object of the entire set of objects (forgings or processes) is selected from the existing database by comparing the characteristic parameters. By a special adaptation mechanism, the basic variant found is adapted to obtain a new process technology. The output product of the system is the forging process card, presented in Figure 3. The card includes all the parameters necessary for forging and heat treatment. All the process data are stored in the database to be used in product lifecycle management. Thus the developed concept of creating CAPD for forging enables one to construct an integrated intelligent system meeting all modern requirements.
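The retrieval step of the hybrid approach, selecting the stored forging most similar to the query by comparing characteristic parameters, can be sketched as a nearest-neighbour search. The parameter names, the equal weighting and the Euclidean metric below are assumptions of this sketch, not the paper's actual similarity measure.

```python
# Sketch of the hybrid approach's retrieval step: pick the stored
# forging whose characteristic parameters are closest to the query.
# Parameter names, weights and metric are illustrative assumptions.
import math

def distance(a: dict, b: dict, keys=("diameter", "length", "steps")) -> float:
    # Euclidean distance over characteristic parameters (equal weights
    # here; real weights would come from expert tuning).
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

def most_similar(query: dict, database: list) -> dict:
    return min(database, key=lambda rec: distance(query, rec))

db = [
    {"id": 1, "diameter": 200, "length": 1500, "steps": 3},
    {"id": 2, "diameter": 350, "length": 900,  "steps": 5},
]
query = {"diameter": 210, "length": 1400, "steps": 3}
print(most_similar(query, db)["id"])  # → 1
```

The record found this way would then be handed to the adaptation mechanism to produce the new process technology.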
Figure 3. The process card
3 Conclusion
Thus this paper has demonstrated that, along with the conventional object-oriented approach, CAPD should be developed as hybrid systems, in which the parameters of forgings and processes accumulated in the database are used for retrieval and later application of knowledge. CAPD for the forging process is created according to the modular approach based on the multi-agent conception. The system, developed with the use of a local interface, is integrated with PDM/PLM systems at the level of data interchange with the database. The experience of implementing CAPD for forging stepped shafts shows the effectiveness of the system and the correctness of the chosen line of development. The work was done as part of the fundamental research program of the Presidium of RAS, No 15, project 12-P-1-1024, and supported by RFBR, grant No 13-07-00531A, and by RFBR and the government of the Sverdlovsk region, grant No 13-07-96005 r_ural_a.
References:
1. Gagarin P.Ju., Konovalov A.V., Shalyagin S.D. Agent approach in forging CAPP for short forgings // Software and systems. No 1. pp. 148–152. (2011) (in Russian)
2. Konovalov A.V., Arzamastsev S.V., Shalyagin S.D., Muizemnek O.Ju., Gagarin P.Ju. Intelligent computer-aided design systems for technological processes of shaft forging on hammers // Blanking productions in mechanical engineering. No 1. pp. 20–23. (2010) (in Russian)
3. Konovalov A.V., Gagarin P.Ju., Arzamastsev S.V. Principles of constructing hybrid intelligent systems for design of the forging process // Kshp-OMD. No 1. pp. 27–32. (2013) (in Russian)
4. Kanyukov S.I., Arzamastsev S.V., Konovalov A.V. The application of a hybrid approach to computer-aided design of transformations in shaft forging on presses // Kshp-OMD. No 2. pp. 35–39. (2013) (in Russian)
Methods and tools of the analysis of graphical business process notations in design of the automated systems 1
A. Afanas’ev, R. Gainullin
Ulyanovsk State Technical University, Ulyanovsk, Russia
e-mail:
[email protected],
[email protected]
Abstract. This paper is about analyzing the correctness of business process graphical notations. We propose a verification mechanism based on the authors' graphic RV-grammars.
1 Introduction
One of the main methodological approaches to the design of automated systems, which has found material embodiment in technological practice, is the approach based on developing and realizing models of automated systems in the form of a complex set of business processes [1]. A business process is understood as a regularly repeated sequence of complexes of actions directed at satisfying some requirements of the consumer for the purpose of extracting useful effects [2] (for example, profit extraction, elimination of mistakes, business process scalability, etc.). When developing models of business processes, various graphic notations are now actively used: IDEF, BPMN, DFD, UML, EPC, etc. The practice of designing systems of this level has shown that the use of diagrams considerably increases the quality of the created systems, thanks to the unification of the language of interaction among the participants of the creation of the automated system, strict documentation of architectural design concepts, formal control of the correctness of diagram notations, and the subsequent automatic or automated translation of diagram models into program code. In design practice, the most widely distributed and used models of business processes are those presented in the UML language. The analysis of design tools using diagrams of the UML family has revealed a poor development of methods and means for the analysis and control of the correctness of the designed diagrams. There are practically no means of controlling the correctness of the semantic coherence of graphic specifications during group work [3]. The purpose of this work is to expand the class of diagnosable mistakes in the design of automated systems through the development and realization of methods and means of analysis and control of graphic notations of business processes, which allows the number of mistakes and the time of creation of an automated system to be reduced.
1 The reported study was partially supported by RFBR, research project No. 13-07-00483 a.
2 Development and research of methods of analysis and control of graphic notations of business processes
Important in the course of the analysis is the solution of the problem of neutralizing mistakes, which allows the analysis to continue and the maximum number of mistakes to be revealed in "one pass"; for this purpose an expansion of RV grammars [4] in the form of RVC grammars is offered. An RVC grammar of the language L(G) is an ordered five-tuple of nonempty sets G = (V, Σ, Σ̃, R, r₀), where
V = {v_l, l = 1, …, L} is the auxiliary alphabet (the alphabet of operations over the internal memory);
Σ = {a_t, t = 1, …, T} is the terminal alphabet of the graphic language, being the union of the sets of its graphic objects and communications (the set of primitives of the graphic language);
Σ̃ = {ã_t, t = 1, …, T̃} is the quasiterminal alphabet, an extension of the terminal alphabet. This alphabet includes: quasiterms of graphic objects that are not successors of the analysis; quasiterms of graphic objects that are successors of the analysis; quasiterms of graphic objects having more than one entrance; quasiterms of communications (tags) with the semantic distinctions defined for them; a quasiterm for verification of graphic objects that are successors of the analysis; and a quasiterm for the end of the analysis;
R = {r_i, i = 1, …, I} is the scheme of the grammar G (the set of names of complexes of productions, where each complex r_i consists of a subset of productions, r_i = {P_ij, j = 1, …, J});
r₀ ∈ R is the axiom of the RVC grammar (the name of the initial complex of productions), and r_k ∈ R is the final complex of productions.
A production P_ij has the form
ã_t —[W_ν(γ₁, …, γ_n), Ω_μ]→ r_m,
where W_ν(γ₁, …, γ_n) is an n-ary relation defining the type of operation over the internal memory, Ω_μ is an operator of modification that changes the type of operation over the memory in a definite way, and r_m is the name of the successor complex of productions. As internal memory, stacks and stores are used for processing graphic objects having more than one exit (to store information on communications – tags), and elastic tapes are used for processing graphic objects having more than one entrance (to note the number of returns to a given node and, therefore, the quantity of entering arcs). To form the set of quasiterms of successors, the following algorithm is used:
1) all areas of memory with which the grammar works are allocated;
2) the table of modifications of memory under the influence of each element of the alphabet of the grammar is formed;
3) paired elements are allocated, i.e. elements which influence the same area of memory with different effects (reading/writing).
Let us consider the problem of automatic generation of RV grammars on the basis of meta-descriptions of graphic notations of business processes (using the example of the UML language). A metacompiler is used at the stage of creating the scheme of the RV grammar. The main objective of its application is to simplify the work of drawing up the rules of the grammar while keeping the parsing of the diagram efficient. As input the metacompiler receives the context-free description of the diagram language; as output it issues the description of the automaton of the RV grammar in XML format. The process of metacompilation consists of several stages:
1) lexical analysis of the description of the diagram language;
2) syntactic analysis and creation of a parse tree;
3) analysis of the parse tree and creation of an intermediate structure of the grammar in terms of the specification language;
4) translation – transformation of the internal representation into RV grammar terms;
5) optimization and minimization;
6) saving the tables of the RV grammar in XML format.
The description of the specifications of diagram languages consists of rules, each containing the name of a non-terminal symbol following the reserved word Rule, an enumeration of the set of terminal and non-terminal symbols entering into the rule, and a description of the relations between the terminal and non-terminal symbols in the rule.
Figure 1 shows a sample graphic notation of an activity diagram and its meta-description as a rule:
Rule Blocks consists of Block block1, Block block2, Condition condition, Merger merger
Internal relationships: condition.output1 = block1.input, condition.output2 = block2.input, merger.input1 = block1.output, merger.input2 = block2.output
External relationships: blocks.input = condition.input, blocks.output = merger.output
A method of generating operations with the internal memory that possesses the necessary completeness of control has been developed. The method automatically generates the rules of work with the internal memory, which, together with the method of forming the list of successors described above, allows graphic specifications of any complexity to be analyzed with sufficient efficiency. A comparison of analysis capabilities is given in Table 1.
Table 1. Comparison of error classes diagnosed by IBM Rational Software Architect, ARIS, and the RV-grammar based analyzer: syntax errors, pair errors, circular references, mismatched numbers of input/output connections, invalid connections, errors distant from the context, semantic errors, ring connections, multiple communications, transmission control errors.
The developed method of analysis and control makes it possible to diagnose errors distributed across the context and semantic errors that are not detected by most modern editors.
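The idea of an automaton that walks a diagram while updating internal memory can be illustrated with a deliberately tiny sketch. The diagram encoding and the single check (every condition with two exits must be closed by a matching merge) are simplifications assumed for this sketch; the actual RV-grammar machinery is far richer.

```python
# Simplified sketch of analysis with internal memory: conditions
# (objects with two exits) push onto a stack, merges (objects with
# two entrances) pop it. A real RV-grammar analyzer also uses
# "elastic tapes" to count returns to multi-entrance nodes; this toy
# checks only condition/merge pairing and keeps analyzing after an
# error, in the spirit of error neutralization.
def check_pairing(nodes: list) -> list:
    errors, stack = [], []
    for i, kind in enumerate(nodes):
        if kind == "condition":
            stack.append(i)
        elif kind == "merge":
            if not stack:
                errors.append(f"merge at {i} has no matching condition")
            else:
                stack.pop()
    errors.extend(f"condition at {i} is never merged" for i in stack)
    return errors

good = ["condition", "block", "block", "merge"]
bad = ["block", "merge", "condition"]
print(check_pairing(good))  # → []
print(check_pairing(bad))
```

Note that the analysis does not stop at the first error: all mistakes are collected in one pass, which is precisely the motivation for the error-neutralization mechanism above.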
3 Analyzing graphical notations developed in a group
Designing in a group makes new demands on the correctness of diagrams concerning their semantic coherence. To solve this problem we suggest a multi-level system of interacting RV-grammars:
RVM = RV_i(RV_1, …, RV_n).
A grammar RV_i contains, as one of its states, the grammar RV_j; the grammar RV_j can in turn also be composite. The grammar RVM receives alphabet characters from the input tape and transmits them to the appropriate level. Elements that transfer the grammar to another level we call subterms. The description of an RVM-grammar then takes the form
G = (V, Σ, Σ̃, Σ̄, R, r₀),
80
A. Afanas’ev. R.Gainullin
where Σ̄ is the set of subterms, i.e. the elements of the grammar that transfer the automaton to the lower level. A production of the grammar containing a subterm has the form
ā_t —[W_ν(γ₁, …, γ_n)]→ r_m, r_l,
where r_l is the name of the initial complex of the sub-grammar. In group design, control of the ontological consistency of the complex of designed diagrams is important. Designers drawing up diagrams may forget to take into account earlier decisions (reflected in other diagrams). For example, a designer can specify a component of the system and overlook the fact that at previous stages this component was implemented using other technologies. For the analysis of matched semantic correctness it is proposed to develop a multi-level grammar. The upper level of the compositional grammar works with use case diagrams, because the development of a system should start with this diagram (according to the RUP methodology). During the analysis, semantic information about the domain is built up, and each new diagram is added to the overall structure with a corresponding consistent expansion of the domain concepts. During the construction of the first diagrams, only semantic consistency within each diagram is checked, validating the pairing of semantic concepts. When a new diagram is added, its consistency is verified, along with the coherence of the integrated model of the projected automated system. Semantic consistency is analyzed on the basis of the textual components of the diagrams; by the textual component of the grammar we understand the texts belonging to the blocks and connections of the diagrams. To test the integrated model, a graph of semantic relationships between the elements of the automated system is constructed. To solve this problem, an adapted method of selected lexical and syntactic patterns is used. The proposed method makes it possible to diagnose and prevent such semantic errors as synonymous objects, antonymous objects, and erroneous relations. One class of errors is not controlled by the proposed method: the incompatibility of objects. This error class is characterized by the use of incompatible concepts, such as Stateless and Stateful system components.
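The last error class mentioned, incompatible concepts such as Stateless and Stateful components, could in principle be caught with a small ontology of mutually exclusive term pairs. The pair list and the word-level matching below are illustrative assumptions of this sketch, not part of the proposed method.

```python
# Sketch of checking the textual components of diagrams against a
# tiny "ontology" of mutually exclusive concepts. The pair list is
# an illustrative assumption; a real system would draw it from a
# domain ontology.
INCOMPATIBLE = [({"stateless"}, {"stateful"})]

def incompatibilities(labels: list) -> list:
    """Return (concept, concept) pairs that co-occur in the diagram texts."""
    words = {w for label in labels for w in label.lower().split()}
    found = []
    for left, right in INCOMPATIBLE:
        if words & left and words & right:
            found.append((sorted(left)[0], sorted(right)[0]))
    return found

labels = ["Stateless session bean", "Stateful order component"]
print(incompatibilities(labels))  # → [('stateless', 'stateful')]
```

A check of this kind would complement the lexical and syntactic pattern method, covering the one error class it leaves undetected.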
4 Conclusion
1. A method has been developed for automatically building the list of successors for the error-neutralization analysis algorithm, which allows the set of successors for the analysis to be built without the involvement of the grammar's author. The method ensures continuation of the analysis with the minimum number of skipped terms of the graphic input.
2. An algorithm of a metacompiler of graphic languages has been developed that allows an effective RV grammar for UML to be built from the graphic description; its distinctive feature is the automatic construction of operations with the internal memory.
3. A multi-level apparatus of grammars is proposed that allows faults distributed across diagrams to be detected.
Methods and tools of the analysis of graphical business process notations in design of the automated systems
4.
5. 6.
81
Developed a method for analysis and control of semantic errors (based on ontological models) complex diagrams created in the process of designing group. The method extends the class of diagnosed errors, including semantic, and can detect distributed among multiple charts errors. Designed syntactically-oriented analyzer UML-diagrams for MS Visio, which allows detecting made when constructing diagrams syntax errors. Elaborate architecture of the control system correctness graphic specifications, offering a full range of functionality for the analysis and control of syntactic and semantic errors.
References:
1. Quality management systems — Fundamentals and vocabulary, ISO 9000:2005. — 2005. — http://www.iso.org
2. Osterwalder A., Pigneur Y. Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers. — Wiley, 2010. — 288 p.
3. Gainullin R. Control of graphic specifications in the processes of collective design of automated systems // IVT-2013. — 2013. — P. 31-38.
4. Afanasev A., Sharov O. Methods and tools of translation of graph grammars // Programmirovanie. — 2011. — № 3. — P. 65-76.
Ontology-Based Approach for Control of IDEF Diagrams

A. Afanas’ev, D. Bragin
Ulyanovsk state technical university, Ulyanovsk, Russia
e-mail:
[email protected]
Abstract. This article deals with the problems of analysis and control of diagram languages, which are used in the design of complex computer-aided systems. The authors propose an ontology-based approach for the control of the textual semantics of IDEF diagrams.
1 Introduction
Modern approaches to software architecture and computer-aided design are based mainly on graphic specifications. The use of graphic specifications usually has a systemic nature, which involves the creation of a complex of diagram models designed by a development team. Modern design tools have weak resources for analyzing and controlling graphic information as well as textual semantics; therefore the problem of the semantic correctness of a complex of diagrams is especially acute.
2 IDEF methods for ontological analysis
The IDEF standard provides a set of graphic specifications which describe the system behavior from different points of view. In this situation the development of an ontology of the subject area is a key point for success in the design process. The IDEF standard provides a tool for ontology creation, the IDEF5 graphic specification. The IDEF5 ontology development process consists of the following five activities:
• Organizing and Scoping. The organizing and scoping activity establishes the purpose, viewpoints and context for the ontology development project, and assigns roles to the team members.
• Data Collection. During data collection, raw data needed for ontology development is acquired.
• Data Analysis. Data analysis involves analyzing the data to facilitate ontology extraction.
• Initial Ontology Development. The initial ontology development activity develops a preliminary ontology from the data gathered.
• Ontology Refinement and Validation. The ontology is refined and validated to complete the development process.

The reported study was partially supported by RFBR, research project No. 13-07-00483 a.
There are two graphic languages which support the ontology development process: the IDEF5 schematic language (SL) and the IDEF5 elaboration language (EL). The schematic language is a graphical language specifically tailored to enable domain experts to express the most common forms of ontological information. It enables average users to input the basic information needed for a first-cut ontology and to augment or revise existing ontologies with new information. The elaboration language is a structured textual language that allows detailed characterization of the elements in the ontology. The IDEF5 specification provides four types of diagrams, which are used to accumulate the ontological information in a clearly understandable form:
• Classification schematics;
• Composition schematics;
• Object state schematics;
• Relation schematics.

3 Control of IDEF diagrams
The ontology development process involves determining the objects present in the developed system. At this stage the classification diagram is built, and thereby the hyponymy relation is determined. This diagram is analysed by the RV-analyser [1]. During the analysis the graphical information is controlled, and the information about the "parent - child" relation is put into the internal memory of the automaton used by the RV-grammar based approach. If the diagram is correct, this stage is successful; otherwise the team should correct and finalize the diagram. In order to determine the meronymy relation ("part of"), the development process involves building the composition diagram. Operations over the internal memory which stores the information about the "parent - child" relation are executed by the RV-automaton to verify the consistency of the composition diagram with the classification diagram. The development and analysis of the object state diagram as well as the relation diagram are similar to the approach described above. Subsequent models of the IDEF specification, such as IDEF0 and IDEF3, use the IDEF5 complex model as a source of ontological information: the textual attributes of each diagram are checked for consistency with the existing ontology. The authors propose to verify the consistency of the diagrams by methods of link detection, namely text matching and hierarchical consistency of relations. Text matches are identities of concept names (related words are also involved) and of textual definitions. Hierarchical relation matching involves the search for common concepts, the filtering of ambiguities, the calculation of semantic distance, and the diffusion of semantic groups [2].
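The role of the internal memory can be sketched as follows. This is a toy illustration under assumed names, not the RV-automaton itself: relations recorded while analysing the classification diagram are used to validate the composition diagram.

```python
# Toy memory: the classification diagram fills a "parent - child" store, and
# composition-diagram links may then only use concepts already classified.
class OntologyMemory:
    def __init__(self):
        self.is_a = {}       # child -> parent (from the classification diagram)
        self.concepts = set()

    def add_is_a(self, child, parent):
        self.concepts.update((child, parent))
        self.is_a[child] = parent

    def check_part_of(self, part, whole):
        """Report composition-link concepts missing from the classification."""
        errors = []
        for c in (part, whole):
            if c not in self.concepts:
                errors.append(f"unknown concept: {c}")
        return errors

mem = OntologyMemory()
mem.add_is_a("Disk", "Storage")
print(mem.check_part_of("Disk", "Server"))  # 'Server' was never classified
```

The real method performs such checks as memory operations attached to automaton transitions, rather than as a separate pass.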
4 Conclusion
This article has presented an ontology-based approach for control of the textual semantics of IDEF diagrams. The authors intend to use the described approach in the IDEF analyser which was earlier implemented for Microsoft Visio [3].

References:
1. O. Sharov, A. Afanas’ev. Syntax-Oriented Implementation of Visual Languages Based on Automaton Graphical Grammars // Programming and Computer Software, 2007, vol. 31, № 6, pp. 332-339.
2. I. Kuznetsov. Semantic representations. — Moscow: Science, 1986.
3. A. Afanas’ev, D. Bragin. The development of the IDEF analyser // Vestnik of UlGTU, Ulyanovsk, 2011. — № 2. — pp. 41-49.
Client-Server Model for Analysis of Diagram Languages

A. Afanas’ev, D. Bragin, R. Gainullin
Ulyanovsk state technical university, Ulyanovsk, Russia
e-mail:
[email protected]
Abstract. This article presents a client-server system for the analysis of diagram languages. The general architecture of the system and the specification of the automaton grammars are described.
1 Introduction
Nowadays, development technologies for complex computer-aided systems are based on graphic information, which is presented in the form of diagram languages. There are many commercial and open-source software tools which allow designing a computer system with diagram languages such as UML, IDEF, EPC, SDL, DFD, etc. Mostly these tools are not oriented toward collaborative development and have weak resources for extending a language schema as well as for analyzing developed diagrams. The authors propose a general analyser of diagram languages which allows detecting syntax and semantic errors, extending the language schema, and supporting collaborative development.
2 The general description

The proposed system should provide the following capabilities:
• Creation of diagrams in graphical languages;
• Extension with new specifications of diagram languages;
• Analysis of developed diagrams on the basis of loaded analysis algorithms;
• Remote access to diagrams and their analysis results.

3 The system architecture
Figure 1 below illustrates the general architecture of the system. The system consists of three layers: the presentation layer, the web-server layer and the data processing layer.
The reported study was partially supported by RFBR, research project No. 13-07-00483 a.
Figure 1. The architecture of client-server system
This separation allows responsibilities to be clearly delineated between different system components; moreover, the client-server nature of the system allows each layer to run on a single computer or on different nodes of the enterprise network. The presentation layer is responsible for the interaction between the software tools used to develop diagrams and the analysis system. This layer is implemented in the form of plugins which are embedded into software design tools and convert the internal representation of a diagram into a format recognizable by the server side. The two most common formats were chosen as base formats: JSON (JavaScript Object Notation) and XML (eXtensible Markup Language). The main objectives of this component are:
• Server-side authorization;
• Conversion of diagrams into one of the base formats;
• Transfer of the well-formed representation to the server side;
• Getting the list of errors;
• Indication of errors on the user's diagrams.
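A minimal sketch of the conversion step performed by such a plugin is shown below; the JSON field names are illustrative assumptions, since the real exchange format is not specified here.

```python
import json

# Converting a toy internal diagram representation (blocks and links) into a
# JSON payload of the kind a server side could parse. Field names are assumed.
def diagram_to_json(blocks, links):
    payload = {
        "format": "json",
        "blocks": [{"id": b[0], "label": b[1]} for b in blocks],
        "links": [{"src": s, "dst": d} for s, d in links],
    }
    return json.dumps(payload, sort_keys=True)

doc = diagram_to_json([(1, "Start"), (2, "End")], [(1, 2)])
print(doc)
```

The resulting string would then be sent to the web-server layer, which decodes it back into internal structures.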
The web-logic layer contains the web server, which receives the user requests. The request body is the well-formed representation of the user's diagrams. Since the data may be presented in different formats (XML, JSON), the next step is the conversion of this external representation into internal data structures (Encoder, Request). In order to make the internal view independent of the external one, the design pattern "abstract factory" is used, which allows adding new formats of external data with minimal effort. Figure 2 gives a more detailed description of the web-logic layer as a sequence diagram. The data processing layer is a finite state automaton with internal memory. The analysis algorithm is based on the RV-grammars described in [2]. The necessary grammar is selected from the grammar library. At the analysis stage, the elements of the diagram are sequentially read and fed into the finite state machine, changing the automaton state. There are two possible cases. If the processed element is allowed in the current state, a shift is performed and the operation over the internal memory that corresponds to the active transition is executed. If the next element is prohibited in this state, the machine goes into recovery mode. Recovery mode is a special state of the machine that handles an erroneous input element: the state of the analyzer is restored by the method of followers [3]. This mode allows detecting not only the first error in the graph, but also further errors made by the developer. An error is recorded at the moment of the transition into the recovery state.
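The shift/recovery behaviour of the data processing layer can be sketched as follows. The tiny transition table and state names are illustrative assumptions; the real analyser restores its state by the method of followers, which is approximated here by resuming at the next element that is valid in the current state.

```python
# Simplified analysis automaton: a shift follows the transition table; a
# prohibited element is recorded once and the machine enters recovery mode
# until a valid element appears again.
TRANSITIONS = {("start", "block"): "node",
               ("node", "arrow"): "edge",
               ("edge", "block"): "node"}

def analyse(elements):
    state, errors, recovering = "start", [], False
    for el in elements:
        if (state, el) in TRANSITIONS:
            state = TRANSITIONS[(state, el)]
            recovering = False          # normal shift: leave recovery mode
        elif not recovering:
            errors.append((state, el))  # record the error once
            recovering = True           # enter recovery mode
    return errors

print(analyse(["block", "arrow", "arrow", "block"]))  # one recorded error
```

Because recovery suppresses cascading reports, the analyser can list several genuine errors per diagram instead of stopping at the first one.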
4 Conclusion
This article has described the architecture of a client-server system for the analysis of diagram languages. Following this architecture, the authors have developed analysers of UML and IDEF diagrams for Microsoft Visio 2010.
Figure 2. The sequence diagram of the web-logic layer

References:
1. M. Fowler. Patterns of Enterprise Application Architecture. — M.: Williams, 2010. — 544 pp.
2. O. Sharov, A. Afanas’ev. Neutralization of syntax errors in the graphic languages // Programming, 2008. — № 1. — pp. 61-67.
3. Associative microprogramming / Afanas’ev A.N., Guzhavin A.A., Kokaev O.G. — Saratov: Publishing house of the Saratov university, 1991. — 166 pp.
Diagrammatica in computer-aided design of complex computer-based systems

A. Afanas’ev
Ulyanovsk state technical university, Ulyanovsk, Russia
e-mail:
[email protected]
Abstract. The article presents an automaton approach to the analysis of diagrammatic models of the description and implementation of business processes.
Diagrammatic tools, in the form of an extensive family of diagram languages, are used to describe the system architecture, business processes, behavior and relevant data structures in the practice of design and implementation of complex computer-based systems, which are a subclass of SIS (software-intensive systems) and which include the class of complex computer-aided systems. The active use of diagrammatics is associated with [1]:
- the sustained use of diagram schemes that contribute to solving problems and to the developers of an SIS understanding their meaning in operational considerations;
- the regulatory requirements for the diagrammatic documentation of the development process, its components, and the products of project activities;
- attempts at automatic and/or automated transformation of diagram schemes into their material embodiment.

Since the mid-2000s, research on the development and implementation of methods and tools for processing diagrammatic artifacts has been conducted at the Department of Computer Engineering of Ulyanovsk State Technical University. The automaton approach was developed to solve these problems. The basic principles of the approach are:
1. An analyzing automaton with memory is the basis for the processing methods, including analysis, control and translation.
2. The methods and tools are syntax-oriented.
3. Complete control of syntactic and semantic errors within the declared class is provided. The class of semantic errors is extensible without a fundamental change to the method of analysis and control, through the use of additional memory structures.
4. The basic mathematical tool is the family of RV-grammars [2, 3, 4].

Grammars and analyzers for the diagrammatic notations UML, IDEF, eEPC, Petri nets and SDL, as well as for the analysis of the network versions of diagram languages, have been developed [5]. Currently, research is being conducted in the following areas:
- The development of hierarchical RV-grammars, oriented toward the analysis and control of diagram schemes in a collective design process;
- The development of ontological RV-grammars, oriented toward the semantic control of related diagrams;
- The development of RV-grammars for the diagram workflows of SharePoint technology;
- The development of fuzzy RV-grammars and the search for domains of their application;
- The development of an invariant client-server system for the analysis and control of diagram schemes, for use in the tools of various systems and technologies of complex computer-aided design.

References:
1. A. Afanas’ev. The methodology and tools for analysis and control of workflows in computer-aided design of complex computer-based systems // Proceedings of the Congress on Intelligent Systems and Information Technologies “IS & IT'12”. Scientific publications in 4 volumes, Vol. 1. Moscow: Fizmatlit, 2012. Pp. 391-399.
2. O. Sharov, A. Afanas’ev. Syntax-oriented realization of graphic languages based on automaton graphic grammars // Programming, 2005. № 6. Pp. 56-66.
3. O. Sharov, A. Afanas’ev. Neutralization of syntax errors in the graphic languages // Programming, 2008. № 1. Pp. 61-66.
4. O. Sharov, A. Afanas’ev. Methods and tools for translation of graphic diagrams // Programming, 2011. № 3. Pp. 65-76.
5. A. Afanas’ev, R. Gainullin. Software system for analysis of diagram languages // Software products and systems, 2012. № 3. Pp. 138-141.
Distributed System for Ultrasonic Non-Destructive Tomography Michael Bron, Gerhard Raffius, Alois Schuette, Vadim Shishkin, Denis Stenushkin University of Applied Sciences Darmstadt, Department of Computer Science, Schoefferstr. 8a, 64295 Darmstadt, Germany
[email protected] [email protected] http://www.fbi.h-da.de ScanMaster Systems Ltd. 23 Hamelacha St., Afek Park P.O.Box 11431, Rosh Ha'ayin 48091, Israel
[email protected] http://www.scanmaster-irt.com Ulyanovsk State Technical University
[email protected]
http://www.ulstu.ru

Abstract. The paper describes the objectives and the main approach applied in the Distributed System for Ultrasonic Non-Destructive Tomography (DSUNDT) project. Ultrasonic tomography is a novel and promising area of non-destructive testing. This approach makes it possible to improve the quality and reliability of testing results and to simplify test data analysis. The distributed system approach makes it possible to utilize powerful cloud-based computation and data storage resources and to reduce the total system cost for customers.
1 Introduction
The DSUNDT project deals with the development of an innovative solution for an Ultrasonic Non-Destructive Testing Tomography system. It focuses on providing powerful computing resources for generating a tomographic presentation of inspection data collected from the relevant production-floor non-destructive systems, without increasing the computation resources installed on the systems themselves. The final goal is to create a reliable, inexpensive solution whose use will reduce system costs and minimize analysis time. Different users of NDT inspection equipment will gain access to an advanced method of result analysis, the tomographic presentation. The DSUNDT project is an important step forward in NDT for the following reasons:
• The system uses a combination of cluster and cloud technologies, where most of the data processing and handling is performed by the cloud environment rather than by the system installed in the factory. This approach reduces the overall cost of the equipment and its maintenance, and simplifies the operator's work;
• Tomographic 3D presentation of the inspection data within the real part geometry allows proper defect validation, reducing the risk of under- and over-detection of defects;
• The system allows easy data sharing, which is especially important for international companies, where an expert may be in a different geographic location than the inspection or monitoring system;
• Having all analysis algorithms centralized in the cloud assures properly homogeneous results of inspection and monitoring processes;
• A caching mechanism prevents down time of the equipment due to communication problems.
The project goals include not only development of the application part, but also research into finding an appropriate architecture and the development of new approaches. These approaches are:
• Innovative techniques of data transfer from an industrial system to distributed networks;
• Complicated mathematical computation using distributed or remote computation;
• Transparent data security;
• 3D presentation over the Internet.
2 Research background and objectives

NDT ultrasonic tomography is one of the most advanced techniques for the presentation and analysis of inspection results. It not only allows presentation of the real form and dimensions of a defect in 3D, but also cleans out most of the noise and artifacts. Over the last decade, ultrasonic computed tomography has found its use in industrial NDT systems, mostly for analyzing big critical parts such as power-generator turbine parts. The main reason for such use is the requirement for more reliable defect recognition where both under-detection and over-detection are unacceptable. Tomography assumes reconstruction of the part geometry from the collected data. An algorithm developed within the project makes use of the scanned object model and material knowledge to overcome two of the major difficulties of ultrasonic computed tomography:
1. The inverse scattering problem. When dealing with soft tissues, a common approximation to the solution is made by disregarding the beam diversion when passing between materials. This approximation is not valid when dealing with objects made of rigid industrial materials. The new algorithm uses information about the object shape and material to calculate the beam diversion angle when entering and leaving the object, and plans the scan (positions of the transmission and receiving elements) accordingly.
2. Calculation complexity. Three known techniques used when reconstructing a model are phase shift, absorption and beam speed. All three methods use the water path as a reference. The new algorithm instead uses the expected phase shift / absorption / speed in the scanned shape as a reference. Assuming that most of the scanned object does not contain flaws, this enables removal of most of the 'digitized' area of the object, resulting in much less data to crunch, which in turn enables working at higher resolution and faster calculation of results.
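The data-reduction idea of point 2 can be sketched as follows; the numbers and tolerance are illustrative, and the real algorithm works on phase shift, absorption and speed rather than a generic scalar.

```python
# Illustrative sketch (not the project's algorithm): voxels whose measured
# value matches the expected value for a flawless part are dropped from the
# inversion, leaving only suspect data to process.
def suspect_voxels(measured, expected, tol=0.05):
    """Keep indices where the measurement deviates from the flaw-free reference."""
    return [i for i, (m, e) in enumerate(zip(measured, expected))
            if abs(m - e) > tol]

print(suspect_voxels([1.00, 1.02, 1.40, 0.99], [1.0, 1.0, 1.0, 1.0]))
```

When most of the volume is flaw-free, only a small fraction of indices survives, which is exactly what makes higher resolutions affordable.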
Usually such functionality is supplied only with the most expensive NDT equipment, because tomography processing is very CPU- and resource-demanding, which makes tomography functionality inaccessible for most NDT customers. Even if a system is equipped with tomography processing, the use of the tomography view is limited to the system PC, and the end users of the inspected parts have no access to this data, since they would need a similarly powerful station on their site. The large amount of inspection data greatly limits the possibility of data sharing between the different parties involved in the NDT process. The distributed system aims to solve these problems by utilizing distributed resources for tomography processing and for data storage and sharing. Most modern distributed systems are based on the following basic principles:
1. Interaction of distributed objects (CORBA, DCOM). This approach can be used for the design of systems where the structure and number of components cannot be determined at the design stage or can vary during operation.
2. Parallel computing in multi-processor and clustered systems. This approach is effective for easily parallelized tasks that require handling large volumes of data.
3. Client-server applications, including Web applications and Web services. This architecture is mostly used for systems where large numbers of users access shared data sets.
Implementing any one of the above approaches, or a combination of them, allows creating the required distributed system. However, there are certain difficulties faced by end users:
1. With increasing functional and organizational complexity of systems, the operation cost grows faster, which reduces the overall effectiveness of the systems;
2. The high cost of system scalability is accompanied by slow reaction to new demands and long system down time during upgrades;
3. Effective dynamic reallocation of computing power between different tasks, depending on priority or hardware usage, is impossible.
These drawbacks can be overcome with the recently popular cloud approach to creating distributed systems. The cloud approach hides from the end user any need to be aware of the details of the computing platforms, their architecture and methods of operation. It also frees the designers of the system from dealing with the computing infrastructure, so they are able to concentrate all their efforts on solving the real tasks. A unified cloud interface provides simple access to its functions, which simplifies system development and support. Cloud virtualization allows quick responses to end users' needs for scalability and redistribution of computation resources. However, working with the large volumes of data required for tomography within cloud systems is a challenge because of the limited bandwidth of communication channels. It requires the development of caching mechanisms and protocols between the components for operating the system with the specified quality indicators. Besides the inspection results, the data to be passed will also contain geometrical and other information about the inspected part. This information is
proprietary to the part's designers and manufacturers, and therefore special attention should be paid to security issues. Cloud and grid computing security is a relatively new topic. To be able to build NDT systems where each customer application runs securely in its own separate environment within the cloud, virtualization techniques should be used. Research addressing the security of cloud computing in combination with virtualization techniques is also necessary. The focus of the approach is to combine the security techniques of both worlds, clouds and virtual machines, together with well-known cryptographic protocols.
3 DSUNDT architecture and general approach

The system includes the following major components (as depicted in Figure 1):
• the UT inspection system, where the NDT data is collected and stored locally;
• cloud-allocated computation resources for tomography analysis and data sharing;
• end-user PC(s).
The system includes a set of subsystems that are allocated and run at different boundaries of the main system components. The subsystems are listed in the table in Figure 2.
Figure 1. DSUNDT architecture
Subsystem                | Boundary         | Description
NDT data acquisition     | NDT system       | Subsystem responsible for data collection and storage
Interaction interface    | NDT system/Cloud | Subsystem responsible for communication between the NDT system and the cloud, as well as for caching
Tomography visualization | NDT system       | Module responsible for visualization of 3D information processed by cloud-based tomography
Security system          | Cloud            | Module responsible for data security
Remote data storage      | Cloud            | NDT data storage based on existing cloud technologies
Tomography algorithms    | Cloud            | Module responsible for tomography analysis of inspection data
3D model generation      | Cloud            | Module for generation of a 3D model from the results of the tomography algorithms

Figure 2. DSUNDT subsystems
4 Data caching and transfer approach
Development of a distributed information system for ultrasonic tomography, where the data gathering, storage, processing and visualization functions are performed by components located far from each other, implies the development of data transfer tools that solve the following tasks:
• interaction channels for system components located far from each other;
• means of identification and addressing of system components;
• means of identifying component types;
• dynamic inclusion and exclusion of components in the system;
• session management for the components of the data transfer system;
• means of transferring data such as operation requests, operation results, error messages, etc. between the components of the data transfer system;
• hardware-platform-independent interaction interfaces;
• effective, high-performance data transfer mechanisms for large amounts of data.
Solving the mentioned tasks provides the ultrasonic tomography system with the following abilities:
• the ability to include components that are not located in the same region;
• the ability to change the list of system components dynamically;
• the ability to include components operating on different hardware platforms;
• the ability to include components of different types;
• the ability to share common data and to control data access, matching data sharing and data security requirements;
• the ability of system components to operate effectively while processing large amounts of data.
The mechanisms for transferring large data must provide transparent handling of network shutdowns. In DSUNDT such a mechanism is based on a buffering cache and a network monitoring mechanism. This allows data to be transferred transparently, without wasting time during a network shutdown or other problems of the transfer environment. The system interaction protocol and interfaces are intended to organize the interaction of components and link them into a single system. Taking into account the general approach of developing the ultrasonic tomography system as a cloud system, the Internet must be used as the basis for the components' interaction mechanisms. Its use results in the following positive effects:
• There is no need to install special channels for the interaction of system components. Existing channels used for Internet access provide the appropriate interaction abilities, taking geographical factors into account;
• The system is able to include components operating on different hardware platforms.
The application-level data transfer protocol and interfaces are developed taking into account the following requirements:
• The protocol and interfaces must be platform-independent and enable system components to interact over the Internet;
• The protocol and interfaces must provide means for addressing and identifying system components;
• The protocol and interfaces must provide means for dynamically including and excluding system components;
• The protocol and interfaces must provide means for handling interaction sessions;
• The protocol and interfaces must provide means for transferring data between system components;
• The protocol must be able to use secure channels between the NDT PC and the cloud.
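The buffering-cache mechanism mentioned above can be sketched as follows; the class and callback names are assumptions for illustration.

```python
import collections

# Minimal sketch of the buffering-cache idea: results are queued locally and
# flushed when the network monitor reports the link is up, so an outage does
# not stall the inspection system. `send` stands in for the real transfer.
class BufferingCache:
    def __init__(self, send, link_up):
        self.queue = collections.deque()
        self.send, self.link_up = send, link_up

    def submit(self, chunk):
        self.queue.append(chunk)
        self.flush()

    def flush(self):
        while self.queue and self.link_up():
            self.send(self.queue.popleft())

sent, up = [], [False]
cache = BufferingCache(sent.append, lambda: up[0])
cache.submit("scan-1")   # network down: the chunk is buffered
up[0] = True
cache.submit("scan-2")   # network up: both chunks are sent in order
print(sent)
```

A production version would persist the queue to disk and call `flush` from the network monitor as well, but the ordering guarantee is the same.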
5 Tomography algorithms and visualization
Commonly widespread tomography inverse-modeling algorithms implement the object reconstruction process using the 'theoretical water path beam' as a reference, i.e. what the phase shift, absorption or beam speed would be if the beam went through water. This approach is inevitable in the general case, but has several downsides, such as massive number crunching and either disregarding beam deviation or confronting the ill-defined inverse modeling problem. The DSUNDT algorithm makes use of knowing in advance the actual shape of the scanned object, together with its material type and characteristics, to overcome these two major difficulties.
1. The algorithm calculates the positions of the scanning elements (transmission and receiving), taking into account the surface contour and Snell's law, and thus 'knows' the beam diversion when entering and leaving the object. This way, the problem of calculating the inverse modeling of the beam deviation is minimized.
2. The algorithm makes use of the theoretically calculated beam path through the object. Since only a small fraction of the object’s volume under test is potentially populated with flaws, almost all of the ‘digitized’ volume elements can be removed from the inversion process, resulting in much less data to crunch. This enables a higher resolution and a higher efficiency of calculation.
The results of the tomography algorithm are converted to a 3D model representation which contains information about the part itself and the indications found in it. New cloud-compliant parallel algorithms for server-side cut computation and visualization preparation, together with modern hardware-accelerated shader technology, produce 3D models that are portable across the network in real time. The 3D information is presented to the end user, wherever he is located. The actual generation of the 3D model is done in the cloud, while the presentation of the model and manipulation of it are done on the end-user station.
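The Snell's-law step in point 1 can be illustrated with a short worked example; the sound velocities used (water about 1480 m/s, steel about 5900 m/s) are typical textbook values, not project data.

```python
import math

# Refracted-beam angle when an ultrasonic beam crosses a material boundary.
def refraction_angle(theta_in_deg, v1, v2):
    """Snell's law for acoustics: sin(t2) / sin(t1) = v2 / v1."""
    s = math.sin(math.radians(theta_in_deg)) * v2 / v1
    if abs(s) > 1.0:
        return None  # beyond the critical angle: no transmitted beam
    return math.degrees(math.asin(s))

# A 5-degree incidence in water refracts to roughly 20 degrees in steel.
angle = refraction_angle(5.0, 1480.0, 5900.0)
print(round(angle, 1))
```

Knowing this angle for each element position is what lets the planner place transmitters and receivers so the beam path through the part is predictable.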
6 Conclusion
This paper has described the general architecture and approaches used in the development of the Distributed System for Ultrasonic Non-Destructive Tomography. The system aims to improve the analysis of non-destructive testing results by reducing the chance of under- and over-detection, and to reduce the total inspection cost. In order to achieve this goal, DSUNDT introduces a cloud-based approach: the cloud side stores inspection data and generates the 3D representation of the results, while the client side serves the inspection process and displays the 3D model. The system's operation is based on novel approaches to ultrasonic tomography calculation and distributed system building.
Approaches to Development of Interactive System of the Product Quality Control in Multiproduct Machinery Manufacturing G. Burdo, B. Palyukh, A. Sorokin Tver State Technical University, Tver, Russia e-mail:
[email protected]
Abstract. The article proves the relevance of building automated systems of product quality control in multiproduct machinery and instrumentation manufacturing industries. A set-theoretical model is developed to identify the mechanism of information exchange and decision-making in the automated quality control system at a multiproduct machinery manufacturing enterprise. The enterprise quality control system is considered as a complex system with a three-level hierarchy. Keywords: automated product quality control system, set-theoretical model, system analysis, artificial intelligence.
1 Introduction
The quality of products produced by enterprises exerts a direct influence on the development level of the national economy as a whole. This proposition has been repeatedly demonstrated not only in Russia but also in the advanced economies of Europe and Asia (Germany, Japan, South Korea) [1, 2, 3]. The key feature of modern machinery manufacturing is a wide range of output products focused on a target customer, together with a tightly scheduled preproduction stage. This is a good reason to define modern machinery manufacture as multiproduct production. Hence, in today's multiproduct machinery production it is necessary to pay much more attention to the entire product life cycle (PLC) [4] in order to ensure product quality, i.e. to control product quality at every stage and phase of manufacturing. For the purpose of quality control at all stages of the product life cycle it is essential to create a product quality control system (QCS) governed by the ISO 9000 standards. Given the need to introduce effective amendments to product performance and the wide product range of a modern machinery manufacturing enterprise, the creation of an automated QCS has become urgent.
2 A set-theoretical model of the quality control system
To identify the mechanism of information exchange and decision-making in the automated product QCS at the multiproduct machinery manufacturing enterprise, a set-theoretical model was developed. Let us consider the enterprise QCS as a complex system with sub-systems R0, {R1^1, R1^2, ..., R1^5}, {R2^1, R2^2, ..., R2^12} (see Fig. 1). Figure 1 represents the QCS control subsystem designated as R0 (the subsystem of the upper level). Subsystems of the subsequent levels are described below.
Fig. 1. Set-theoretical model of the QCS

Subsystem R1^1 performs quality control within the technical specifications (TS) development phase. It includes subsystem R2^1 - quality control at the stage of technical specifications (TS) development.
Subsystem R1^2 performs quality control within the design work (DW) phase with four life cycle stages. It fulfills the following functions: R2^2 - quality control at the stage of research (R&D); R2^3 - quality control at the stage of draft proposal (DP) development; R2^4 - quality control at the stage of draft design (DD) development; R2^5 - quality control at the stage of preliminary design (PD) development.
Subsystem R1^3 performs quality control within the work paper (WP) and maintenance documentation (MD) development phase with two life cycle stages. It fulfills the following functions: R2^6 - quality control at the stage of WP development; R2^7 - quality control at the stage of MD development.
Subsystem R1^4 performs quality control within the product manufacture and testing (M&T) phase with three life cycle stages: R2^8 - quality control at the stage of process engineering (PE); R2^9 - quality control at the stage of product manufacture (PMr); R2^10 - quality control at the stage of product testing (PT).
Subsystem R1^5 performs quality control within the product maintenance and utilization (M&U) phase with two life cycle stages. It fulfils the following functions: R2^11 - quality control at the stage of product maintenance (Mc); R2^12 - quality control at the stage of product utilization (PU).
Control subsystem R0 has six control functions. The first one is to control subsystem R1^1, that is, to find out and adjust product quality parameters within the TS phase: R0^1 : A × b1^1 → a1^1, where A - the control signal - is a set of requirements for product quality parameters at all the stages of the PLC under the enterprise management system; b1^1 is a set of actual product quality parameter values at the TS stage; a1^1 is a set of corrective actions on product quality parameters within the TS phase. The second function is to control subsystem R1^2, that is, to find out and adjust product quality parameters within the DW phase: R0^2 : A × b1^2 → a1^2, where b1^2 is a set of actual product quality parameter values within the DW phase; a1^2 is a set of corrective actions on product quality parameters within the DW phase. The third function is to control subsystem R1^3, i.e. to find out and adjust product quality parameters within the WP and MD development phase: R0^3 : A × b1^3 → a1^3, where b1^3 is a set of actual product quality parameter values within the WP and MD phase; a1^3 is a set of corrective actions on product quality parameters within the WP and MD development phase. The fourth function is to control subsystem R1^4, that is, to find out and adjust product quality parameters within the M&T phase: R0^4 : A × b1^4 → a1^4, where b1^4 is a set of actual product quality parameter values within the M&T phase; a1^4 is a set of corrective actions on product quality parameters within the M&T phase. The fifth function is to control subsystem R1^5, that is, to find out and adjust product quality parameters within the M&U phase: R0^5 : A × b1^5 → a1^5, where b1^5 is a set of actual product quality parameter values within the M&U phase; a1^5 is a set of corrective actions on product quality parameters within the M&U phase. At last, the sixth function is to find out the product quality parameters in all the PLC phases: R0^6 : A × b1^1 × b1^2 × b1^3 × b1^4 × b1^5 → B, where B - the output signal - is a set of end product quality parameters.
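The control functions above can be illustrated with a small set-based sketch. This is not the paper's implementation: the names and the set-difference rule for corrective actions are assumptions made purely for illustration of the mappings R0^i : A × b1^i → a1^i and R0^6 : A × b1^1 × ... × b1^5 → B.

```python
# Hypothetical sketch: the control signal A is a set of required quality
# parameters; each b_i is the set actually achieved in one PLC phase; the
# corrective-action set a_i contains whatever is required but not yet
# achieved (the set-difference rule is illustrative, not from the paper).

def control_phase(A, b_i):
    """R0^i : A x b1^i -> a1^i - corrective actions for one PLC phase."""
    return A - b_i

def output_quality(A, phase_values):
    """R0^6 : A x b1^1 x ... x b1^5 -> B - end-product quality parameters."""
    B = set()
    for b_i in phase_values:
        B |= b_i & A          # only requirement-relevant parameters reach B
    return B

A = {"tolerance", "hardness", "roughness"}
b_ts = {"tolerance"}                      # achieved at the TS phase
a_ts = control_phase(A, b_ts)             # still to be corrected
B = output_quality(A, [b_ts, {"hardness"}, set(), {"roughness"}, set()])
```

With these toy sets, the corrective actions for the TS phase are the two requirements not yet met, and B collects the requirement-relevant parameters achieved across all five phases.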
Subsystem R1^1 fulfils two control functions. The first one carries out immediate quality control at the stage of TS development: R1^1_1 : a1^1 × b2^1 → a2^1, where a2^1 is a set of requirements for product quality parameters at the TS stage; b2^1 is a set of actual (current) product quality parameter values at the TS stage. The second function is the transfer of the actual TS-stage product quality information to the higher control subsystem: R1^1_2 : b2^1 → b1^1.
Subsystem R1^2 fulfils five control functions. The first one carries out immediate product quality control at the R&D stage: R1^2_1 : a1^2 × b2^2 → a2^2, where a2^2 is a set of requirements for product quality parameters at the R&D stage; b2^2 is a set of actual (current) product quality parameter values at the R&D stage. The second function is the product quality control at the DP stage: R1^2_2 : a1^2 × b2^3 → a2^3, where a2^3 is a set of requirements for product quality parameters at the DP stage; b2^3 is a set of actual (current) product quality parameter values at the DP stage. The third function is the product quality control at the DD stage: R1^2_3 : a1^2 × b2^4 → a2^4, where a2^4 is a set of requirements for product quality parameters at the DD stage; b2^4 is a set of actual (current) product quality parameter values at the DD stage. The fourth function is the product quality control at the PD stage: R1^2_4 : a1^2 × b2^5 → a2^5, where a2^5 is a set of requirements for product quality parameters at the PD stage; b2^5 is a set of actual (current) product quality parameter values at the PD stage. The final fifth function is the synthesis of product quality information within the DW phase: R1^2_5 : b2^2 × b2^3 × b2^4 × b2^5 → b1^2.
Subsystem R1^3 is intended to fulfil three control functions. The first one is the immediate product quality control at the WP stage: R1^3_1 : a1^3 × b2^6 → a2^6, where a2^6 is a set of requirements for product quality parameters at the WP stage; b2^6 is a set of actual (current) product quality parameter values at the WP stage. The second function is the product quality control at the MD stage: R1^3_2 : a1^3 × b2^7 → a2^7, where a2^7 is a set of requirements for product quality parameters at the MD stage; b2^7 is a set of actual (current) product quality parameter values at the MD stage. The third function is the synthesis of product quality information within the WP and MD phase: R1^3_3 : b2^6 × b2^7 → b1^3.
Subsystem R1^4 fulfils four control functions. The first one is the product quality control at the PE stage: R1^4_1 : a1^4 × b2^8 → a2^8, where a2^8 is a set of requirements for product quality parameters at the PE stage; b2^8 is a set of actual (current) product quality parameter values at the PE stage. The second function is product manufacturing management: R1^4_2 : a1^4 × b2^9 → a2^9, where a2^9 is a set of requirements for product quality parameters at the product manufacturing stage; b2^9 is a set of actual (current) product quality parameter values at the product manufacturing stage. The third function is the product testing function: R1^4_3 : a1^4 × b2^10 → a2^10, where a2^10 is a set of requirements for product quality parameters at the PT stage; b2^10 is a set of actual (current) product quality parameter values. The fourth function is the synthesis of M&T information: R1^4_4 : b2^8 × b2^9 × b2^10 → b1^4.
The last subsystem R1^5 fulfils three control functions. The first one is the immediate product quality control at the product maintenance stage: R1^5_1 : a1^5 × b2^11 → a2^11, where a2^11 is a set of requirements for product quality parameters at the maintenance stage; b2^11 is a set of actual (current) product quality parameter values at the product maintenance stage. The second function is the product quality control at the utilization stage: R1^5_2 : a1^5 × b2^12 → a2^12, where a2^12 is a set of requirements for product quality parameters at the PU stage; b2^12 is a set of actual (current) product quality parameter values at the PU stage. The final third function is the transfer of product maintenance and utilization information: R1^5_3 : b2^11 × b2^12 → b1^5.
3 Model Parameters and Operators
All the sets of requirements for product quality parameters in the PLC phases (a1^1, a1^2, a1^3, a1^4, a1^5) are defined by the requirements for product quality specified by the organization management system. The subsets (b1^1, b1^2, b1^3, b1^4, b1^5) form the quality parameters of the complete product B. If quality is ensured, the set of parameters A is to include the set of parameters B. It is obvious that the information derived at every product life cycle stage and phase is to be estimated. The specific character of multiproduct engineering and the conditions of design research, technological preproduction and organization of production permit the analysis of only the most valuable, in terms of quality, data flows. The model operator functions can be implemented in automated and interactive modes.
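The inclusion condition ("the set of parameters A is to include the set of parameters B") and the choice between automated and interactive modes can be sketched as follows. The helper names and the rule for detecting a concrete variant description are hypothetical illustrations, not the paper's operators.

```python
def quality_ensured(A, B):
    # Quality is ensured when the required set A contains every parameter
    # of the complete product B (the inclusion condition of the model).
    return B <= A

def decision_mode(variant):
    # Illustrative rule: decisions are interactive at loosely formalised
    # stages, automatic once the variant description carries concrete
    # parameter values (None marks a parameter with no concrete value).
    if all(v is not None for v in variant.values()):
        return "automatic"
    return "interactive"

A = {"tolerance", "hardness", "roughness"}
B = {"tolerance", "hardness"}
ok = quality_ensured(A, B)
mode = decision_mode({"length": 120.0, "hardness": None})
```

Here `ok` is True because every end-product parameter is among the requirements, and the partially specified variant is routed to the interactive mode.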
If the formal representation of the evaluated product variant is deficient (the TS, R&D and DP stages), the decisions are made in an interactive mode. If a variant description contains concrete and specific parameters, the decision is made automatically.
4 Conclusion
The introduced set-theoretical model of the automated product quality control system makes it possible to define the rules of information exchange and organization. The following stage consisted in developing product quality evaluation indicators at the product life cycle stages and phases, as well as ways of making decisions by quality management system operators on the basis of a production model of knowledge. That allowed developing the algorithms and software environment of the automated product quality control system in multiproduct machinery manufacturing.
References:
1. Glichev, A.V. Fundamentals of Product Quality Control, 2nd edition, revised and enlarged, Moscow, RIA Standarty & Kachestvo, 2001, 424 p.
2. Gorbashko, Ye.A. Quality Control: Tutorial, St Petersburg, Piter, 2008, 384 p. (TUTORIAL series).
3. Chernikov, B.V.; Il'in, V.V. Control of Information Systems in Economics Quality: Tutorial, ed. by B.V. Chernikov, Moscow, Publishing House Forum, 2009, 240 p.
4. Kolchin, A.F.; Ovsyannikov, M.V.; Strekalov, A.F.; Sumarokov, S.V. Product Life Cycle Management, Moscow, Anarkhist, 2002, 304 p.
The development of graph approach to designing the intelligent computer-based training systems 1 A. Afanas`ev, N. Voit Ulyanovsk state technical university, Ulyanovsk, Russia e-mail:
[email protected],
[email protected]
Abstract. A new approach based on graph theory is developed. It contains a set of methodological principles, methods, models, algorithms and structures that represent intelligent computer-based training systems, improve the learning performance and quality, and reduce the learning time.
1 Introduction
The challenge of improving learning performance and quality is caused by:
• high unplanned additional costs of education,
• the fact that only a few students complete their education and become highly qualified,
• the development and implementation of software systems enabling the intensification of learning and increasing the involvement of students in the educational process.
Paradigms and specifics of learning, principles of educational environment design, standards (IEEE 1484 LTSA, SCORM), technologies (Computer-Based Training, Internet-Based Training, Web-Based Training) and platforms (e.g., Virtual Learning Environment) were put into practice in order to reduce the negative factors, improve technology and educational content, and implement e-learning technologies into the educational process and into the theory and practice of the development and use of intelligent training systems. With a view to the best learning performance and quality and to learning time reduction, particular attention has been paid to the challenges of intellectualization and the use of ICT in education up to the world standard. The importance of these challenges is motivated by the following points.
1. Attempts to automate and/or make automatic the generation of individual trainee trajectories.
2. The need to integrate the design, process and organizational experience of enterprises and organizations, together with computer systems and tools for learning, into the educational process.
3. Requirements for the large-scale application of systems of this type in practice.
4. A steady increase in the presence of distance learning in educational standards.
The lack of approaches to the development and use of intelligent training systems in the theory of computer-based learning environments has been a source of errors.
The reported study was partially supported by RFBR, research project No. 13-07-00483 a.
2 Paradigms of intelligent computer-based training systems
Key terms: knowledge, abilities, skills and competence of the trainee. Knowledge is a set of formalized information of definite content, which generates a whole vision of an object at a certain level, or a description of certain stages of processes, available to the trainee during the educational process. "Atomic knowledge" is a minimal unit of knowledge, presented in theory as a concept or elementary object, and in practice as an atomic operation of the student. Ability is the capacity to perform operations according to the trainee's knowledge. Skill is the ability to perform the right technique at the right time, effectively and efficiently. Competence is a set of knowledge, abilities and skills in a given data domain. Trainee level control is performed on the following parametric criteria: knowledge, abilities, skills and competence.
3 Methodological principles of the intelligent computer-based training systems development
The methodological basis for the development of intelligent computer-based training systems is an intelligent learning environment according to the IEEE 1484 LTSA standard [1]. The environment contains the system of principles "continuity - intellectuality - integration - liveliness" (Fig. 1). A graph representation of the studied object is shown in the lower part of the figure. The design principles of intelligent computer-based training systems are:
1. Continuity is the end-to-end learning process in schools, colleges, universities and advanced training institutes, with continuity of the best training methods.
2. Intellectuality is the automated and/or automatic design of individual trajectories for students.
3. Integration is the interaction of the design, technological and organizational experience of enterprises and organizations with computer systems and training tools.
4. Liveliness is the activity rate of use of the designed trajectories in the educational process.
These principles are implemented in the mathematical software of intelligent computer-based training systems, the main components of which are described below.
[Figure: the learning path from School through College, University and Advanced training institute to Job, with trainees, data domain interfaces (ICT, Mathematics, Computers) and the principles of continuity, intellectuality, integration (standards, methodics, technologies, experience, advanced experience) and liveliness (programs N, N+1; professional maturity and competence quality).]
Fig. 1. Methodological principles of the intelligent computer-based training systems development
4 Graph model of trainee
A graph model of the trainee was developed. It is a graph of the student's individual characteristics, or states: Knowledge, Ability, Skill and Competence. In order to estimate the values of these characteristics, the function Evaluation was introduced into the model. The set-theoretic form of the model is as follows:
Trainee_tm = (<Knowledge, Ability, Skill, Competence> | Evaluation),
where Knowledge ϵ C, Ability ϵ C, Skill ϵ C, Competence ϵ C, Evaluation: Knowledge × Ability × Skill → Competence, and C is a natural-number scale. The number of trainee states is equal to the product Knowledge × Ability × Skill. Competence is an integrated characteristic and has C states. The graph representation of the trainee model has the following system states:
Trainee = (S0, <(S1, …, S_KNOWLEDGE) × (S_KNOWLEDGE+1, …, S_KNOWLEDGE×ABILITY) × (S_KNOWLEDGE×ABILITY+1, …, S_KNOWLEDGE×ABILITY×SKILL)>, S_KNOWLEDGE×ABILITY×SKILL+1, …, S_COMPETENCE, X, Y, δ, γ),
where S is a state of the trainee competence level; X is the operational tasks (the input alphabet); Y is the operational decisions (the output alphabet); δ is the transition function; γ is the output function. A graphical representation of the competence states is shown in Fig. 2. The efficiency and effectiveness of the learning process depend on various specific parameters: learning time, completeness of learning, trainee competence level, etc. In order to determine the efficiency and effectiveness of the learning process, the parameter values are divided into sets of successful and unsuccessful values. Parameter values can be associated with a specific state of the trainee graph, identifying successful and unsuccessful states of the learning process.
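The trainee model above can be sketched in a few lines. The paper only states the signature Evaluation: Knowledge × Ability × Skill → Competence, so the averaging rule, the scale sizes and the state-numbering scheme below are all assumptions made for illustration.

```python
# Scale sizes are illustrative; the paper leaves C unspecified.
KNOWLEDGE, ABILITY, SKILL, COMPETENCE = 3, 3, 3, 5

def evaluation(knowledge, ability, skill):
    """Collapse the three characteristics into one integrated competence
    level (assumed rule: rounded mean, clamped to the competence scale)."""
    mean = (knowledge + ability + skill) / 3
    return min(COMPETENCE, max(1, round(mean)))

def state_index(knowledge, ability, skill):
    """Number the KNOWLEDGE*ABILITY*SKILL trainee states S1..S_KAS."""
    return ((knowledge - 1) * ABILITY + (ability - 1)) * SKILL + skill

total_states = KNOWLEDGE * ABILITY * SKILL   # number of trainee states
competence = evaluation(3, 2, 2)
```

As the model requires, the number of trainee states is the product of the three scales (27 here), and Evaluation maps each triple onto the integrated competence scale.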
[Figure: a state graph over the competence states S_KAS+1, S_KAS+2, S_KAS+3, S_KAS+4, …, S_C with transitions labelled x1/y1 … x11/y11.]
Fig. 2. Graph representation of the trainee model competence
5 Intelligent method of learning management
Intellectualization of the learning process is focused on automated and/or automatic learning management, which allows the required values of the individual trainee characteristics to be achieved successfully in a short period of time. Computer-based management built on the theories of Petri nets [2], information-logical models [3], KFS-graphs [4], etc. is widely used. However, an analysis of the learning process at the stage of studying the objects and processes of the data domain has not been carried out. The required developments will allow reducing the number of errors made by trainees during the course [5-8]. A new method based on the theory of deterministic automata, or graphs, was developed to achieve this goal. According to the method, the process of studying an object is a set of trajectories, or planned scenes, consisting of operations. The content abstraction of the studied object in the method can be represented as a formal system of rules with the following notation: if the event, then P; otherwise Q, where the event is an event or operation flag; P is an operation; Q is an event loop (event, P and Q are first-order predicates).
This formalism can be represented by an automaton A, which has a set of states of the studied object Q, an input alphabet of operations X, and an output alphabet of results (output data) of the object Y:
A = (Q, X, Y, δ, λ), δ: qi × xj → qn, λ: qi × xj → yk, qi ϵ Q, xj ϵ X, yk ϵ Y, i, j, k, n ϵ N.
Each studied object can be associated with a specific automaton A. The supervisory rules of Control check whether the study of the object is successful or not. The rules include the regulations for the transition of the automaton to another state in accordance with the intended learning task (for example, if an operation that does not apply to the task is performed, the automaton state does not change, and an error message and recommendations are displayed on the screen). The minimal set of supervisory rules is the maximum number of errors allowed per operation. Planned scenes with supervisory rules are presented as a system of the trainee model, the automaton of the studied object and the supervisory rules:
LearningScene = (Trainee, A | Control),
where LearningScene is the planned scenes, Trainee is the trainee model, and Control is the supervisory rules. The structure of a scene, or learning trajectory, is presented as a system which comprises a vector of three attributes (individual trainee characteristics per operation, automaton states, supervisory rules) and error messages:
Trajectory = (<Trainee, q, Control>_m, ErrorMessage),
where Trainee → ErrorCount, ErrorCount is the number of errors made by the trainee per operation, q is a state of the automaton, Control is the maximum number of allowable errors, ErrorMessage is the set of error messages, m ϵ number, and number is the number of operations per trajectory. The system algorithm allows or does not allow the trainee to continue the study according to the number of errors per operation. The method generates the learning trajectories automatically on the basis of the planned scenes. The planned scenes of an object can be represented graphically.
For example, a number of states can be represented as {q0, ..., q7}, operations - {x1, ..., x13}, the result in each state - {y1, ..., y13} (Fig.3).
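A planned scene with a supervisory rule can be sketched as a Mealy-style automaton run. The transition table below is a made-up fragment for illustration, not the actual scene of Fig. 3; the error-limit behaviour follows the supervisory rule described above (an out-of-scene operation leaves the state unchanged and produces an error message).

```python
# delta and lambda of the studied-object automaton in one table:
# (state, operation) -> (next state, result). Invented fragment.
DELTA = {("q0", "x1"): ("q1", "y1"),
         ("q1", "x2"): ("q2", "y2"),
         ("q2", "x3"): ("q3", "y3")}

def run_scene(operations, control=2):
    """Execute trainee operations under a supervisory rule: at most
    `control` errors are tolerated before the study is interrupted."""
    state, errors, outputs = "q0", 0, []
    for op in operations:
        if (state, op) in DELTA:
            state, out = DELTA[(state, op)]
            outputs.append(out)
        else:
            errors += 1                     # supervisory rule fires
            outputs.append("error: operation %s not in scene" % op)
            if errors > control:
                break                       # trainee may not continue
    return state, errors, outputs

state, errors, outs = run_scene(["x1", "x9", "x2", "x3"])
```

In this run the out-of-scene operation x9 triggers one error message without changing the state, after which the trainee completes the trajectory and reaches q3.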
[Figure: an automaton graph over the states {q0, …, q7} with transitions labelled x1/y1 … x13/y13.]
Fig. 3. Graph of planned scenes
6 Diagrammatic representation of the graph
The graph model of the trainee describes successful and unsuccessful states; its input receives operational tasks xi. The result of executing these tasks is a series of operations yj, which is the input for the graph of the studied object. The final result is an input xi for the graph of the trainee. Cyclic communication is thus provided between the trainee's activities and the studied object. The separation of states allows generating an appropriate trajectory for the trainee if he is in an unsuccessful state. The interaction of the graph models of the trainee and the studied object is shown in Fig. 4.
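The cyclic coupling of the two graphs can be sketched as one step of the loop: the trainee graph turns a task into an operation, the studied-object automaton consumes the operation and returns a result, which becomes the next input for the trainee. Both transition tables are invented fragments, not the graphs of Fig. 4.

```python
# Invented one-transition fragments of the two coupled graph models.
TRAINEE = {("S1", "task1"): ("S2", "op1")}    # delta/gamma of the trainee
OBJECT = {("q0", "op1"): ("q1", "result1")}   # delta/lambda of the object

def step(trainee_state, object_state, task):
    """One cycle: trainee emits an operation, the object answers with a
    result that feeds back as the trainee's next task."""
    trainee_state, operation = TRAINEE[(trainee_state, task)]
    object_state, result = OBJECT[(object_state, operation)]
    return trainee_state, object_state, result

s, q, next_task = step("S1", "q0", "task1")
```

The returned result closes the loop: it is exactly the value that would be fed back into the trainee graph on the next cycle.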
[Figure: the competence-state graph of the trainee (states S_KAS+1 … S_C) coupled with the studied-object automaton (states q0 … q7); the outputs y of one graph feed the inputs x of the other.]
Fig. 4. The interaction of graph models of the trainee and the studied object
7 Conclusion
The increased success of the development of intelligent learning systems using the graph approach is due to the use of diagram schemes and standards in the operational considerations of the developers. The possible modes of use of the developed intelligent training systems based on the graph approach are: emulation, training and control. In the emulation mode the trainee can operate according to the planned scenes. In the training mode the system leads the trainee along the appropriate trajectory and provides the right solution for the task. In the control mode the system counts the number of errors the trainee made while completing the task according to the trajectory, and provides the recommendations and errors.
References:
1. IEEE P1484.1/D8, 2001-04-06. Draft Standard for Learning Technology - Learning Technology Systems Architecture (LTSA).
2. Dorrer G. A., Rudakova G. M. Modelirovanie processa interaktivnogo obucheniya na baze formalizmov raskrashennyh setei Petri [Modeling of the interactive learning process based on coloured Petri net formalisms] // Vestnik KrasGU. 2004. S. 29-35.
3. Bashmakov A. I., Bashmakov I. A. Razrabotka komp'yuternyh uchebnikov i obuchayushih sistem [Development of computer textbooks and training systems]. M.: Informacionno-izdatel'skii dom «Filin», 2003. 616 s.
4. Kurganskaya G. S. Sistema differencirovannogo obucheniya cherez Internet na osnove KFS modeli predstavleniya znanii [A system of differentiated learning via the Internet based on the KFS knowledge representation model] // Vychislitel'nye tehnologii. 2001. Tom 6, nomer 4. S. 31-39.
5. Graficheskie grammatiki [Graph grammars]. http://gg.ulstu.ru/articles/ (accessed 17.08.2013).
6. Afanas'ev A. N., Sharov O. G. Sintaksicheski-orientirovannaya realizaciya graficheskih yazykov na osnove avtomatnyh graficheskih grammatik [Syntax-oriented implementation of graphical languages based on automaton graph grammars] // Programmirovanie. 2005. № 6. S. 56-66.
7. Intellektual'naya platforma obucheniya v obrazovanii, tehnike i ekonomike dlya mobil'nogo oborudovaniya [An intelligent learning platform for education, engineering and economics on mobile devices]. http://gc.ulstu.ru/ (accessed 17.08.2013).
8. Afanas'ev A. N., Voit N. N. Modeli i metody intellektualizacii obrazovatel'noi sredy na baze Moodle [Models and methods of intellectualization of a Moodle-based educational environment] // Materialy V Mezhdunarodnoi nauchno-prakticheskoi konferencii «Elektronnaya Kazan' - 2013» (IKT v obrazovanii: tehnologicheskie, metodicheskie i organizacionnye aspekty ih ispol'zovaniya). Kazan'. Vypusk № 1 (11), Chast' I. 2013. S. 43-49.
Import Data Module of the “ULSTU Curricula System” M. Guseva Ulyanovsk state technical university, Ulyanovsk, Russia e-mail:
[email protected]
Abstract. In this paper an import data module of the "ULSTU Curricula System" (UCS) is suggested. The main purpose of the module is to import data from an XML file into SQL tables.
Introduction
Nowadays the problem of educational process automation is quite obvious. Several solutions to the problem have already been developed in universities. The most common is the GosInsp system, but it has some disadvantages: the program works with single files, which causes problems with integrating data across several computers. In ULSTU the "ULSTU Curricula System" was developed. It uses a centralized MySQL database to solve the integration problem. However, the UCS is not yet as commonly used as the GosInsp system, which is why the problem arises of connecting GosInsp data (exported to XML files) with the tables of the database. The main purpose of the described module is to analyze the exported XML file and import its data, setting up the correspondence between them.
1 Characteristic of the XML file data
The structure of the XML file exported from the GosInsp system is rather simple. All information is located in attribute values; elements only serve to group it. But there is a lot of service information that is not necessary in the UCS database. Table 1 presents only the valuable elements and attributes. All names of elements and attributes are in Russian, as they are in the XML file; their estimated meaning in English is given in brackets.
Table 1. Partial description of the XML file

№ | Element | Necessary attributes
1 | «Титул» (Title) | «ПоследнийШифр» (Last key), «ГодНачалаПодготовки» (Year of admission), «КодКафедры» (Code of chair), «Факультет» (Name of faculty), «ДатаПриложения» (Date of enclosure)
2 | «Утверждение» (Adoption) | «Дата» (Date)
3 | «Квалификация» (Qualification) | «Название» (Name), «СрокОбучения» (Term of training)
4 | «Специальность» (Specialty) | «Название» (Name)
5 | «Курс» (Course) | «Ном» (Number), «Студентов» (Count of students), «Групп» (Count of groups), «График» (Schedule)
6 | «Строка» (String) | «Дис» (Name of subject), «ПодлежитИзучению» (Hours), «СР» (Homework), «Кафедра» (Chair), «ИдетификаторДисциплины» (Subject identification), «СемЭкз» (Semester of examination), «СемЗач» (Semester of test), «СемКР» (Semester of course work), «СемКП» (Semester of course project)
7 | «Сем» (Semester) | «Ном» (Number), «Лек» (Number of lectures), «Пр» (Number of practice), «Лаб» (Number of labs)

2 Relation of the XML file data and the tables of UCS
While analyzing the XML file data and the tables of UCS, three groups of data were revealed.
− First, there is data which is the same in both the XML file and the tables. It does not need any conversion; for example, Year of admission, Name of the subject or Number of the course.
− Second, there are some fields in the tables that cannot be obtained without parsing. For example, data such as the number of lectures, practice and lab hours is given in the file for the whole semester, whereas in the tables it is stored per week.
− The last kind of data is the most complex, because it is not given in the file at all but is necessary in the tables for various reasons; examples are the name of the curriculum, the name of the group of students and some others.
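The three groups of data can be sketched with a minimal parse of a GosInsp-style fragment. Element and attribute names follow Table 1; the file fragment itself, the weeks-per-semester constant and the record layout are assumptions made for illustration, not the module's actual code.

```python
import xml.etree.ElementTree as ET

WEEKS_PER_SEMESTER = 17   # assumed; the real value comes from the schedule

# Invented fragment with the element/attribute names of Table 1.
XML = """<Титул ГодНачалаПодготовки="2013">
  <Строка Дис="Mathematics"><Сем Ном="1" Лек="34" Пр="17" Лаб="0"/></Строка>
</Титул>"""

root = ET.fromstring(XML)
record = {
    # group 1: values copied directly from the file
    "year_of_admission": root.get("ГодНачалаПодготовки"),
    "subject": root.find("Строка").get("Дис"),
    # group 2: derived values - per-semester hours converted to per week
    "lectures_per_week": int(root.find(".//Сем").get("Лек")) // WEEKS_PER_SEMESTER,
    # group 3: absent from the file - to be supplied by the operator dialog
    "curriculum_name": None,
}
```

The `None` slot is precisely the third kind of data: the import module cannot derive it from the file, so the dialog described below asks the operator for it.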
3 Module development
The central idea of the module is to present a dialog for the third kind of data. The developed dialog contains 2 main parts: the data from the XML file (to reduce human errors) and the dialog where an operator can modify the proposed information. An example of the dialog is given in picture 1.
Pic.1. Dialog for the developed module
The module was built with PHP and contains 450 lines of code.
The ontology of monitoring the documents for planning learning process A. Danchenko Volodymyr Dahl East Ukrainian National University, Lugansk, Ukraine e-mail:
[email protected]
Abstract. In this paper the list of the main structural elements of documents for planning the learning process is constructed. The questions required of the ontology model of monitoring are presented. An ontology model is developed that allows using semantic knowledge about the subject area and metadata about relations between the parts of a course.
1 Introduction
The implementation of the basic concepts of the Bologna process is impossible without the active use of learning resources in the educational process. The assurance of high-quality education requires continuous monitoring of learning resources throughout their life cycle. Providing high-quality learning resources, in a situation of wide accessibility of higher education and rapid obsolescence of information, requires organizing automated monitoring, analysis and evaluation of the educational content of learning systems using new information technologies. It is very important to take into account the quality of the potential for achieving the goals of education. This potential is declared in course curriculums, educational standards and job training programs in the initial phase of course development. Curriculum assessment is a process of gathering and analyzing information from multiple sources in order to improve student learning in sustainable ways. The content standards are the curriculum expectations identified for every subject and discipline. They describe the knowledge and skills students are expected to develop and demonstrate in their class work, on tests, and in the various other activities on which their achievement is assessed and evaluated. It is impossible to automate the evaluation of curriculum quality without using semantic knowledge about the subject area and metadata about the relations between the parts of courses. This article concerns the design of a conceptual model (ontology) of curriculums and syllabuses in the context of evaluating the quality of the documents for planning the learning process.
The ontology of monitoring the documents for planning learning process
2 The analysis of the structural elements of the documents for planning learning process and the requirements to the ontology
This article concerns two types of documents for planning the learning process: curriculums and syllabuses. Curriculums describe in general terms the knowledge and skills that students are expected to demonstrate by the end of each grade or course, and contain the list of general modules and references. Curriculums must be created (updated) every 3-5 years. Syllabuses are based on the curriculum and describe the expected knowledge and skills in greater detail. Syllabuses must be redeveloped every year. The list of elements of curriculums is partially represented in Table 1.
Table 1
The elements of curriculums

Element | Example
Course id | 6.2.02
Name of course | Information processing methods
Degree | Bachelor's degree
Branch | System engineering
Subject | Statistical analysis methods for processing data using the mathematical packages MAXIMA and STATISTICA and the integrated development environment IDE NetBeans
Relationship between courses | Probability theory, Mathematics
Module | Distribution of probability theory
Module topic | Descriptive statistics
Module content | Using the mathematical packages MAXIMA and STATISTICA for the calculation of descriptive statistics
Knowledge | Methods of interval evaluation of the parameters of a distribution
Skills | Perform numerical estimation of the distribution parameters by methods of mathematical statistics over an array of statistical data
Hours/Credits | 144/4 credits ECTS
… | …
The analysis of the structural elements of curriculums has shown that a curriculum must contain the following elements [1]: a code of the course, the name of the course, a branch, a degree, a subject of the course, a main purpose of the course, sets of theoretical tasks and skills of the course, descriptions of the content and the structure of the course, the ways of organizing control, data about the authors, a number of hours, references, and relationships between courses. Syllabuses must contain details about activities (lectures, practices and self-studying) and the purposes of the activities. For the task of evaluating the quality of the documents for planning the learning process it is also important to take into account the relationships between the purposes of activities. So the ontology must answer the following questions:
• What is the degree of detail of the didactical purposes?
• Completeness of references: how many purposes have references?
• What about the actuality of references (not more than 30% of the references should be more than 5 years old)?
A. Danchenko
• What is the ratio of the number of purposes to the number of credits of the course?
• What is the ratio of the number of modules to a credit?
• What about the coherence of the course elements?
• What is the level of relationships between different courses?
• Which purposes of the current/any course are a basis for the current didactical purpose?

3 The ontology model of documentation for planning learning process
The ontology model of the documentation for planning the learning process is developed using the web ontology language OWL [2,3]. The classes of the ontology model in Manchester OWL syntax are:

    Class: Author
    Class: Purpose
    Class: pShouldKnow SubClassOf: Purpose
    Class: pMustBeAbleTo SubClassOf: Purpose
    Class: pMainPurpose SubClassOf: Purpose
    Class: Branch
    Class: Speciality SubClassOf: Branch
    Class: Reference
    Class: ReferenceMain SubClassOf: Reference
    Class: ReferencePadding SubClassOf: Reference
    Class: DiagnosticOfPerformance
    Class: Curriculum
    Class: Module SubClassOf: Curriculum
    Class: ModuleTopic SubClassOf: Module
    Class: ModuleTopicTask SubClassOf: ModuleTopic
    Class: Syllabus
    Class: ModuleContent SubClassOf: Syllabus
    Class: Practice SubClassOf: ModuleContent
    Class: Lecture SubClassOf: ModuleContent
    Class: SelfStudying SubClassOf: ModuleContent

The relations between the elements of curriculums and syllabuses are represented in a semantic graph (fig. 1). The class Curriculum contains the hierarchy of subclasses Module → ModuleTopic → ModuleTopicTask. The class Syllabus contains the subclass ModuleContent. ModuleContent contains subclasses for declaring the activities: the classes Lecture, Practice and SelfStudying.
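For illustration, the subclass hierarchy above can be checked programmatically. The following pure-Python sketch (not the authors' OWL tooling; it merely mirrors the Manchester OWL listing) encodes the direct superclass of each class and performs a transitive subclass test:

```python
# Illustrative sketch of the ontology class hierarchy (not the authors' OWL code).
# Each class maps to its direct superclass, mirroring the Manchester OWL listing.
SUBCLASS_OF = {
    "pShouldKnow": "Purpose", "pMustBeAbleTo": "Purpose", "pMainPurpose": "Purpose",
    "Speciality": "Branch",
    "ReferenceMain": "Reference", "ReferencePadding": "Reference",
    "Module": "Curriculum", "ModuleTopic": "Module", "ModuleTopicTask": "ModuleTopic",
    "ModuleContent": "Syllabus",
    "Practice": "ModuleContent", "Lecture": "ModuleContent", "SelfStudying": "ModuleContent",
}

def is_subclass_of(cls: str, ancestor: str) -> bool:
    """Transitive subclass check over the hierarchy."""
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        if cls == ancestor:
            return True
    return False

print(is_subclass_of("ModuleTopicTask", "Curriculum"))  # True
print(is_subclass_of("Lecture", "Syllabus"))            # True
```

In the real model such queries are answered by an OWL reasoner; the sketch only shows the intended Module → ModuleTopic → ModuleTopicTask and Syllabus → ModuleContent hierarchies.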
Figure 1. Relation between elements of curriculums and syllabuses
The main purpose of the ontology is describing the semantic relations between the parts of a course or courses. There are three types of relations: "hasPart-isPartOf", "hasSyllabus-isSyllabusOf" and "hasActivity-isActivityOf". The object properties for describing these relations in Manchester OWL syntax are:

    ObjectProperty: isActivityOf
        Domain: SelfStudying, Lecture, Practice
        Range: ModuleTopic, ModuleTopicTask, Module
        InverseOf: hasActivity
    ObjectProperty: isSyllabusOf
        Domain: Syllabus
        Range: Curriculum
        InverseOf: hasSyllabus

The relations between the elements of curriculums and purposes are represented in a semantic graph (fig. 2). The class Purpose contains pMainPurpose, which describes the main didactical purpose of a curriculum, and pShouldKnow and pMustBeAbleTo for describing the requirements for skills and theoretical knowledge. The relation "hasMainPurpose-isMainPurposeOf" describes the relation between instances of Curriculum and pMainPurpose. The relations "isPrpSkillOf-hasPrpSkill" and "isPrpKnowledge-hasPrpKnowledge" set the purposes for the subclasses of Curriculum and Syllabus. The relation "isBasisOf-hasBasis" describes the coherence of the elements of courses and is also used for checking the trajectory of learning.
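Because "isBasisOf-hasBasis" is declared transitive, it can support a learning-trajectory check such as detecting circular prerequisites among purposes. A minimal sketch, with hypothetical purpose names and plain Python in place of an OWL reasoner:

```python
# Sketch: detect cycles in the transitive "hasBasis" relation between purposes.
# Purpose names are hypothetical; a cycle means a broken learning trajectory.
from collections import defaultdict

def find_cycle(has_basis: dict) -> bool:
    """DFS over hasBasis edges; True if any purpose transitively requires itself."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def visit(p) -> bool:
        color[p] = GRAY
        for q in has_basis.get(p, ()):
            if color[q] == GRAY or (color[q] == WHITE and visit(q)):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in list(has_basis))

edges = {"interval_estimation": ["descriptive_statistics"],
         "descriptive_statistics": ["probability_distributions"]}
print(find_cycle(edges))                       # False: a valid trajectory
edges["probability_distributions"] = ["interval_estimation"]
print(find_cycle(edges))                       # True: circular prerequisites
```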
Figure 2. Relations between elements of Curriculum and Purpose
The object properties for describing the relations between curriculums/syllabuses and the other classes are presented in Table 2.

Table 2
The object properties of the ontology

Title | Domain | Range | Description
hasAuthor | ModuleTopic, ModuleTopicTask, Module, Curriculum | Author | InverseOf: isAuthorOf; describes the authors of different parts of a course
hasPrpSkill | ModuleTopic, ModuleTopicTask, SelfStudying, Module, Lecture, Practice | pMustBeAbleTo | InverseOf: isPrpSkillOf; describes the requirements to the skill result
hasPrpKnowledge | ModuleTopic, ModuleTopicTask, SelfStudying, Module, Lecture, Practice | pShouldKnow | InverseOf: isPrpKnowledge
hasMainPurpose | Curriculum, Syllabus | pMainPurpose | InverseOf: isMainPurposeOf
hasRef | ModuleTopic, SelfStudying, ModuleTopicTask, Module, Lecture, Practice, Curriculum | ReferenceMain, ReferencePadding | InverseOf: isRefOf
hasBranch | Curriculum | Branch | InverseOf: isBranchOf
hasBasis | pShouldKnow, pMustBeAbleTo | pShouldKnow, pMustBeAbleTo | Characteristics: Transitive; InverseOf: isBasisOf
hasDiagnostic | ModuleTopic, SelfStudying, ModuleTopicTask, Module, Lecture, Practice, Curriculum | DiagnosticOfPerformance | InverseOf: DiagnosticOf
The ontology model is verified using the embedded tools of Protégé 4.2 [4], the Jena library and ARQ (ARQ is an implementation of SPARQL support for Jena that allows using SQL features in SPARQL queries) [5,6].
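In the authors' setup the quality checks themselves are run as SPARQL queries over the ontology. As a hedged illustration of one of the monitoring questions from Section 2 (the reference-actuality rule), a pure-Python sketch with invented publication years might look like:

```python
# Sketch of the reference-actuality rule from Section 2: no more than 30% of the
# references attached to a course may be older than 5 years. Years are invented.
def references_actual(ref_years: list, current_year: int = 2013,
                      max_age: int = 5, max_share: float = 0.30) -> bool:
    """True if the share of references older than max_age is within max_share."""
    if not ref_years:
        return False  # an empty reference list also fails the completeness check
    outdated = sum(1 for y in ref_years if current_year - y > max_age)
    return outdated / len(ref_years) <= max_share

print(references_actual([2012, 2010, 2009, 2004]))  # True: 1 of 4 is outdated
print(references_actual([2001, 2003, 2012]))        # False: 2 of 3 are outdated
```

In practice the publication years would be selected from reference instances (ReferenceMain, ReferencePadding) by a SPARQL query rather than passed in by hand.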
4 Conclusion
This paper presents the ontology model for monitoring the documents for planning the learning process, which allows using semantic knowledge about the subject area and metadata about the relations between the parts of a course. The ontology model is developed using the web ontology language OWL. The verification of the ontology model is made by SPARQL queries using the Jena and ARQ libraries.

References:
1. Danchenko, A. Development of ontology of e-learning courses. Computer Science and Engineering. Collection of scientific papers. Ulyanovsk: UlSTU, 2013. pp. 51-54.
2. RDF Vocabulary Description Language 1.0: RDF Schema. Available at: http://www.w3.org/TR/rdf-schema/#ch_comment.
3. Herman I. Web ontology language (OWL). Available at: http://www.w3.org/2004/OWL/.
4. Welcome to Protégé. Stanford Center for Biomedical Informatics Research, 2013. Available at: http://protege.stanford.edu.
5. Matthew Horridge. A Practical Guide To Building OWL Ontologies Using Protege 4 and CO-ODE Tools. Edition 1.2. The University Of Manchester, 2009. Available at: http://www.co-ode.org.
6. SPARQL Query Language for RDF. W3C Recommendation, 15 January 2008. Available at: http://www.w3.org/TR/2008/REC-rdf-sparql-query-20080115/.
Ontology method of semantic analysis and control of UML diagrams
R. Gainullin
Ulyanovsk State Technical University, Ulyanovsk, Russia
e-mail:
[email protected]
Abstract. In this paper we consider the problems of analysis and control of the diagram patterns used in the design and implementation of complex automation systems. The analysis is based on automaton graphical RV-grammars. For teamwork, an ontological approach to the analysis and control of complexes of diagram schemes is proposed.
1 Introduction
In the design process of automated systems (AS), an important task is the control of the graphical (diagram) workflow specifications. Control consists in testing individual diagrams and analyzing complexes of diagrams for syntax and semantic errors. Modern AS design tools, for example Rational Architect (www.ibm.com/software/awdtools/systemarchitect/) and ARIS Toolset (http://www.ariscommunity.com/aris-express), do not provide effective analysis and monitoring of diagram workflows, and integrated control of collectively designed diagram sets is not done at all. As a result, projects at the stage of conceptual design may contain errors, which are the most "expensive" ones [1]. To solve the problem of diagram control, a syntax-oriented analysis approach based on the family of automaton RV-grammars is proposed [2,3]. A method for semantic control is based on the design ontology. The material is presented using the example of the UML language.
2 Multi-level grammar
Team design makes new demands on the correctness of the diagrams. In collective design one has to consider all possible types of UML diagrams, and the number of terms in the grammar increases many times over. A classical grammar becomes too cumbersome, and its development is extremely difficult. To address these shortcomings, multi-level automaton grammars are proposed. Consider a multi-level system of interacting RV-grammars:

    RVM = RVi(RV1, ..., RVn)
The reported study was partially supported by RFBR, research project No. 13-07-00483 a.
Grammar RVi contains grammar RVj as one of its states. Grammar RVj can itself also be composite. The grammar RVM receives characters of the terminal tape alphabet as input and transmits them to the appropriate level. The elements that transfer the grammar to another level we call subterms. The description of the RVM-grammar then becomes

    G = (V, Σ, Σ̃, Σ̄, R, r0),

where Σ̃ is the set of subterms, the elements of the grammar that transfer the automaton to the lower level. Each production for a subterm must additionally indicate the complex that is the receiver of the transition caused by the subterm. The output of a production Pij is extended accordingly: the complex recorded under the arrow is the successor complex of the subgrammar, and the production takes the form

    at: Wν(γ1, ..., γn) / rl → rm

Such a production should be read as follows: if the subterm at is read from the tape, the operations Wν(γ1, ..., γn) are performed on the memory, the return complex rl is written to the stack, and the automaton moves to the complex rm. A complex is the initial state set of a subgrammar; the transition to it initializes the embedded grammar. Upon reaching the final element of the subgrammar, the return complex is read from the stack, and the work goes on applying the rules of that complex. Consider the RVM-grammar (Fig. 1) for an activity diagram of the UML language. Grammar RV0 is the primary one. The states A0 of each automaton denote the initial state of each grammar. When a subgrammar is initialized, it receives the element under the influence of which the transition to the composite state occurred. In the grammar RV0, two subterms lead to the composite states r3 and r5.
Figure 1. Multi-level grammar of activity diagrams

The first scheme is the grammar in general terms and determines the sequence of subgrammar calls. Consider the production for the subterm P:

    P: W1(2l2) / r3 → r6

This production means that for the subterm P the operation W1 records the number of outgoing links, 2. When the production fires, the complex r3 is written to the return stack and the automaton moves to the set r6. The above example allows one to analyze an activity diagram, but the approach is more often used for the analysis and control of diagram complexes in collective design.
3 Diagram control in collective design
In group design, the ontological consistency of the complex of designed diagrams is important. Designers making diagrams may forget to take into account earlier decisions (reflected in the diagrams). For example, a designer can specify a component of the system and lose sight of the fact that at previous stages such a component was implemented using other technologies. For the analysis of semantic correctness it is proposed to develop a multi-level grammar. The upper level of the compositional grammar is the grammar of use case diagrams, because the development of a system should start with this diagram (according to the RUP methodology). During the analysis, semantic information about the domain is built up, and each new diagram is added to the overall structure under a consistent expansion of the domain concepts. During the construction of the first diagrams, only the semantic consistency within each diagram is checked: the semantic concepts are validated in pairs. When a new diagram is added, the consistency and coherence of the integrated model of the projected automated system is verified. Semantic consistency is analyzed on the basis of the textual component of a diagram. By the textual component we understand the texts belonging to the blocks and connections of the diagrams. To test the integrated model, a graph of semantic relationships between the elements of the automated system is constructed. To solve this problem, an adapted method of lexical-syntactic patterns was selected. Currently there are two broad classes of methods for extracting knowledge from a corpus: statistical methods and template-based methods. Each of them has both advantages and disadvantages. The statistical methods are mainly focused on processing a data stream; the methods of this class work better, the larger the amount of text studied. Template-based methods use information about the target language to retrieve data. The methods of this class are characterized by the absence of dependence on the volume of the analyzed data. Since the diagnosis of errors is important from the earliest stages, a method from the second group, the method of lexical-syntactic patterns, was selected. We briefly describe the classical method of lexical-syntactic patterns [4]. Lexical-syntactic patterns allow us to construct a semantic structure that
corresponds to the conceptual content of text units. It uses the features of the language in which the text is presented. When analyzing the texts, the compatibility relationship of text units, collocation, is used. This relationship implies fixing certain syntactic and grammatical properties of the lexical units. Each of the analyzed diagrams forms its own mini-ontology and must be consistent with the general domain ontology. Combining ontologies consists of the following steps:
• ontology mapping;
• ontology alignment;
• ontology integration.
Ontology mapping is the process of determining the connecting links. At this stage, the search is made for exact matches of text units and for matches in a hierarchical relationship. The latter means matching a pair of text units that are in a hierarchical relationship (e.g., part-whole). Examples of such text units are ship - vessel, electronic computer - personal computer. Ontology alignment is the process of finding similar places in the merged ontologies. The pairs identified in the ontology mapping step undergo further analysis. The purpose of this analysis is to find groups of pairs similar to each other. The degree of similarity is determined by the applicability of a pair to the same rules in the ontology. Merging ontologies [5] is the process of creating one ontology from the two original ones. The unified ontology is based on the groups identified at the earlier stage. The common parts of the previously identified groups are merged. If no uniting link can be found at the merging step, an error warning is issued. This warning may signal too much fragmentation of the designed diagrams. To avoid errors of this type, one can either refine an existing diagram or add a new connection. Consider an example of the ontological transformation. As the basic diagram, take the simple use case diagram in Fig. 2.
Figure 2. Use case diagram

For this diagram the following patterns apply:
• block {type: actor} -> Resource > Actor [value: name]
• block {type: usecase} & block {value: name} == verb -> Action [value: name]
• block {type: usecase} & block {value: name} == noun -> Resource > Action [value: name]
Consider what the pattern block {type: usecase} & block {value: name} == verb -> Action [value: name] indicates. On the left side a condition is written that consists of two parts united by &. This means that the pattern is applicable when both parts are satisfied. The first part of the condition means that the rule applies to blocks of the use case type. The second part requires that the text in the block is a verb. As a result, the rule applies to a verb in a use case block. The right-hand side names the place in the hierarchy that the matched value should take. In this case, the rule means that the fragment isolated at the first phase is to be placed in the ontological network under the notion Action. The result is the network setup shown in the right-hand side of Figure 3. Suppose that at the second stage the following diagram was parsed and the result was the following semantic network:
When merging the ontologies, the following supporting elements were identified: Action, Content, Actor. The result of the union is represented in the following figure:
In the analysis of the resulting network, the diagrams decomposed into two parts. This result means that the diagrams describe different alternatives and cannot be matched. It may be caused by the introduction of a diagram from another project, by an improperly prepared diagram, or by a lack of relationships between diagrams. In each of these cases the developers get an error message, and the second diagram is marked as erroneous. The designer may refuse to use the second diagram or may extend it so that it becomes a semantically harmonized diagram.
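The fragmentation check described above amounts to a connected-components test over the merged concept graph: more than one component means the diagrams share no uniting link. A sketch with invented concept names:

```python
# Sketch: after merging per-diagram mini-ontologies, count connected components.
# More than one component means the diagrams share no uniting link (an error).
from collections import defaultdict

def components(edges):
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, count = set(), 0
    for node in graph:
        if node not in seen:
            count += 1
            stack = [node]
            while stack:            # depth-first walk of one component
                n = stack.pop()
                if n not in seen:
                    seen.add(n)
                    stack.extend(graph[n] - seen)
    return count

merged = [("Actor", "Action"), ("Action", "Content"),   # from diagram 1
          ("Report", "Template")]                        # from diagram 2
print(components(merged))  # 2: diagrams are not semantically connected
merged.append(("Content", "Report"))                     # a uniting link added
print(components(merged))  # 1: consistent merged ontology
```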
4 Conclusion
In this article, methods of analysis and control of diagram schemes are considered, and a method for controlling the ontological correctness of the diagram complex of an engineered automated system is proposed.

References:
1. Lipaev V. Software quality. Moscow: Finances and Statistics, 1983. 263 p.
2. Afanasyev A., Sharov O. Neutralization of syntax errors in a graphical programming language. Programming. 2008. No. 1. pp. 61-66.
3. Afanasiev A., Sharov O. Methods and means of transmission of graphic diagrams. Programming. 2011. No. 3. pp. 65-76.
4. Hearst M. Automatic Acquisition of Hyponyms from Large Text Corpora. Proceedings of the Fourteenth International Conference on Computational Linguistics, Nantes, France, July 1992.
5. Ehrig M., Sure Y. Ontology mapping - an integrated approach. In Bussler, C.; Davis, J.; Fensel, D.; Studer, R., eds., Proceedings of the First European Semantic Web Symposium, volume 3053 of Lecture Notes in Computer Science, pp. 76-91. Heraklion, Greece: Springer Verlag, 2004.
Multistep algorithm of alternatives search in an information catalogue B. Paliukh, I. Egereva Tver state technical university, Tver, Russia e-mail:
[email protected],
[email protected]
Abstract. This article substantiates the efficient use of fuzzy sets for service search in registers and catalogues; we set a target and provide an algorithm for multistep decision making under fuzzy conditions.
Nowadays users quite often use different functional subsystems in order to optimize the formation of task-solving complexes. There is a great number of solvers which are barely accessible due to procedural complications. In order to allow a user to apply current technologies, familiarize themselves with the practice of managerial problem solving, and evaluate the features and their applicability to specific tasks, catalogues are being created in different subject areas, such as: Mathtree, for websites about Mathematics, developed by the A. P. Ershov Institute of Informatics of the Russian Academy of Sciences; the WolframMathWorld project, which allows not only data search but also task solving; the NigmaRF project, the catalogue of Biology services; programmableweb.com, and others. Despite the fact that data catalogues are widely used in different areas, data search, which is considered the major part of data systematization, is done with strictly formulated search requests that result in a search results list. However, quite often it is impossible to state a specific search request: keywords do not match service description tags, and it is impossible to state the search criteria specifically. Users have to choose a search area to find a result that matches their task, which they often cannot do because they do not know, for example, which subject areas of Mathematics a service developer has used. To avoid restricting access to the full information available in such catalogues, it is recommended to provide qualitative system access; data of this kind is better stated through fuzzy sets. Our targets are as follows: there is a system that has a catalogue of services for a manager. A manager, the user of the information system, states a request with a problem that is to be solved. The system addresses the catalogue set of services in order to make a ranked list of alternatives (figure 1). It is also assumed that the number of steps is fixed.
Figure 1. Multistep decision making, general view.

The task is to develop a data management system allowing the formation of a search results list that matches a search request as closely as possible, provided that the number of steps is fixed. We now address the problem-solving algorithm. Let X={x} be a service catalogue that incorporates the services {x1, x2, ..., xn}. A fuzzy set Yx={yx} holds the semantic description of every service in the catalogue, and a fuzzy set Hx={hx} holds the description of the service parameters (figure 2).
Figure 2. Brief scheme of the catalogue structure.

A request (figure 3) is stated semantically and comprises the sets Zx1={zx1}, the set of search tags; Zx2={zx2}, an extra set of synonyms; and Zx3={zx3}, an additional set of words similar or close in meaning, which together form the set:

    Zx = Zx1 ∪ Zx2 ∪ Zx3    (1)
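The formation of the request set Zx and a match index over service descriptions can be sketched as simple set operations; the service names, tags and descriptions here are invented for illustration:

```python
# Sketch: form the request set Zx = Zx1 | Zx2 | Zx3 and compute a match index
# for each service as the share of request words found in its description.
def match_index(description: set, zx: set) -> float:
    return len(description & zx) / len(zx) if zx else 0.0

zx1 = {"regression", "analysis"}          # Zx1: search tags
zx2 = {"fitting"}                         # Zx2: synonyms
zx3 = {"statistics"}                      # Zx3: similar words
zx = zx1 | zx2 | zx3

services = {
    "svcA": {"regression", "statistics", "plots"},
    "svcB": {"geometry", "algebra"},
}
ranked = sorted(services, key=lambda s: match_index(services[s], zx), reverse=True)
print(ranked[0])                           # svcA has the higher match index
print(match_index(services["svcA"], zx))   # 0.5
```

A real implementation would compute the membership degrees of the fuzzy sets Yx and Hx; the crisp percentage here only illustrates the match index Ix introduced below in the original algorithm.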
Figure 3. Formation of a semantic set of alternatives

The solution is a set of services Dx formed by a multistep alternatives selection system that best matches the request. Following the Bellman-Zadeh approach, we represent the solution as the intersection of goals and limitations. We introduce a match set Ix = {ix}, 0 ≤ ix ≤ 1, containing indexed matches of the object descriptions Yx={yx} to the requests Zx={zx}, formed by calculating the percentage of the match. Services with the lowest match index are not considered, so it is reasonable to limit the set Ix by the desired percentage of semantic match to a description, ax. Using the service identifiers in the subset Ix|ax of the set Ix and limiting the original set X={x}, we get the set X(Ix|ax) ⊂ X. The next step is required if the limitation Ix|ax is not sufficient. Besides the standard recomputation of the set Ix and the set X(Ix|ax), it is also possible to refine the semantic set Bx on the basis of the semantic set of alternatives Zx and the limitations ax, which allows not changing the whole request: one only needs to input some corrections for the search in the previously formed set. This stage results in a new set X(Ix|ax|bx) ⊂ X(Ix|ax). When selecting the most appropriate service for a given task, we use Cx, the system of parameters that assess the content of services and their features. Users set the primary requirements to services; after comparing the data in Hx, the set of parameter descriptions, with a user's request, the set Cx∩Hx is formed. It is also possible to form a ranked list of services from the set X(Ix|ax|bx) in compliance with the given parameters. Further, X(Ix|ax|bx) is limited by the system of parameters Cx∩Hx. The result of the search is

    Dx = X(Ix|ax|bx) ∩ Cx ∩ Hx,    (2)

representing the intersection of the targets and limitations. Figure 4 shows a chart of the multistep decision-making process.
Figure 4. Flowchart of the functioning of the multistep decision-making system

At the moment, Tver State Technical University is developing a data management system capable of assessing innovative projects, which will allow analyzing the options for resolving a user's task and drawing conclusions about the applicability of a given service. This article was prepared with the support of RFBR (projects 12-07-00238, 13-07-00077, 13-07-00342).
A virtual test bench as embedded equipment executive model
Sergei Elkin, Kirill Larin, Vadim Shishkin
Ulyanovsk Instrument Manufacturing Design Bureau, 10 Krymova St., Ulyanovsk
Ulyanovsk State Technical University, 32 Severny Venec, Ulyanovsk
[email protected]
http://www.ulstu.ru

Abstract. The paper represents the virtual test bench as a full-scale analog of traditional semi-natural ones. The configurable simulation platform for the target system virtual test bench setup, its structure and the components' interaction are described. The advantages of the approach are enumerated. Some original instruments used in the development are characterized.
1 Introduction
The importance of complex testing in the embedded systems development cycle on the one hand, and the high setup cost and insufficient adjustability of traditionally used semi-natural test benches on the other, require looking for alternative ways of realizing the functionality of the latter on the base of modern modeling technologies. One possible solution is the development of a simulation framework on the base of a local PC net as the target system model execution environment. Within this environment:
• the functionality of the target system executive model is provided by way of the interaction of a few remote processes, each being the code of some particular target system task (in its current realization);
• the functionality of the coupled embedded systems and the semi-natural test bench hardware (crew control devices, connecting boxes, etc.) is provided by the corresponding software models.
2 The technology prerequisites
The technology base of the solution is the CDU (control and display unit) software model (CDUSM) [1]. The CDUSM architecture and communication facilities allow organizing the distributed software informational interaction with the use of embedded bus interface simulation. Another important technology base of the solution is the simulation and diagnostic system (SDS) [2], which provides the coupled embedded systems' behavior simulation at the informational level. Besides, the setup of the virtual test bench as a full-scale functional analog of a semi-natural one implies the presence of some additional software models with quite different
rates of graphics image reality and behavior simulation. These differences depend on the roles and functions the hardware plays on the test bench:
• Crew control device models have to provide pseudo-realistic graphics images, high interactivity and full behavior simulation, including bus signals and validity signals.
• CDU models are characterized by simplified graphics presentation as "black boxes" and low interactivity (socket symbol selection only). The behavior simulation is provided by the algorithms of particular target system tasks and the validity signal simulation.
• The connecting box models use simplified graphics images (binary mnemonic symbol arrays) and low interactivity (state switch on/off).
These models were developed with the use of Esterel SCADE. Then the C code, generated with SCADE KCG for all the particular models, was integrated into one whole GUI application (further referred to as the system monitor), to provide:
• interactive simulation of the semi-natural test bench hardware devices;
• operative monitoring of the information flows and the most important parameters of the target system executive model;
• operator interference into the test process by way of interactive simulation of bus errors and hardware faults.
Additionally, some instruments were developed to automate the most routine and laborious procedures of the developer's manual activities, including:
• configuring the CDUSM for a complicated target system bus topology;
• generation of code for some system monitor modules.
These instruments were integrated into an application (further referred to as the administrative console), not comprised in the virtual test bench but used during its setup process.
3 The components interaction structure
The virtual test bench components interaction structure is shown in Fig. 1. The elements called particular task 1...n represent the applications that interact in the virtual test bench environment and are debug executives of particular target system tasks. The elements CF1...m are configuration files to initialize the CDUSMs and to connect the particular tasks at the informational level.
Fig.1. The virtual test bench components interaction structure

During a simulation session, the target system functionality is realized by way of the particular tasks' interaction in the virtual test bench environment through a net of virtual informational streams (channels), which simulate real bus lines in the model scale of time. Current informational traffic control and simulation of the events valuable for system behavior (crew control activity, bus errors and hardware faults, etc.) are provided by the system monitor interactive facilities. The coupled embedded systems' behavior simulation is provided with the use of the SDS. There are additional hot swapping facilities using the system monitor to set the contents of particular informational words of the data flow manually, without any modification of the SDS control specifications. The system monitor executive, connected to the virtual test bench as one of the particular tasks, provides the operator with a screen presentation of the target system executive model in terms of a set of interchanging interactive objects. The functionality of each one is provided by the coupled particular task; the graphics image comprises pseudo-realistic widgets and/or mnemonic symbols. The object connection structure corresponds to the target system bus topology. The system monitor screen form is shown in Fig. 2. As distinguished from the other particular tasks, the system monitor has unlimited access to all the virtual informational channels to provide the required application interface facilities:
• Particular tasks load/unload.
The tasks list and the assignment to the virtual test bench hardware are determined during the process of configuring the CDUSMs on the virtual test bench;
• Particular tasks' informational exchange control by way of interactive navigation by the port (socket) symbols (which denote the virtual informational channel connection points) and showing the selected data line buffer contents in tabular form (the semi-natural test bench gives no similar facilities);
Sergei Elkin, Kirill Larin, Vadim Shishkin
Fig.2. The system monitor screen form

• Coloring of all the data lines connected to the selected port as information sources or targets;
• Simulation of crew control activities (pressing buttons, handling switches, etc.) through the interactive interface of screen objects, which are proxies of the hardware executive models of control devices;
• Bus errors simulation (an analog of the semi-natural test bench connecting boxes);
• Simulation of hardware faults (of both the target system and the coupled one) by setting/resetting the validity flag and producing the corresponding bus signal;
• Temporal substitution (for debug aims) of any particular task output with manually modified data (the connecting box has no similar facilities).
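As an illustration only (the word layout and the bit position are invented here; the paper does not specify the actual data word format), fault simulation via a validity flag might look like:

```python
# Illustrative sketch (format invented, not from the paper): an
# informational word carries a data field plus a validity bit that the
# system monitor can set or reset to simulate a hardware fault.

VALIDITY_BIT = 1 << 31  # assume bit 31 of a 32-bit word marks validity


def set_validity(word: int, valid: bool) -> int:
    """Set or reset the validity flag of a 32-bit informational word."""
    if valid:
        return word | VALIDITY_BIT
    return word & ~VALIDITY_BIT & 0xFFFFFFFF


def is_valid(word: int) -> bool:
    """Report whether the word's validity flag is set."""
    return bool(word & VALIDITY_BIT)
```

Resetting the flag leaves the data field untouched, so the fault can later be withdrawn without restoring any payload.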
4 A configurable simulation platform for the target system virtual test bench setup
The virtual test bench is set up on the base of a set of CDUSMs, whose number is determined by the target system hardware structure. The setup process includes:
• The CDUSMs configuration: building the appropriate set of initialization files, which describe the list of interacting applications, their assignment to hardware test bench resources (by setting the hosts' IP addresses), and the coupling of the virtual informational channels to the applications, to represent the target system bus topology;
• Building the system monitor executive to provide the target system graphics image and a GUI to the virtual test bench objects.
The components used during the setup process are the elements of the simulation platform. It includes:
A virtual test bench as embedded equipment executive model
• An appropriate number of CDUSM copies;
• A library of executive models of the hardware used in target projects (both UMDB product line hardware and other vendors' products; some model fragments are shown in Fig.3);
Fig.3. The UMDB product line hardware device model fragments – the top-level SCADE Display GUI model (top) and the SCADE Suite behavior specification (bottom)

• The system monitor framework project, which consists of models library objects and a number of parameterized samples describing typical project solutions for their informational interchange;
• The administrative console application, including a number of generation instruments to automate the most routine and laborious, yet well formalized, procedures for developing:
1. The CDUSM initialization files (denoted in Fig.1 as CF1...m);
2. Some SCADE language structures, used for tuning the system monitor framework project samples in accordance with the target system hardware and bus structure;
3. C language modules, used in building the system monitor executive to provide:
a. Dynamic visualization of the target system executive model elements;
b. Interaction of the executive model objects and the CDUSMs.
The interaction of the simulation platform components during the virtual test bench setup process is illustrated in Fig.4. The digits in the circles, denoting the data flows, correspond to the sequence of steps of the setup process described in chapter 5.
Fig.4. The simulation platform components interaction structure during the virtual test bench setup process
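For illustration, the generation of the CDUSM initialization files CF1...m from a bus-topology description might be sketched as below; the file layout, key names, task names and addresses are hypothetical, since the actual CDUSM file syntax is not shown in the paper.

```python
# Hypothetical sketch: generate one initialization file per particular
# task from a topology description (task -> host IP, input and output
# channels). All names and the file layout are invented for illustration.

topology = {
    "display_unit": {"host": "192.168.0.11",
                     "in": ["bus_a.ch1"], "out": ["bus_a.ch2"]},
    "flight_ctrl":  {"host": "192.168.0.12",
                     "in": ["bus_a.ch2"], "out": ["bus_a.ch1"]},
}


def make_config(task: str, spec: dict) -> str:
    """Render one CDUSM initialization file as plain text."""
    lines = ["[task]", f"name={task}", f"host={spec['host']}", "[channels]"]
    lines += [f"in={ch}" for ch in spec["in"]]
    lines += [f"out={ch}" for ch in spec["out"]]
    return "\n".join(lines)


# One file CF1..CFm per task, mirroring the CF1...m notation of Fig.1.
configs = {f"CF{i + 1}": make_config(name, spec)
           for i, (name, spec) in enumerate(topology.items())}
```

Deriving all files from one topology description keeps the channel couplings of the interacting applications mutually consistent, which is exactly what the generation instruments of the administrative console automate.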
The planned extension of the system monitor facilities includes:
• Using simplified presentations of the graphics images of real display units to visualize the current regime and the most important state parameters of the units' executive models;
• Producing a test (debug) session protocol, including the real interaction cyclograms in model time, to be used for subsequent analysis and comparison with the system requirements, particularly concerning efficiency characteristics and allowable delays in real-time units.
5 The setup process methodology
The reference items for the virtual test bench setup process are the interaction protocols (being part of the system requirements), the system monitor framework SCADE Suite/Display sample projects and the hardware executive models library. During the process the following work has to be done:
1. On the base of the interaction protocols:
1.1. Preparing the MS Access specifications of the target system structure and bus topology;
1.2. Developing the screen image of the system monitor executive model on the base of the SCADE Display sample project;
1.3. Revision of the hardware executive models library and, if necessary, its extension with new models;
2. Producing, on the base of the above-mentioned specifications, by the generation instruments of the administrative console application:
2.1. The SCADE language declarations file, connected to the system monitor framework SCADE Suite sample project to provide an executive model adequate to the target system architecture;
2.2. The C language modules, connected to the Borland C++Builder system monitor project to provide the executive model dynamic visualization and its interaction with the CDUSM library communication utility routines;
2.3. The virtual test bench configuration files to provide the distributed system executive model;
3. Some final manual tuning of the system monitor framework SCADE Suite project, if necessary;
4. Producing, on the base of the tuned framework SCADE project specifications (with the use of the SCADE KCG generators), the C language modules providing the system monitor visualization and behavioral functionality;
5. Producing the system monitor executive by the Borland C++Builder linker;
6. Connecting the system monitor executive to the virtual test bench as one of the particular tasks.
The administrative console application is developed in C# in the Microsoft Visual Studio 2008 environment.
The Esterel SCADE Suite/Display 6.3 instruments were used in developing the system monitor executive model. The Topology package with the target system structure and bus topology specifications, connected to the SCADE Suite project, was produced on the base of MS Access 2007 specifications outside the SCADE environment, with the use of the generation instruments of the administrative console application. The system monitor executive is realized as a WinForms application in Borland C++Builder 5.0. Most parts of the project are the C modules produced by SCADE KCG. Besides these, the project includes the SCADE system libraries, the CDUSM communication utility routines library, and some service C modules produced by the generation instruments of the administrative console application.
6 Conclusion
A few virtual test bench versions, set up on the base of the described simulation platform, were used in the realization of target projects. The practice has affirmed their economical and technological advisability as an alternative to the traditional semi-natural test bench for the class of embedded distributed control systems. Among the advantages of the virtual test bench use are:
• The possibility of setup and use at early stages of target project development;
• Adjustability and scalability: independence from the structure and completeness of the current project realization, and the possibility of multiple reconfiguring as the realization completeness grows;
• Use of the same test vectors for the virtual test bench and the semi-natural one;
• The possibility to debug the particular tasks using the debuggers of integrated development environments (IDEs).
7 References
1. Nikolay Dolbnya, Kirill Larin, Vadim Shishkin. The methodology of development of virtual equipment for aviation information and control system // Control processes automation. – № 29. – Ulyanovsk, 2012.
2. S. Cherkashin. Diagnostic maintenance of avionics systems of electronic indication // Interactive systems and technologies. – Vol. 1. – Ulyanovsk, 2005.
About the Tendencies and Prospects of Development of the Human-Computer Interaction Virtual Environments A.S. Zuev Moscow State University of Instrument Engineering and Computer Science e-mail:
[email protected]
Abstract. The article presents a review of actual solutions for the organization of virtual two-dimensional and three-dimensional environments of human-computer interaction. The results of research on the opportunities of virtual four-dimensional environments of human-computer interaction are explained, and a description of the functional capabilities of the program model that implements an environment prototype is provided.
1 Introduction
Today different types of computers are an integral element of life and of workplaces in various professions. The convenience of using software, both in professional activity and in day-to-day life, is very important for users. The commercial success of software is largely defined by its convenience of use, which forces software vendors to enhance the principles and means of the organization of human-computer interaction (HCI) [1]. A necessary component of software used in an interactive mode is the user interface: the set of means, methods and rules that regulates and provides interaction between a human, a software product and a computer. The graphical user interface (GUI) is based on visualization of objects and of the user's process of interaction with program and technical means [2]; it provides a virtual interactive environment for operation control. The interactivity of a GUI consists in tracking the cursor position on the display and the composition of the selected objects (text, tables, pictures, etc.), for which options implemented by means of a certain collection of interface items are provided. Now in different hardware, both mobile (smartphones, tablet computers, etc.) and game (PlayStation Portable, Xbox 360 Kinect, etc.) devices, the principles of direct manipulation (DM) with WIMP-class (Windows, Icons, Menus, Pointing device) GUIs are implemented for the arrangement of human-computer interaction. Direct manipulation implies the possibility of managing GUI objects by means of reversible actions and feedback. The choice of an object and activation of any function is a normal syntactic composition in this case [3]. Usually one of the metaphors of graphic representation lies at the top level of direct manipulation systems [4]. For example, the "desktop" metaphor is widely used; it defines the primary work area, the main window of a user graphic environment, with objects added to it and a background image [2].
The concept of the WIMP interface was offered in 1980, and its first implementation appeared in 1984 in the Apple Macintosh. Along with multimedia and computer
development, by means of implementation of special visual and animation effects, spatial interfaces [5] became possible. Simulation of a three-dimensional (volumetric) virtual space for the placement of WIMP interface elements allows implementing human-computer interaction in an environment habitual for the user (an analog of real three-dimensional space). Now the competitiveness of software and of different types of computer hardware (mobile phones, smartphones, tablet computers, etc.) significantly depends on the organization of human-computer interaction. One of the future-oriented directions of research in the HCI sector is the extension of opportunities and enhancement of the ergonomics of spatial interfaces in GUIs and in the specific environments of human-computer interaction.
2 Existing solutions for the creation of virtual environments of human-computer interaction
Today software developers have created a set of environments of human-computer interaction. For example, a number of virtual desktop environments have been developed, such as BumpTop, GNOME, KDE, Xfce, LXDE, EDE, IRIX Interactive Desktop, OpenWindows, Ambient desktop, Mezzo, ROX Desktop, Unity, etc. This sector of the software market is promising and shows a long-term tendency of dynamic development. Some models of creating a three-dimensional virtual environment of human-computer interaction are shown in Figures 1–3. In the BumpTop application objects are located on the inner surfaces of a cube, and in the Cube Desktop Switcher on the outer ones. On iPhone 4 and iPad 2 it is possible, using the frontal camera, to recognize the user's gaze direction and simulate a three-dimensional interface by an appropriate projection to the display. In mobile applications the viewing of panels with WIMP interface objects is available. It is also necessary to underline a promising direction of HCI research: the development and use of holographic interfaces.
Figure 1. BumpTop (above) & KDE Cube Desktop Switcher (below) environments.
Figure 2. Three-dimensional interface on iPhone 4 and iPad 2.
Figure 3. Informational panels (above) and holographic interface (below).
What is common to the considered solutions and the corresponding directions of research is the implementation and enhancement of the ergonomics of virtual three-dimensional space. Also common is the accepted restriction of space dimensionality (three-dimensionality). In the case of a two-dimensional virtual environment of human-computer interaction, a more limited set of solutions can be applied. The solutions implemented in Microsoft operating systems can be considered as standard. In Figure
4 (on the left side) an example of the task switching mode in Windows Vista is given (the windows are displayed in virtual three-dimensional space, which simplifies their review). In Figure 4 (on the right side) an example of the Metro interface of Windows 8, oriented to sensor devices, is given. Instead of a standard desktop with the Start button and a task bar, a start screen with interactive panels is implemented. These panels display names and fragments of applications; options for scrolling and grouping them, and for changing their sizes and layouts, are provided.
Figure 4. Task switching example (above) and «Metro» interface (below).
Also, among two-dimensional virtual environments of human-computer interaction it is necessary to mention such a solution as the scalable interface (Zooming User Interface, or Zoomable User Interface). Eaglemode is an example of its implementation: the work area is a plane on which objects are placed, and their properties and contents become available as the scale increases (Figure 5).
Figure 5. Scalable interface in Eaglemode application.
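The reveal-on-zoom rule of a scalable interface can be sketched as a toy model; the threshold scheme below is an assumption for illustration, not Eaglemode's actual algorithm:

```python
# Toy sketch of a zoomable-interface rule: an object's contents become
# visible only once the zoom scale reaches its reveal threshold.
# The thresholds and item names are invented for illustration.

def visible_items(items, scale):
    """Return the names of items whose reveal threshold is at or below scale.

    `items` maps an item name to the minimal zoom scale at which it appears.
    """
    return sorted(name for name, min_scale in items.items() if scale >= min_scale)
```

As the user zooms in, progressively finer levels of content (object, preview, full contents) pass their thresholds and become available, which matches the behavior described above.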
3 The suggested solutions for the creation of virtual environments of human-computer interaction
An interface is oriented to the person if it meets his/her needs and considers his/her weaknesses [6]. In the existing environments of human-computer interaction, virtual two-dimensional and three-dimensional spaces are implemented in which the basic operations with different objects are similar to real human activity. At the same time, the virtual information space can potentially have more dimensions; the restrictions here are the computer hardware and multimedia in the context of its representation, and also the user's perceptive abilities. At present the author conducts
research on aspects of the implementation of special visual and animation effects for the simulation of four-dimensional virtual space [7]. Figure 6 shows prototypes of an environment displaying virtual three-dimensional spaces on the sides of a rotated cube, similar to the Cube Desktop Switcher application environment. Each cube side is volumetric, as in the BumpTop application environment. The fourth virtual dimension allows implementing "disjoint" volumes of the virtual three-dimensional spaces corresponding to the cube sides. The prototypes (Figure 6) give examples of violation of the inner and outer proportions of a cube: the offset of edges allows varying the perceived volumes of the virtual three-dimensional spaces.
Figure 6. Examples of four-dimensional environments view.
Figure 7 shows examples of simulation of the virtual four-dimensional space without violation of the outer proportions of the cube.
Figure 7. Examples of virtual four-dimensional environments view simulation.
4 Developed software model
According to the results of the research made by the author, the software model "4DCube" was developed; its screenshot is given in Figure 8. Implementation of the cube metaphor and the corresponding options allows integrating actual achievements and solutions in the field of organization of human-computer interaction environments, and using all existing means of user interaction with software and hardware, including Kinect technologies, holographic interfaces, pointing and sensor devices. The cube is a metaphor that allows joint display of several virtual three-dimensional spaces, for the purpose of forming a virtual four-dimensional space, in a form available to the user's perception.
Figure 8. Screenshot of the developed software model.
The implemented virtual four-dimensional space is interactive: options of cube relocation and rotation are available. Holding the right mouse button, one can rotate the cube, which provides access to the virtual three-dimensional spaces corresponding to its sides. If free rotation is switched off, rotation is available only in the horizontal or vertical direction. In addition, an "adjustment" option is implemented: on release of the right mouse button the cube automatically rotates to fully show the side with the greatest displayed area (this option provides elementary "physics" of the model). Cube rotation is also available through the keyboard arrow keys. Using the scrolling wheel of the mouse, the representation of the developed model on the display can be scaled, which brings the cube closer or moves it away. The user's operation with each of the virtual three-dimensional spaces of the model can be implemented separately, as in the BumpTop application, and each of the internal surfaces of these spaces can be displayed full screen, as a standard desktop of an operating system. The images located in the virtual three-dimensional spaces of the cube sides are functional: impacts on them result in the display of the appropriate graphic objects. The inner surfaces of each of the virtual three-dimensional spaces can contain separate background images and be used for the display of windows (folders, files, web pages), files, shortcuts, stickers, etc., which allows implementing an original method of data representation and access organization. In addition, the inner surfaces of each of these spaces can potentially contain three-dimensional objects, for example, "piles" of folders and files. At the same time, not all cube sides have to be "volumetric" (the two- or three-dimensional mode can be selected by the user). Within the developed model the following options of operation with the objects located in the virtual three-dimensional spaces corresponding to the cube sides can be implemented:
1. For objects such as "file", "folder", "shortcut", "sticker": placement on the inner surfaces of the virtual three-dimensional spaces with the visual effect of perspective, relocation between these spaces, resizing, grouping in "piles".
2. For objects such as "window": placement on the inner surfaces of the virtual three-dimensional spaces with the visual effect of perspective, display in these spaces perpendicular to the vector of the user's look, resizing and scaling (using the scrolling wheel of the mouse), grouping in "piles".
Options menu prototypes are implemented in the model, see Figure 9.
Figure 9. Example of a model options menu prototype.
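The "adjustment" rotation described above (snapping the cube to the side with the greatest displayed area on mouse release) can be sketched as follows; this is a minimal geometric illustration, not the model's actual code:

```python
# Illustrative sketch of the "adjustment" option: on releasing the
# mouse, snap the cube so the face seen with the greatest area turns
# squarely toward the viewer.

# Unit normals of the six cube faces in the cube's local frame.
FACE_NORMALS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]


def snap_target(view_dir):
    """Return the face normal with the largest projection toward the viewer.

    The face seen with the greatest area is the one whose outward normal
    points most directly against the viewing direction, i.e. whose dot
    product with view_dir is the most negative.
    """
    return min(FACE_NORMALS, key=lambda n: sum(a * b for a, b in zip(n, view_dir)))


def snap_angle(angle_deg):
    """Snap a rotation angle to the nearest multiple of 90 degrees."""
    return round(angle_deg / 90.0) * 90 % 360
```

Snapping each rotation axis to the nearest 90-degree multiple is what makes the selected face finish flush with the display, giving the elementary "physics" mentioned above.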
The model implements jumping to the inner surfaces of any of the virtual three-dimensional spaces corresponding to the cube sides. This jump allows working with the selected surface in desktop mode, and its elements (folders, files, shortcuts, windows, etc.) become objects on a desktop. Jumping to the selected surface is implemented by means of an animated movement of the "point of look", Figure 10.
Figure 10. A diagram of animated movement of "point of look".
The black dots show the surface centers, solid arrows show the movement paths of the "point of look", and dotted arrows show the gaze directions from the ending points of the paths. The existing model can be used on sensor devices with the Windows operating system; it corresponds to the WIMP interface concept, and direct manipulation principles are used to work with it. The received results of implementation of the virtual four-dimensional graphic interface are an original development of GUIs and of the principles of human-computer interaction organization, and the relevant solutions can also be implemented within other operating systems. Research similar to this is unknown to the author, and it will remain among the author's further work directions in the human-computer interaction field.
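As a rough illustration (the parameterization is invented here), the animated movement of the "point of look" toward a selected surface center can be modeled as linear interpolation of the viewpoint:

```python
# Hypothetical sketch of the animated "point of look" movement:
# linearly interpolate the viewpoint from its current position toward
# the selected surface center over a fixed number of animation frames.

def lerp(a, b, t):
    """Linear interpolation between two 3-D points a and b, t in [0, 1]."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))


def look_path(start, surface_center, frames=30):
    """Return successive viewpoint positions for one jump animation."""
    return [lerp(start, surface_center, i / (frames - 1)) for i in range(frames)]
```

A real implementation would likely ease the motion and interpolate the gaze direction as well, but the straight-line path already matches the solid arrows of Figure 10.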
5 Conclusion
The article considered actual solutions for the creation of virtual environments of human-computer interaction. Examples of the special visual effects researched by the author are given; their usage allows simulating virtual four-dimensional space in graphical user interfaces. The software model of the virtual four-dimensional space, based on the results of the executed research, is presented, and the main implemented options are briefly described. Usage of special visual and animation effects for simulation of virtual four-dimensional (and, possibly, more than four-dimensional) spaces can be one of the directions of research in the fields of HCI, the development of spatial interfaces and the virtual environments of human-computer interaction. Thus, the received solutions can be considered as an extension of the functional capabilities of the existing human-computer interaction environments, and also as original elements of graphical user interfaces. One should note that, in addition to a cube, other three-dimensional objects can be used as a metaphor for implementation of the virtual four-dimensional spaces, for example, rectangular parallelepipeds.

References:
1. Grashhenko L.A., Fisun A.P. Teoreticheskie i prakticheskie osnovy cheloveko-komp'juternogo vzaimodejstvija: bazovye ponjatija cheloveko-komp'juternyh sistem v informatike i informacionnoj bezopasnosti: Monografija. – Orel: OGU, 2004. – 169 p.
2. Gul'tjaev A.K., Mashin V.A. Proektirovanie i dizajn pol'zovatel'skogo interfejsa. – SPb.: KORONA print, 2004. – 352 p.
3. Mandel T. Dizajn interfejsov (per. s angl.). – M.: DMK Press, 2005. – 416 p.
4. Baecker R.M., Grudin J., Buxton W.A.S., Greenberg S. Designing to Fit Human Capabilities // Readings in Human-Computer Interaction: Toward the Year 2000. – San Francisco: Morgan Kaufmann Publishers, 1995.
5. Magazanchik V.D. Cheloveko-komp'juternoe vzaimodejstvie: Uchebn. posobie. – M.: Universitetskaja kniga; Logos, 2007. – 256 p.
6. Raskin D.
Interfejs: novye napravlenija v proektirovanii komp'juternyh sistem (per. s angl.). – SPb.: Simvol-Pljus, 2005. – 272 p.
7. Zuev A.S. O vozmozhnostjah realizacii chetyrehmernyh graficheskih interfejsov // Informacionnye tehnologii. – 2013. – № 4. – P. 57–60.
Analysis of automated learning systems architectures and architecture development of an intelligent tutoring system based on mobile technology 7
D. Kanev
Ulyanovsk State Technical University, Ulyanovsk, Russia
e-mail:
[email protected]

Abstract. This paper presents an analysis of the architectures of automated training systems. An architecture of an intelligent tutoring system based on mobile technology is also designed.
1 Introduction
The architecture is the fundamental organization of a system, embodied in its components, their relationships to each other and to the environment, and the principles guiding its design and evolution. The methodological basis for the development of intelligent computer systems for training in project activities is the space in which the interaction "student – subject area – learning process" takes place.
2 The architecture of the educational system (the LTSA specification)
In 2002 the Learning Technology Standards Committee developed the IEEE 1484.1 standard describing the architecture of learning technology systems (Learning Technology Systems Architecture, LTSA) [1]. This specification concerns the standardization of learning technologies for licensing information systems in the field of education and the reduction of risk in the design and development of such systems. The architecture of systems implementing learning technologies has five layers [6]:
1. Learner and environment interactions.
2. Design features associated with the learners.
3. IEEE 1484.1 LTSA system components.
4. Implementation perspectives and priorities.
5. Operational components and interoperability.
The main components of the system architecture implementing the learning technology (LTSA) are shown in Figure 1. The LTSA components are: processes, data stores and data flows.
7 The reported study was partially supported by RFBR, research project No. 13-07-00483-a.
Figure 1. LTSA components
3 Architecture of the Moodle learning management system
Moodle (Figure 2) is an open-source course management system (CMS), also known as a Learning Management System (LMS) or a Virtual Learning Environment (VLE).
Figure 2. The system Moodle
The system operates with teaching units: courses. The contents of a course are created by adding resources (URL, file, text page) and course elements (wiki, database, glossary, assignment, lesson, feedback, survey, workshop, quiz, forum, chat). With the assignment and quiz elements, students can submit their work and teachers can evaluate it. When a quiz is used, performance can be evaluated automatically. Moodle consists of three parts [2, 3]:
• code that runs on a web server with PHP support;
• a database managed by MySQL, PostgreSQL, Microsoft SQL Server or Oracle;
• storage for uploaded and generated files.
All three parts can run on one server, or they can be separated for load balancing into a web server, a database cluster and a file server. The training system consists of a set of modules that work together with the system core. The modular architecture is very convenient: it lets Moodle be extended in different directions. Moodle plug-ins are strongly typed, i.e. depending on the type of functionality to be implemented, plug-ins of different types are created, using different APIs. For example, an authentication plug-in and an activity module communicate with the kernel through different API interfaces, while common functionality (installation, updating, permissions, settings) is treated uniformly for all types of plug-ins. Physically, a plug-in is a folder with PHP scripts (plus CSS, JavaScript, etc. if necessary). The server communicates with the plug-in through designated entry points defined in the file lib.php within the plug-in. The core of the system provides the infrastructure needed for the construction of a learning management system. It implements key concepts that all the various plug-ins can work with. These include [5]:
• Courses and activities: a course is a sequence of activities and resources grouped into sections.
• Users: anyone who uses the Moodle system.
There are several default roles (students, teachers, administrators).
• User functionality in Moodle:
o User roles: the status of the user in a certain context.
o Context: a space in Moodle, for example a course or a module.
o Permissions: define the rights of users to access different functionality.
o Profile: stores information about the user.
• Course progress and completion: this includes the testing engine.
• Logs and statistics: progress reports.
• Navigation and settings.
• A forms library.
• System updates.
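The strongly typed plug-in idea (each plug-in type talks to the core through its own interface, while registration is handled uniformly) can be sketched in Python rather than Moodle's PHP; all class and function names below are illustrative, not Moodle APIs:

```python
# Sketch (Python, not Moodle's PHP) of strongly typed plug-ins: each
# plug-in type implements its own interface, while the core handles
# common concerns such as registration uniformly. Names are invented.

class AuthPlugin:
    """Interface that authentication plug-ins must implement."""
    def authenticate(self, user: str, password: str) -> bool:
        raise NotImplementedError


class ActivityPlugin:
    """Interface that activity-module plug-ins must implement."""
    def render(self, course_id: int) -> str:
        raise NotImplementedError


REGISTRY: dict[str, list] = {"auth": [], "activity": []}


def install(kind: str, plugin) -> None:
    """Uniform installation path shared by all plug-in types."""
    REGISTRY[kind].append(plugin)


class ManualAuth(AuthPlugin):
    """A trivial example plug-in: accepts any non-empty credentials."""
    def authenticate(self, user, password):
        return bool(user) and bool(password)


install("auth", ManualAuth())
```

The core only knows the per-type interface, so an authentication plug-in and an activity plug-in cannot be confused, yet both go through the same `install` path, mirroring how Moodle treats installation and settings uniformly across plug-in types.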
4 Architecture of the educational environment "Database Generator and Educational Resources"
Figure 3. The system Database Generator and Educational Resources
The automated training system Database Generator and Educational Resources [4, 7] (Figure 3) is built in accordance with the technology of shareable content pieces based on the ontological approach. The database of training materials includes a thesaurus of concepts; training, test and reference modules; and training courses. The training modules are parts of potential educational aids; they may contain pieces of educational material in various forms. The system includes the following subsystems:
• informational – the base of educational materials;
• instrumental (authoring) – the environment for maintaining training materials;
• compiling – allows the synthesis of new textbooks;
• training – serves end users;
• information retrieval;
• diagnostic (administrative).
5 Architecture of the distance learning system "Prometheus"
Figure 4. Prometheus System
The LMS "Prometheus" is a support system for the learning process and allows:
• planning and administration of the educational process;
• training using different types of lessons (lectures, seminars, business training);
• self-study of training material.
The "Prometheus" system has a modular architecture. The main components are:
• Learning portal – publication of information in the virtual university.
• Registration and application processing subsystem – registration for training and enrollment of students in groups.
• Payment/expenditure control subsystem – accounting of funds for training.
• Group control subsystem – administrative operations at the group level.
• Schedule subsystem – the schedule of the course.
• Library subsystem – storage of textbooks, their assignment to certain courses, full-text search.
• Testing subsystem – testing of educational achievements.
• Communication subsystem – means of communication between the participants of the educational process.
• Administration subsystem – creation and maintenance of LMS facilities.
• Media server – management of streaming video/audio.
• Monitoring subsystem – teaching statistics and report building.
6 Architecture of the distance learning system "Sakai"
The network-based distance learning system and platform Sakai is a virtual environment for training and collaboration (see Figure 5).
Figure 5. The system Sakai
The system architecture is designed to support the following principles:
• creation of a system in which various types of applications, including ones developed outside Sakai, can be combined to create a unified user interface;
• separation of application and presentation logic;
• creation of conditions enabling reuse of Sakai tools in various media and, possibly, in other (non-Sakai) environments;
• creation of an environment that allows adapting the tools and services to the local needs of enterprises and offices.
Figure 6. Architecture Sakai
The Sakai architecture consists of the following elements (see Figure 6):
• Client – Sakai is designed to work as a client/server application. The primary client is a web browser.
• Aggregator – the outputs of one or more Sakai applications can be combined by the aggregator application. The aggregator controls the user interface through the use of standard user interface elements.
• Presentation – the presentation layer combines data from a Sakai tool and creates a description of the user interface, which is aggregated prior to delivery to the user.
• Tools – a Sakai tool is an application that combines presentation logic with application logic provided by various services. Tools provide code to respond to the user interface and to events that change data.
• Services – services are sets of classes that manage data through a certain set of behaviors. The behaviors are represented by published application programming interfaces (APIs). A service can call other services, creating a dependency on them.
• System – the environment includes a web server, a database server, file storage and other resources.
Expansion of the system occurs through the development of new services and the use of the provided APIs.
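The service layering described above (data managed only through a published API, with one service depending on another) can be sketched hypothetically; the class and method names are invented for illustration, not Sakai APIs:

```python
# Hypothetical sketch of Sakai-style service layering: data is managed
# only through a published API, and one service may depend on another.

class UserService:
    """Publishes a minimal API over user data."""
    def __init__(self):
        self._users = {}

    def add(self, uid: str, name: str) -> None:
        self._users[uid] = name

    def name_of(self, uid: str) -> str:
        return self._users[uid]


class GradebookService:
    """Depends on UserService, but only through its published API."""
    def __init__(self, users: UserService):
        self._users = users
        self._grades = {}

    def record(self, uid: str, grade: int) -> None:
        self._grades[uid] = grade

    def report(self, uid: str) -> str:
        return f"{self._users.name_of(uid)}: {self._grades[uid]}"


users = UserService()
users.add("u1", "Ivanov")
gradebook = GradebookService(users)
gradebook.record("u1", 5)
```

Because the gradebook never touches the user store directly, the user service can be replaced or extended without changing its dependents, which is the point of publishing behaviors as APIs.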
7
A comparative analysis of educational systems architectures
Comparison of the different architectural solutions is presented in Table 1. If a particular system supports the element named in a row, this is recorded with the symbol "+"; if it does not, the symbol "-" is used. Partial support is marked "+/-", with the condition of support indicated in parentheses.

Table 1. Comparative analysis of educational systems architectures (DGER = Database and Generator of Educational Resources)

Criterion                              Moodle                         DGER                          Prometheus         Sakai
Work on the Internet / Intranet        +                              +                             +                  +
Extensibility                          +                              +/- (sponsored development)                      +
Open-source software                   +                                                                               +
Additional software for the client     +/- (for some functionality)
External API                           +
Multimedia objects                     +                              +                             +                  +
Testing                                +                              -                             +                  +
User profile                           +                              -                             +                  +
Training scenario                      +                              +                             +                  +/- (optional plug-in)
Adaptive learning                      -                              -                             +                  +/- (connect an external system)
Logging                                +                              -                             +                  +
Authorization                          +                              -                             +                  +
Underlying architecture                3-tier, plug-ins               3-tier                        3-tier, plug-ins   Client-server, service-oriented, plug-ins
Table 1 shows that modern educational systems work through the Internet / Intranet using a thin client (a web browser or a mobile application). A system must support various multimedia objects (images, sound, video). A mandatory requirement is the ability to extend the system with the help of services (a service-oriented architecture) or plug-ins. An external API is a useful addition that allows third-party applications to interact with the environment.
8
Software architecture of intelligent tutoring systems
We distinguish the following requirements for the system being developed:
• increased interactivity and flexibility of the learning process in comparison with existing systems;
• implementation of an adaptive learning method that accounts for the dynamic component of student characteristics;
• tools that automate the process of filling the system with training materials;
• planning and forecasting of learning paths;
• the possibility of including various multimedia objects in a course;
• an easy-to-use, intuitive user interface;
• storage, gathering and visual representation of learning process statistics;
• operation on mobile equipment over the Internet and an intranet.

In accordance with the above analysis of educational systems and the main characteristics of the system under development, we choose a client-server architecture as the basis. The mobile client is an application through which the user interacts with the training system environment. The server is a kernel whose main task is to provide communication among the functional modules of the system. The main modules of the system are: representation (ensures the delivery of information to the client for display to the user), the learning space (the user model, scenario, domain), user management (authorization, authentication, roles and permissions, the user profile), testing, and the reporting system. Basic calculations are performed on the server by the different modules; in order to minimize traffic, the client maintains a data cache (the user profile and course material). The designed architecture is shown in Figure 6.
[Figure: block diagram of the designed system. Server side: store, data management, the learning space (user model, profile, scenario model, domain model), testing component, virtual teacher, course editor, editor of educational materials, program core, event handler, display level, reporting system, user rights management. Client side: mobile application with a cache.]
Figure 6. The architecture of an intelligent tutoring system based on mobile technology
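The client-server module decomposition described above can be wired together as in the following sketch. The module names mirror the text, but the kernel interface, the registration mechanism and the caching policy are hypothetical simplifications, not the system's actual design.

```python
# Hypothetical sketch of the server kernel routing requests to the
# functional modules named in the text, with the mobile client caching
# responses (user profile, course material) to minimize traffic.

class Kernel:
    """Server core: routes requests to registered functional modules."""
    def __init__(self):
        self.modules = {}

    def register(self, name, module):
        self.modules[name] = module

    def call(self, name, request):
        return self.modules[name](request)


class MobileClient:
    """Thin client with a local cache of already-fetched data."""
    def __init__(self, kernel):
        self.kernel = kernel
        self.cache = {}

    def fetch(self, module, request):
        key = (module, request)
        if key not in self.cache:                 # hit the server only on a miss
            self.cache[key] = self.kernel.call(module, request)
        return self.cache[key]


kernel = Kernel()
kernel.register("representation", lambda req: f"page for {req}")
kernel.register("testing", lambda req: f"test {req}: passed")

client = MobileClient(kernel)
print(client.fetch("representation", "lesson-1"))   # served by the kernel
print(client.fetch("representation", "lesson-1"))   # served from the cache
```

The second `fetch` call never reaches the server, which is the traffic-minimization idea stated in the text.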
9
Conclusion
This article has described IEEE 1484 (Learning Technology Systems Architecture, LTSA) and analyzed the architectures of several automated training systems, including Moodle, the Database and Generator of Educational Resources, Prometheus and Sakai. Their strengths and weaknesses are presented on 13 criteria in the table "Comparative analysis of educational systems architectures". An architecture of an intelligent tutoring system based on mobile technology, consisting of 17 components, has also been designed.

References:
1. IEEE P1484.1/D8, 2001-04-06. Draft Standard for Learning Technology – Learning Technology Systems Architecture (LTSA).
2. Development: Developer documentation [Electronic resource]. Available at: http://docs.moodle.org/25/en/Developer_documentatio (in English)
3. Moodle architecture [Electronic resource]. Available at: http://docs.moodle.org/dev/Moodle_architecture (in English)
4. Database and Generator of Educational Resources [Electronic resource]. Available at: http://bigor.bmstu.ru/ (in Russian)
5. Zhigalov E.V. The Moodle architecture. Open and Distance Education. No 4. pp. 24–28. (2012) (in Russian)
6. Sysoeva L.A. International standards for the architecture of systems implementing learning technologies (LTSA). Open Education. No 3. pp. 13–19. (2002) (in Russian)
7. Norenkov I.P., Uvarov M.Yu. Information-educational environments based on the ontological approach. In: Internet Portals: Content and Technologies. Issue 3. Moscow: Prosveshchenie. pp. 367–378. (2005) (in Russian)
Control of CAPD for press forging via user actions
S.I. Kanyukov, A.V. Konovalov
Institute of Engineering Science Ural Branch of the Russian Academy of Sciences. Ekaterinburg, Russia e-mail:
[email protected]
Abstract. This paper deals with control problems of computer-aided design of press forging of shafts under conditions of an active graphical user dialog with a computer. Nodes of possible user actions on the control object model, named the forging process card, are identified and classified by levels of importance. A logic layout of the control of CAPD for press forging of shafts connected with these actions is developed.
1
Introduction
Under current conditions, a significant part of products in the blank production of mechanical engineering is manufactured by hammer and press forging. Technological preparation of such production is a complex engineering task, first of all because of the weak formalization of this subject domain. Technological instructions concerning the design process are approximate and often contradictory, and they differ not only between companies but even within one enterprise. The design of press forging processes is more difficult than the technological design of hammer forging: during press forging the range of tasks to be solved is much wider, because intermediate heatings of the metal in the furnace are usually used. For each furnace heating step (called a removal), it is necessary to design an intermediate configuration of the forging and its forging technology, if possible without violating the technological constraints imposed on the forging process. Since the boundaries of technological restrictions always lie within a certain range of values, in practice the developers of computer-aided process design (CAPD) systems have to make decisions under uncertainty, using approximation algorithms and programs. In turn, this requires providing system users with the possibility of correcting the design results obtained in automatic mode. The basic idea is to replace a complex mathematical model of the real process by a logical-linguistic model for controlling this process. Within this approach, it is supposed to use the experience of the user controlling the process on the basis of rules observable by a human but difficult to formalize with conventional algorithms.
2
Related works
Known CAPD systems for hammer and press forging [1–3] provide users with certain features for correcting the design results obtained in automatic mode. An original approach to the correction problem is used in the CAPD system for forgings and technology named "MALACHITE" [2], in which the user can make changes to the knowledge base of algorithms for solving a number of design problems based on the "STEP-SH" tool system. However, such an approach requires special training for users. In addition, when the value of some design parameter has to be changed for a specific forging, it is not always logical to change the calculation algorithm, which works well for other forgings. Fairly wide capabilities for influencing the results of designing both forgings and the technological process are provided by the CAPD system for stepped shaft forging on a hammer [1]. The CAPD system for press forging of stepped shafts [3] also provides for the possibility of correcting the design results, but this feature applies only to allowance, lapping and tolerance settings on forgings, not to the parameters of the technological process card. The experience of developing and implementing forging CAPD systems at various enterprises shows that, for the successful implementation of the possibility of correcting design results, one should not only identify the nodes of possible user effects on the control object model, but also classify them according to their significance for the technological process, and the concept of design process control should be based on this classification.
3
The concept of design process control
Since practically all the decisions shown in the forging process card are interconnected, their correction requires iterative redesign of the whole technological process in view of the corrections made. In this sense, we speak of user control of the design process. This paper describes the structure of potential user effects on the design results and the ensuing control scheme of CAPD for shaft press forging. The conceptual scheme within which any control problem is formulated is considered in [4]. In the assigned task, this scheme can be interpreted as follows. The control object is the process of designing the forging technology, which always occurs in certain production conditions, i.e. in a specific design environment. The control object model is the forging process card, which reflects all the features of the designed technological process and is one of the possible solutions of the formulated problem. The purpose of the object control is to produce a defect-free forging manufacturing technology with minimal material and energy costs. If the obtained solution (the control object model) satisfies the user's requirements, the assigned goal is achieved. If the goal is not achieved, it is necessary to take corrective decisions, implementing a control action on the control object model. This action is selected by the user on the basis of the design environment and the control object model, and it is transmitted to the decision unit.

The task of the decision unit is to verify the correctness of the exerted effects in two ways. Firstly, the syntax needs to be checked when the effect is a value typed by the user rather than selected from a list proposed by the system. Secondly, the effect must be tested for falling within the corresponding confidence interval. The confidence interval is defined as the boundaries within which each action shall lie in accordance with the technological instructions for designing. If the effect is not correct and does not pass the test, it is rejected. Otherwise, information about the effect is passed to the decision implementation unit, which redesigns the technological process in view of all the effects exerted by the given moment. If during redesign the latest effect does not contradict the previous ones, it is recognized as practically feasible, and the control object model is driven to a new state. The user can evaluate this by observing the control object model and comparing it with the aim of designing. The result of this observation and comparison initiates either new effects on the control object model, when the goal is not achieved, or completion of the work, when the goal is achieved. If, given the totality of all the previous effects, the latest effect does not allow a control object model to be designed, it is not recognized as practically feasible at the given moment, and different versions of further decisions are possible. In the first version, using the rule that the system "has no right" to exclude already adopted, practically feasible user effects on its own, the latest effect is rejected. However, this approach has an essential disadvantage: if the latest rejected effect is more important than the previous ones from the user's point of view, the user himself will have to exclude all the previous actions one by one, each time trying to redesign the technological process with the new effect, which is not quite logical.
It would be more correct, when there is a conflict, to automatically exclude the less significant effects; but this requires developing a classification of the set of possible actions and determining the status (importance) of each action. Let X = {x_1, x_2, ..., x_n} be the set of technological parameters that characterize the control object model obtained after the implementation of the set of effects V = {v_α, v_β, ..., v_χ}. Each parameter x_i (i = 1, 2, ..., n) may be affected by several consecutive actions, but, as each following effect on x_i replaces the previous one, the index at v corresponds to the index at x. Assume that x_j (1 ≤ j ≤ n) is the process parameter affected by a new action v_j(x_j) (or simply v_j) that is more significant than the effects from the set V. Note that each user effect v_j can be only one of three events:

A: changing the parameter x_j;
B: fixing the parameter x_j;                                          (1)
C: cancelling a previous effect on the parameter x_j.

When describing the logic of control of CAPD for shaft forging, it is convenient to use the apparatus of mathematical logic, whose basic concept is a statement (simple or complex), each of which can be either true or false. We use the following statements:

1. the proposition k_j: "the effect v_j is correct";                  (2)
2. the proposition r_j: "the solution under the effect v_j is obtained" (meaning that the control object model has been successfully constructed);
3. the proposition p_j: "the effect v_j is accepted".

In accordance with the logic of control of forging CAPD, where the validity of the proposition k_j is checked by the decision unit and the validity of the proposition r_j is checked by the decision implementation unit, the relationship between the statements (2) can be described by the expressions (¬ denotes negation):

¬k_j ⇒ ¬p_j,                                                          (3)
k_j ∧ r_j ⇒ p_j,                                                      (4)
k_j ∧ ¬r_j ⇒ ¬p_α ∧ ¬p_β ∧ ... ∧ ¬p_χ ∧ p_j.                          (5)

Expressions (3)–(5) are interpreted, respectively, as follows:
– v_j is incorrect ⇒ v_j is rejected;
– v_j is correct and the solution has been obtained ⇒ v_j is accepted;
– v_j is correct and the solution has not been obtained ⇒ the less significant effects v_α, v_β, ..., v_χ are eliminated and v_j is accepted.

In the latter case, information about the remaining effects is passed to the decision implementation unit, and the design is performed anew. Obviously, when events B and C of (1) occur, relation (4) is always true.
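Expressions (3)–(5) can be read as a small decision procedure. The sketch below is an illustrative rendering under assumed helper functions (`is_correct` modelling the decision unit, `redesign` modelling the decision implementation unit); it is not the authors' implementation, and it slightly generalizes the paper by keeping any previous effect at least as significant as the new one.

```python
# Illustrative sketch of the decision logic in expressions (3)-(5).
# `is_correct` stands in for the decision unit (syntax and confidence
# interval checks); `redesign` stands in for the decision implementation
# unit and returns True when a control object model can be built from
# the given set of effects. Both are assumptions for the example.

def apply_effect(effects, new_effect, significance, is_correct, redesign):
    """effects: currently accepted effects; significance: effect -> importance."""
    if not is_correct(new_effect):            # (3): incorrect -> rejected
        return effects
    if redesign(effects + [new_effect]):      # (4): correct, solvable -> accepted
        return effects + [new_effect]
    # (5): correct but unsolvable -> eliminate the less significant
    # effects, accept the new one, and redesign with the remainder
    kept = [e for e in effects if significance[e] >= significance[new_effect]]
    return kept + [new_effect]


# Toy usage: a design is "solvable" only while it holds at most two effects.
significance = {"v_a": 3, "v_b": 1, "v_j": 2}
state = apply_effect(["v_a", "v_b"], "v_j", significance,
                     is_correct=lambda e: True,
                     redesign=lambda es: len(es) <= 2)
print(state)   # -> ['v_a', 'v_j']: the less significant v_b was eliminated
```

The toy run exercises case (5): adding `v_j` makes the design unsolvable, so the effect `v_b`, which is less significant than `v_j`, is dropped and `v_j` is accepted.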
4
Classification of control effects
The relative importance of each effect on the corresponding correction parameter is determined on the basis of physical understanding of the subject field. Let us illustrate this with an example. Figure 1 shows a simplified typical diagram of the sequence of the basic operations of shaft forging on presses.
[Figure: sequence of forging operations applied to the original ingot: operation I.1 ("Billet"), operations II.1 and II.2, operations III.1 ("Broaching") and III.2 ("Pre-forging"), and operation IV.1 ("Final forging"); the workpiece dimensions l_b, l_c, d_c, h_s, d_s, d_k, l_k, d_1, l_i, d_2, l_o and d_3 are marked on the diagram.]
Figure 1. A simplified diagram of the main shaft forging operations
The set of the correction parameters for this diagram can be represented as

X = {s, l_b, l_c, d_c, h_s, d_s, d_k, l_k, d_1, l_i, d_2, l_o}.       (6)

Note that the parameter d_3 (see Figure 1) is not included in expression (6), as it is the size of the final forging and, of course, is not corrected. The interconnection between the relative significances of the effects v on the adjustable parameters in expression (6) is schematically shown in Figure 2.

[Figure: directed diagram over v(s), v(l_b), v(l_c), v(d_c), v(h_s), v(d_s), v(d_k), v(l_k), v(d_1), v(l_i), v(d_2) and v(l_o); significance decreases along the arrows, starting from v(s).]
Figure 2. The interconnection between the relative significances of the effects on the adjustable parameters
As can be seen from Figure 2, the relative significance of the effects decreases from left to right (along the arrows), in the order in which the process design problems are solved. The most significant effect is ingot replacement v(s). The parameters of an upset ingot, h_s and d_s, as well as those of a broached cylindrical preparation, d_k and l_k, are interrelated through the constant-volume condition, so the correction of one of them entails a certain change in the other. The pin parameters l_c and d_c are not related to each other, and the correction of either of them affects only the surplus size l_i and the cut-off size l_o. The considered example does not reflect the full complexity of the relationships between the correction parameters and the exerted effects, so in what follows we describe only the most important parameters, which are connected in some way with the workpiece geometry along the operations. In general, all adjustable design parameters in CAPD of shaft press forging, and the effects provided on them, are divided into four groups according to significance levels (Figure 3). The notation of the corresponding effects is indicated in parentheses near the names of the adjustable parameters in Figure 3. The superscript at v denotes the group significance level of the effect. The arrows indicate the mutual influence of the adjustable parameters.
[Figure: four significance levels of adjustable parameters.
General parameter: the original ingot (v^I).
Global parameters: minimal total reduction (v_1^II), number of transformations (v_2^II), billet shape (v_3^II), billet upset (v_4^II).
Local parameters: billet size (v_1^III), upset ingot size (v_2^III), broaching preparation size (v_3^III), preparation size in the main operations (v_4^III).
Individual parameters: preparation heating modes, standard time for forging, names of forging operations, etc. (v^IV).]
Figure 3. Classification of adjustable parameters and control effects
The general effect provides replacement of the selected ingot and has the highest relative significance. By analogy, using notation (2), we can write

k^I ∧ ¬r^I ⇒ (∀i: ¬p_i^II, i = 1, ..., n^II) ∧ (∀j: ¬p_j^III, j = 1, ..., n^III) ∧ (∀q: ¬p_q^IV, q = 1, ..., n^IV) ∧ p^I,
where n^II, n^III and n^IV are the numbers of effects on global, local and individual adjustable parameters, respectively.

The minimal total reduction, the number of transformations, the billet shape and the billet upset constitute the global parameters. The minimal total reduction is the minimal permissible amount of forging deformation during the whole forging process. Guidelines for calculating the total reduction and determining its minimal value are given in [5, 6]. Note only that an increase in the minimal value of the total reduction may require an increase in the size of the original ingot (for example, in Figure 1 it may lead to an increase in the upset ingot diameter d_s), and this, in turn, can lead to the replacement of the original ingot. A group of forging operations performed within one heating (removal from the furnace) is here referred to as a "transformation". For example, in Figure 1 operation I.1 belongs to the first transformation, operations II.1 and II.2 to the second, operations III.1 and III.2 to the third, and the final forging is performed in the last, fourth, transformation. Each transformation must meet certain technological requirements, mainly concerning the minimal reductions within the transformation. Naturally, the greater the number of transformations, the larger the size of the original ingot required. The following values of the "billet shape" parameter are possible in CAPD for shaft press forging: "truncated cone" (as in Figure 1), "cylinder", "concave barrel", "convex barrel" and "ingot" (when the roughing operation is excluded and the original ingot is used as the "billet"). The "billet upsetting" parameter has only two values: "yes", when upsetting is provided (operation II.1 in Figure 1), and "no", when the upsetting operation is excluded from the forging process.
The global effects v_1^II, v_2^II, v_3^II, v_4^II have the same relative significance in the sense that no global effect can cancel another one, and any global effect can lead to the replacement of an unsuitable ingot. The influence of a global effect on the less significant (local and individual) effects is described by the expression

k_i^II ∧ ¬r_i^II ⇒ (∀j: ¬p_j^III, j = 1, ..., n^III) ∧ (∀q: ¬p_q^IV, q = 1, ..., n^IV) ∧ p_i^II,   i = 1, 2, ..., n^II.

Unlike the global parameters, the local ones are interrelated; therefore the relative significance of the corresponding effects is characterized by two factors, namely, the group level significance (the superscript at v) and the inside-the-group level significance (the subscript at v), which corresponds to the order in which these parameters are determined in the design process. The influence of any local effect v_j^III on the other ones is described as

k_j^III ∧ ¬r_j^III ⇒ (∀m: ¬p_m^III, m = j + 1, ..., n^III) ∧ (∀q: ¬p_q^IV, q = 1, ..., n^IV) ∧ p_j^III,   j = 1, 2, ..., n^III.

A local effect always lies within the framework determined by the values of the more significant parameters. For example, if a local effect is exerted on the upset ingot size (v_2^III in Figure 3), then, for this effect to be found correct, it must lie within the
technological boundaries corresponding to the general and global parameters already defined by that moment, as well as to the billet dimensions (v_1^III). Individual parameters are data whose correction does not require any redesign of the process. Effects on them are similar to the user's actions with an eraser and a pen, when inessential corrections are introduced in certain parts of the finished process card (the design object model).
5
Conclusion
Thus, CAPD for shaft press forging is constructed in a tree-like form with highlighted points (nodes) of possible user effects. After the automatic design mode has run, the user analyzes the obtained control object model and, if necessary, consistently makes controlling corrections to the design results, which are processed by the system in accordance with the principles discussed. The discussed scheme of controlling CAPD for shaft press forging allows one to obtain acceptable solutions even with a significant change in the production conditions, and this considerably simplifies its implementation at various enterprises.

The work was done as a part of the fundamental research program of the Presidium of RAS, No 15, project 12-P-1-1024, and it was supported by RFBR and the government of the Sverdlovsk region, grant No 13-07-96005 r_ural_a.

References:
1. Konovalov A.V., Arzamastsev S.V., Shalyagin S.D., Muizemnek O.Ju., Gagarin P.Ju. Intelligent automated designing systems of technological processes of shaft forging on hammer. Blanking productions in mechanical engineering. No 1. pp. 20–23. (2010) (in Russian)
2. Chesnokov V.S., Kaplunov B.G., Vozmishev N.E. et al. Development and application of software for computer-aided designing and simulation of forging and hot forming. Kshp-OMD. No 9. pp. 36–44. (2008) (in Russian)
3. Kanyukov S.I., Arzamastsev S.V. Computer-aided design technology of forging of stepped shafts. Kshp-OMD. No 9. pp. 13–14. (1995) (in Russian)
4. Rizkov A.P. Elements of the theory of fuzzy sets and fuzzy measure. Moscow. Dialog-MSU. 75 p. (1998) (in Russian)
5. Antroshenko A.P., Fedorov V.I. Metal-saving technologies of forging and stamping production. Leningrad. Mashinostroenie. 279 p. (1990) (in Russian)
6. Trubin V.N., Makarov V.I., Orlov S.N., Shipitsin A.A., Trubin J.V., Lebedev V.A. The quality control system for designing the process of forging. Moscow. Mashinostroenie. 184 p. (1984) (in Russian)
Architecture of a system of synthesis of pipelined logic circuits
P. Bibilo, N. Kirienko, V. Romanov
United Institute of Informatics Problems of National Academy of Sciences of Belarus, Minsk, Belarus
e-mail:
[email protected] ,
[email protected]
Abstract. The architecture of a system for the synthesis of pipelined logic circuits is proposed. The basic system components and program modules are considered.
Many systems for the synthesis of logic circuits whose behavior is represented in high-level languages are widely used now. The synthesis system LeonardoSpectrum [1] is the most frequently used; it may be configured for a custom library of logic elements. The Very High Speed Integrated Circuits Hardware Description Language (VHDL) is the language this system uses to describe circuit behaviour. A new approach to improving the speed of logic circuits built from VLSI library elements by transforming them into pipelined structures is discussed in this paper. The circuit functions on the systolic principle: all signals from the outputs of the previous block of the pipeline are received simultaneously at the inputs of the next pipeline block. The speed of the resulting pipelined circuit is determined by the delay of the "slowest" block. Next, consider the architecture of a system for the synthesis of pipelined logic circuits. The main tasks of the system are partitioning the circuit into blocks and inserting trigger registers between them. The architecture of the system is presented in Figure 1.
Figure 1. Architecture of the system of synthesis of pipelined logic circuits
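The timing property stated above (the pipeline clock is set by the slowest block) can be expressed in a couple of lines. The sketch below is purely illustrative, and the delay values in it are made up.

```python
# Sketch of the systolic timing property: in a pipelined circuit the
# clock period is set by the slowest block, and the latency of the
# first result is one clock per pipeline stage. Delay values are
# illustrative only.

def pipeline_timing(block_delays_ns):
    clock = max(block_delays_ns)             # slowest block sets the clock
    latency = clock * len(block_delays_ns)   # time until the first result
    return clock, latency


clock, latency = pipeline_timing([3.0, 5.0, 2.0])
print(clock, latency)   # -> 5.0 15.0
```

This is why the partitioning criteria matter: balancing the block delays shortens the clock period and raises the throughput of the pipelined circuit.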
The system consists of components, each of which is designed to solve a specific task: building the project, building the pipeline in various ways, and forming the results in the required format. The component "Building working project" converts the VHDL description of the circuit into an internal format SF [2] and also determines the delay of each library element the circuit consists of. Two methods for constructing the pipeline are implemented in the system: the maximum pipeline and the block pipeline. The component "Building the maximum pipeline" ranks the circuit elements by cascades (topological sort) and inserts trigger registers between the elements of each pair of adjacent cascades, as well as before the inputs and at the outputs of the circuit. The program modules "OneOut" and "Cascades" implement this method. The module "OneOut" performs the following transformation: every circuit element with a branching output is duplicated as many times as there are elements it is connected to, yielding a circuit variant in which each element is associated with only one element at its output. The module "Cascades" partitions the circuit into cascades. The component "Building the block pipeline" partitions the circuit elements into a given number of blocks according to given criteria; trigger registers are inserted between the elements of adjacent blocks, and a block includes several adjacent cascades. The program modules "Cascades" and "Blocks" implement this method. The component "Forming results" builds the description of the circuit with triggers, using the program module "Triggers" for this purpose and a special converter that converts the circuit description from the internal format SF into VHDL. A work session of the system is executed by filling in a chain of forms. An example of one of the forms is shown in Figure 2.
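The cascade ranking performed by the "Cascades" module is essentially a topological leveling of the circuit graph. A minimal sketch under an assumed adjacency-list representation (the real system works on its internal SF format instead) might look like this:

```python
# Minimal sketch of ranking circuit elements into cascades (topological
# levels). The circuit is an assumed adjacency list {element: [successors]};
# combinational circuits are acyclic, so the recursion terminates.

def cascades(circuit):
    """Return the cascade (level) number of each element: an element's
    cascade is 1 + the maximum cascade of its predecessors."""
    level = {}

    def rank(node):
        if node not in level:
            preds = [u for u, succs in circuit.items() if node in succs]
            level[node] = 1 + max((rank(p) for p in preds), default=0)
        return level[node]

    for node in circuit:
        rank(node)
    return level


# Toy circuit: a -> b -> d and a -> c -> d. Trigger registers would be
# inserted between adjacent cascades (and at the inputs and outputs).
print(cascades({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# -> {'a': 1, 'b': 2, 'c': 2, 'd': 3}
```

The naive predecessor scan is quadratic and only suits an illustration; the role of the "OneOut" duplication described above is to make each element feed a single successor before this ranking is applied.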
Figure 2. One of the forms of the system of synthesis of pipelined logic circuits

References:
1. Bibilo, P.N.: Systems of design of integrated circuits based on VHDL. StateCAD, ModelSim, LeonardoSpectrum. Moscow: SOLON-Press, 2005. 384 p. (in Russian)
2. Bibilo, P.N.: Silicon compilation of custom VLSI. Minsk: Institute of Engineering Cybernetics of the National Academy of Sciences of Belarus, 1996. 268 p. (in Russian)
LSM tree as an effective data structure for a document-oriented database
Z. Luchinin, I. Sidorkina
Volga State University of Technology, Yoshkar-Ola, Russia
e-mail:
[email protected]
Abstract: This article presents an approach that can reduce the load on a non-relational database management system through the use of tree-structured storage algorithms. The performance of data processing operations varies depending on the data structure. A study of tree structures such as the B+ tree, log-structured merge trees and fractal structures has shown that, using such algorithms, one can obtain processing rate benefits compared with MySQL. In the paper, the LSM tree algorithm is applied to document-oriented databases. The algorithm is described for the CRUD (create, read, update, delete) operations.
1
Introduction
The increasing power of computers and growing volumes of information require programmers to revise data processing algorithms and storage structures to improve performance. As a consequence, a problem of effective access and data processing arises. One current approach for software developers processing and storing large amounts of data is a non-relational database system (MongoDB, Amazon SimpleDB, CouchDB). The data structures in a non-relational database can be represented as hash tables, trees or graphs, which allows data to be stored in a suitable software structure. An advanced algorithm is necessary to combine a high data processing rate with a flexible data model.
2
Description of the algorithm
An increase in the data processing rate in a database is achieved by using indexes. Indexing in relational database management systems is based on B-tree or B+ tree algorithms [1]. The disadvantages of these structures are the complexity of rebalancing the tree when a new value is added to the index and the resource consumption, as the index is stored in RAM. The Log-Structured Merge-Tree (LSM tree) is a data structure [2] that provides a low cost of operation and a high indexing speed for adding and deleting data, in both large and small volumes. An LSM tree consists of at least two similar tree structures, C0 and C1. One component, C0, resides in RAM, and the second component, C1, resides on the hard
disk drive. Each new data-adding transaction generates a log entry with a specific fixing key. There is one consolidation journal for all namespaces, and it represents the sequence of data-modifying operations stored on the hard disk. A log entry is first added to the key tree C0; the size of this object is limited by the available RAM, and it serves as the primary store for records, because the rate of writing and processing data in RAM is much higher than on the hard disk.
Figure 1. Schematic representation of the LSM tree consisting of two components
When the object C0 exhausts free RAM, data from C0 is transferred to the hard disk. As the data is transferred, segments of the trees are merged. The tree structure stored on the hard disk is comparable to the B-tree storage structure but is optimized for sequential disk access. It is common for this data structure to place a parent node before its child nodes, in a certain sequence, in the form of multiple adjacent blocks on the hard disk for effective use.
Figure 2. Scheme of transferring data from memory to the hard disk
Z. Luchinin, I. Sidorkina

The new integrated data is written to clean blocks on the hard drive, so that the old blocks are not overwritten and remain available for recovery, in the event of an accident, during a certain period of time. This scheme allows writing data as fast as the HDD can process it. The consolidation journal also allows recovering the data in the event of an emergency stop of the unit. This type of data recording is implemented in the SSTable technology for storing data on a hard disk. A search operation is carried out using the index of the LSM tree, which ensures fast data retrieval. The data is first checked in RAM, namely in the C0 tree, and then in the C1 tree. Searching in this order is due to the possible presence in RAM of data not yet saved to the hard disk.
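The two-component scheme described above can be sketched in a few lines; this is a minimal illustrative model (the class and constant names are ours), not the actual MongoDB or SSTable implementation:

```python
import bisect

MEMTABLE_LIMIT = 4  # flush threshold; real systems use a size in bytes, not an item count

class LSMTree:
    """Minimal two-component LSM tree: C0 in memory, C1 as sorted on-disk segments."""
    def __init__(self):
        self.c0 = {}          # in-memory component (a real C0 would be a balanced tree)
        self.segments = []    # each segment: a sorted list of (key, value) pairs ("C1")
        self.log = []         # consolidation journal: the sequence of modifications

    def put(self, key, value):
        self.log.append((key, value))      # journal entry first, for crash recovery
        self.c0[key] = value
        if len(self.c0) >= MEMTABLE_LIMIT:
            self._flush()

    def _flush(self):
        # transfer C0 to "disk" as a new sorted segment; old segments stay intact
        self.segments.append(sorted(self.c0.items()))
        self.c0 = {}

    def get(self, key):
        # search order: C0 first (it may hold data not yet flushed), then newest segment first
        if key in self.c0:
            return self.c0[key]
        for seg in reversed(self.segments):
            i = bisect.bisect_left(seg, (key,))
            if i < len(seg) and seg[i][0] == key:
                return seg[i][1]
        return None
```

Here the flush threshold is a small item count purely for illustration; real systems flush when the memtable reaches a size limit and also merge (compact) the on-disk segments in the background, as described above.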
3 Conclusion
The algorithm based on the LSM tree can be used for horizontal scaling. Each node forms a sequence of data sorted by key. The key range of each server is stored on a master server, which allows a request to be directed to the server on which the data is held. This increases the rate of data search and balances the load across the available servers.

References
1. Jeremy Cole. B+Tree index structures in InnoDB. — http://blog.jcole.us/2013/01/10/btreeindex-structures-in-innodb/
2. Patrick O'Neil. The Log-Structured Merge-Tree (LSM-Tree). — http://goo.gl/2OcRQ
About the Method of Training Cost Reduction in Tasks of Symmetrical Objects Detection in Digital Images

S. Malysheva
Vologda State Pedagogical University, Vologda, Russia
e-mail: [email protected]
Abstract: The use of cascades of classifiers based on Haar features can be called one of the most effective solutions to the image classification task. A considerable weakness of the method is its training procedure, which is very costly. The article considers an algorithm for modifying an existing cascade of classifiers so that it can be applied to an object symmetrical to the trained one.
Keywords: cascade of classifiers, reflection of the model features, source image
1 Introduction
The task of object detection in an image is becoming more and more important. Algorithms for detecting the human face and its parts are the most demanded trend in this class of tasks. Algorithms for face detection and identification of anthropometric points are being actively developed, and researchers have implemented a rather large number of solutions [1-4]. The most efficient have proved to be algorithms using neural networks [3] and a cascade of classifiers based on Haar-like features [4]. The technology utilizing a cascade of classifiers possesses both high performance, which allows using it in real-time systems, and high precision, i.e. a high probability of making correct decisions. However, the preliminary procedure of training the classifiers is time consuming and demands extensive hardware resources. Hence a task emerges: to reduce training costs when detecting symmetrical objects by analysing their form and then modifying the existing cascades of classifiers with consideration of the identified features. When solving the face detection task, the number of objects which need to undergo such training is in this case halved. This article is devoted to an algorithm for transforming a cascade of classifiers by altering the detected Haar features, for the detection of objects symmetrical to the trained ones.
2 Terms and Definitions
Let us define a set of basic concepts characterizing a cascade classifiers model.
• Class — a set of uniform objects which have common features.
• Classifier — a rule which allows deciding, on the basis of a number of numerical features of a region, that the image within the given region belongs to some class.
• Haar feature (Figure 1) — a numerical index which characterizes the difference between the weighted sums of intensities of image pixels in different rectangular regions (Haar-like rectangle features). The description of a feature contains the coordinates of several rectangular regions as well as the weight coefficients by which the sum over each region must be multiplied. Initially only upright primitives were used; however, in addition to the standard regions, R. Lienhart proposed using analogous inclined primitives [3].
• Weak classifier — an aggregate of Haar features with threshold values on the basis of which the separation of objects from the background takes place.
Figure 1. Examples of upright and inclined Haar features.
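For illustration, the weighted sum behind an upright Haar feature is usually computed over an integral image, which makes each rectangle sum cost only four lookups. The following sketch uses plain Python lists and our own function names:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1] (extra zero row/column)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixel intensities inside a rectangle, in four lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_feature(ii, rects):
    """Weighted sum over rectangles: rects is a list of (x, y, w, h, weight) tuples."""
    return sum(wt * rect_sum(ii, x, y, w, h) for x, y, w, h, wt in rects)
```

On a uniform image a two-rectangle feature with weights −1 and +1 evaluates to zero, which is a simple sanity check of the computation.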
3 Description of the XML-file Structure for Storing Information on Haar Cascade Classifiers
Data on a Haar classifiers cascade are stored in an XML file with a standard description. The structure of the XML file, together with its description, is presented in Chart 1. The top-level header contains the name of the cascade and the size of the standard window to which an image is scaled in order to determine the presence or absence of the object of interest that the classifier is trained to detect. Then the separate stages of the cascade are described. First, the Haar features used as weak classifiers are described: the positions and sizes of the rectangles within the image whose sums of pixel intensities must be calculated are indicated, together with a weight coefficient for each rectangle. For each primitive, the left and right threshold values are also given. During the operation of the algorithm, if the calculated value is less than the threshold, the left value is added to the sum; otherwise the right value is added. After the description of the primitives there follow the level parameters: a threshold value, the number of the ancestor node, and the number of the following node at the same level, if there is one. When passing through the stages, if the accumulated sum for a level is greater than or equal to its threshold value, the transition to the descendant node takes place. Otherwise the transition to the next node, if any, takes place, or an attempt is made to find a similar node at the higher levels. If none of the higher-located nodes has a neighbour, the value 0.0 is produced, which means that the object to be found is not detected.

Chart 1. Data structure of the XML file with a description of a classifiers cascade:
• Header — the name of the cascade and the size of the standard window.
• Used Haar features, each consisting of:
  – x1, y1, w1, h1 — coordinates of the rectangles for calculating the feature;
  – 0/1 — flag identifying the inclination of the feature;
  – wt1 — weight coefficient of the rectangle;
  – thr — threshold for an individual Haar feature;
  – l_val — left-side value, added to the sum when the value of the feature is less than the threshold;
  – r_val — right-side value, added to the sum when the value of the feature is greater than the threshold.
• st_thr — threshold for the level, with which the sum over the individual primitives is compared.
• n — number of the ancestor node.
• N — number of the next node, if there is one at the same level as the given one.
• Description of the next levels (similarly).
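The stage logic described above can be sketched as follows. This is a simplified model using the field names of Chart 1 (thr, l_val, r_val, st_thr); for simplicity it treats each level as a flat sum of weak classifiers and omits the ancestor/next-node tree traversal of the full format:

```python
def weak_value(feature_value, thr, l_val, r_val):
    """A weak classifier adds l_val when the feature value is below its threshold, else r_val."""
    return l_val if feature_value < thr else r_val

def run_stage(feature_values, weaks, st_thr):
    """Sum the weak-classifier outputs of one level and compare with the level threshold."""
    total = sum(weak_value(v, w["thr"], w["l_val"], w["r_val"])
                for v, w in zip(feature_values, weaks))
    return total >= st_thr

def run_cascade(stages, window_features):
    """A window is accepted only if every stage passes; otherwise 0.0 is produced."""
    for stage, values in zip(stages, window_features):
        if not run_stage(values, stage["weaks"], stage["st_thr"]):
            return 0.0
    return 1.0
```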
4 Algorithm of Presentation of the Trained Haar Cascade for Detection of a Symmetrical Object
The presentation procedure for a cascade of classifiers is considered using the example of identifying the outer corner of an eye. The symmetry of the left and the right corners with respect to the vertical axis is evident. Let us assume that we have a trained classifier for the outer corner of the right eye. The task is to obtain, based on the trained cascade, a cascade which would use reflected features and, correspondingly, would detect the region of the left outer corner of the human eye. The algorithm is based on converting the coordinates of the Haar features while retaining the check sums and threshold values. Let us define the parameters of a primitive description: x — abscissa of the upper left corner of a rectangle that is part of the primitive; y — ordinate of the upper left corner of the rectangle; w — width of the rectangle; h — height of the rectangle; WIDTH — width of the window within which the primitive is located (it is fixed for the cascade and is indicated in the top-level header). Figure 2 shows an example of an inclined rectangle with its coordinates indicated in the above form.
Figure 2. The example of the inclined primitive with coordinates x = 2; y = 3; w = 5; h = 3
Let the coordinates of the reflected primitive be as follows: x' — abscissa of the upper left corner of the reflected rectangle that is part of the primitive; y' — its ordinate; w' — the width of the reflected rectangle; h' — its height. The formulae for calculating the coordinates of the reflected rectangle follow from an analysis of the geometry of the rectangles and the line of reflection, and take the form (1) for the upright primitive and (2) for the inclined one:

x' = WIDTH − x − w;  y' = y;  w' = w;  h' = h;  (1)

x' = WIDTH − x;  y' = y;  w' = h;  h' = w.  (2)
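Formulae (1) and (2) translate directly into code; a sketch assuming the coordinate convention of Figure 2 (the function names are ours):

```python
def reflect_upright(x, y, w, h, width):
    """Mirror an upright rectangle about the window's vertical axis, formula (1)."""
    return width - x - w, y, w, h

def reflect_inclined(x, y, w, h, width):
    """Mirror an inclined (45-degree) rectangle, formula (2): width and height swap."""
    return width - x, y, h, w
```

Applying either function twice returns the original rectangle, which is a convenient sanity check for the reflection.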
5 Experimental Results
To examine the correctness of the reflection method, a classifier has been trained for the right outer corner of a human eye, and its reflection has been performed. For a quantitative evaluation of the correctness of the method, 722 images have been selected from the FERET database for which the precise location of the outer corner of the eye had been indicated.
To assess the quality of the cascade performance, the following approach has been applied:
1. For each image, the deviation of the detected dot from the one set by the operator has been calculated using the Euclidean norm of the coordinate difference.
2. The function of deviation from the precise result has been plotted; it reflects the portion of the images for which the deviation value is not more than the preset one.
Accordingly, the deviation graphs for the trained cascade on the source images and for the reflected cascade on the images mirrored about the vertical axis should be similar. We do not speak of identity of the deviation graphs, because the preliminary processing of the images before detection introduces a probability of error. Figure 3 compares the deviation results for the source cascade on the source images with those for the reflected cascade on the images mirrored about the vertical axis. Based on the data obtained, we can conclude that the method performs correctly. Figure 4 shows the results obtained while testing the method for detection of a symmetrical object by the reflected cascade of classifiers. The method is implemented in C++ with the use of the OpenCV library; the presented drawings show the results of detection of the outer right corner of the eye by the trained cascade on the source image, and of the outer left corner of the eye by the reflected cascade on the image mirrored about the vertical axis.
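The two evaluation steps can be sketched as follows (an illustrative reimplementation, not the authors' C++ code; the function names are ours):

```python
import math

def deviation(detected, marked):
    """Step 1: Euclidean norm of the coordinate difference between detected and marked points."""
    return math.hypot(detected[0] - marked[0], detected[1] - marked[1])

def deviation_curve(detected_pts, marked_pts, max_dev):
    """Step 2: fraction of images whose deviation does not exceed each integer threshold."""
    devs = [deviation(d, m) for d, m in zip(detected_pts, marked_pts)]
    n = len(devs)
    return [sum(1 for v in devs if v <= t) / n for t in range(max_dev + 1)]
```

Two such curves, one per cascade, can then be plotted on common axes and compared for similarity, as is done in Figure 3.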
Figure 3. The comparison of the object detection results by the trained cascade and of the mirrored object by the modified cascade of classifiers
Figure 4. Examples of detected dots obtained while testing the method
6 Conclusion
A new method of obtaining Haar features for an object symmetrical to the trained one has been presented. The principal characteristic of the method is that it ensures detection of symmetrical objects while avoiding a lengthy and costly training procedure.

References
1. Zheltov S.Yu., Vizilter Yu.V., Ososkov M.V. A real-time system for recognizing and visualizing characteristic features of the human face on a personal computer using a web camera // Proc. of the 12th International Conference on Computer Graphics and Machine Vision Graphicon'2002. Nizhny Novgorod, 2002. Pp. 251-254.
2. Yin L., Dasu A. Nose shape estimation and tracking for model-based coding // Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing. May 2001. Pp. 1477-1480.
3. Lin S., Kung S., Lin L. Face recognition/detection by probabilistic decision-based neural network // IEEE Trans. Neural Networks. 1997. V. 8, No. 1. Pp. 114-132.
4. Viola P., Jones M. Rapid object detection using a boosted cascade of simple features // Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. December 2001. Vol. 1.
5. Lienhart R., Kuranov A., Pisarevsky V. Empirical Analysis of Detection Cascades of Boosted Classifiers for Rapid Object Detection // Microprocessor Research Lab, Intel Labs, USA, 2002. Pp. 1-8.
Kuprijanov A.A., Melnichenko A.S., Suganova U.V., Sandalova A.V., Butakov N.A.
Architecture of the virtual business environment of geographically distributed automated systems on the basis of the provisions of the DEMO methodology

Kuprijanov A.A. (Federal Research-and-Production Centre Open Joint-Stock Company «Research-and-Production Association «Mars»», Ulyanovsk, [email protected])
Melnichenko A.S. (Ulyanovsk State University, Ulyanovsk, [email protected])

Abstract. The paper presents the problem of forming a virtual business environment for the design and development of geographically distributed automated systems (AS), and its solution on the basis of the DEMO methodology.
1 Introduction
The design and development of AS is one of the most complex and unpredictable kinds of activity. To address it there exist, to date, a number of innovative ideas and concepts, including system and software engineering, knowledge management, virtual enterprises, Internet technologies, service-oriented technologies, multi-agent systems, ontological engineering and other innovative directions of computer science. Research in these directions is currently highly relevant. During 2009-2013 the authors performed a series of studies [1-4] aimed at developing the concept of a virtual organization for the design and development of AS. The materials of this report reflect the results of one of the stages of research on this topic and contain:
• justification of the transition from a virtual enterprise to a virtual business environment;
• a conceptual model of the virtual business environment;
• the DEMO methodology in the development of the architecture of the virtual business environment;
• the architecture of the virtual business environment;
• the features of the virtual business environment;
• the structure of the virtual business environment.

2 Justification of the transition from the concept of a virtual enterprise to the concept of a virtual business environment

What, first of all, is a «business environment», and how does it differ from such concepts as «virtual platform», «design environment», «virtual store» and «virtual organization»? Consider several important clarifications of these concepts, based on the «systems approach» presented in [5]:
1. In the theory of the systems approach, the word «system» is used to refer both to objects of activity and to their structural concepts.
2. Within the systems approach, the term «system», from the point of view of the object of activity, is understood as its abstract representation.
3. From this point of view, the system as an abstract representation of the object of activity is its model representation from certain positions.
4. The specificity of carrying out activity with a system (as with an object of activity) is expressed in the fact that any system should be considered in the context of its environment, with which the system interacts, where the «environment of system S», in its constructive expression, is «all potentially possible systems in which system S exists or should exist».
5. In the general case, within the systems approach it is difficult to discover «all» the systems whose analysis should lead to a constructive representation of «the environment» of system S; this leads to presenting the environment as a set (not a system) of its parts.
6. A specific abstraction Si, used as a model Mi, singles out a specific dictionary in a natural or natural-professional language.
7. Within the framework of this dictionary, the environment of system Si is identified and specified.
The remarks and explanations given in [6] for the DEMO methodology, presented in Fig. 1, confirm this position:
1. In accordance with the DEMO methodology, a distinction is made between the external definition of a system (its function) and its internal device (its construction). Architectural modeling is nothing other than a way to display the link between the two, and only the model can determine and show how the function is associated with the construction.
2. The problem is that in real life these are considerably confused: a) engineers call the function and the construction by the same name, after which the two are regularly confused in typing, i.e. assigned ontologically to the wrong class, and then one wonders why operations that can only be performed on the construction are applied to the function, and vice versa; b) a class (a drawing), a class (a catalogue item) and an instance (an iron detail, including one installed at the place indicated in the drawing) are likewise confused.
3. Hypothesis: to discuss any system one must have at least two ontological descriptions, one for the description of its functionality and the other for the description of its construction.

3 The conceptual model of the virtual business environment of AS projects

Next we introduce some definitions that link the above provisions with the concepts of «business environment», «virtual platform», «design environment», «virtual store» and «virtual organization» used in the design and development of AS.
1. The environment in which the activity of designing and developing AS takes place should represent the whole possible set of parts of the AS, parts of the control environment and parts of the technological environment in which the AS emerges, exists or should exist. From the point of view of the co-owners, the AS as an abstract representation of an object of activity is its model representation from certain positions (views): a view from the position of project management, a view from the position of the technological preparation of the project, and a view from the position of the results of the project, i.e. the structure of the AS. A specific abstraction of the AS, used as a model Mi, singles out a specific dictionary in a natural or natural-professional language, which, in essence, is the ontology of the AS project, including the ontology of the project management of the AS, the ontology of the technological preparation of the project, and the ontology of the AS itself (the results of the planning and development of the AS).
2. A virtual organization (virtual business environment) has an input ontological model; its output forms the innovation model. Figure 1 presents the conceptual model of the virtual business environment of AS projects [2, 3], where: the ontological model is a list of what the interested persons (representatives of the customer, the developer and the user) have seen, and of the objects and their aspects which they consider important in meeting the challenges ahead; the innovation model is a model of the process associated with transforming the results of research and development, information and knowledge into a new product or an improved technological process; the virtual business environment of AS projects comprises the methods, techniques, tools and trained personnel applied in decision-making processes in a number of AS projects; the virtual store of the business environment is the basis for forming and accumulating the data, knowledge, documents, templates, methods and techniques, supported by tools and trained personnel, used during the design and development of the AS.
[Figure 1 blocks: the input ontological model and the implementation ontological model, linked through the virtual business environment and the virtual store of the business environment.]
Fig. 1. The conceptual model of the virtual business environment of AS projects.

4 The DEMO methodology in the development of the architecture of the virtual business environment

The organizational ontology and the DEMO (Design and Engineering Methodology for Organizations) methodology [5] were developed by Jan Dietz at the Delft University of Technology (Netherlands). The DEMO methodology has been applied in reorganization projects of a number of large organizations, including:
• Royal Dutch Airlines (KLM);
• ING Bank;
• the state agency for transport and water management, and several other organizations.
Its theoretical grounds are:
• the systems approach;
• the ontology of facts (Bunge, Wittgenstein);
• the theory of communicative action (Habermas).
This methodology forms the basis of the VISI standard, which is mandatory for use in government agencies of the Netherlands and in academic projects in different countries. The DEMO methodology separates the concepts of «function» and «construction»: the «function» is the behavior of the system from the point of view of the user (the «black box»), while the «construction» is the structure of the system from the point of view of the developer or manufacturer (the «white box»).
Fig. 2. The architecture of a system according to Jan Dietz.
Based on the DEMO methodology it becomes clear that:
1. A system component (a holon, see Fig. 3) implies the presence of the whole system and supports the part-whole principle for functional physical objects.
2. The function (external behavior) implies the presence of an interested person, who defines the purpose of the component in the whole system.
3. The construction is a constructive solution, the design of the project.
4. The mechanism is the part that generates the function; the behavior inside a part that does not contribute to its functions is not considered.
5. The physical object is installed at a place in the construction (the design solution).
6. The life cycle is a sequence of models, followed by the installation of parts.
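Items 1-3 can be illustrated with a minimal sketch of a holon that carries both descriptions; the class names are ours and are not part of DEMO:

```python
class Function:
    """External behavior: what the interested person sees (the «black box»)."""
    def __init__(self, purpose):
        self.purpose = purpose

class Construction:
    """Internal device: how the system is built (the «white box»)."""
    def __init__(self, parts):
        self.parts = parts  # part-whole principle: the whole contains its component holons

class Holon:
    """A system component carries both descriptions; only the model links them."""
    def __init__(self, function, construction):
        self.function = function
        self.construction = construction

    def realizes(self):
        # the link showing how the function maps onto the construction's parts
        return (self.function.purpose,
                [p.function.purpose for p in self.construction.parts])
```

Keeping the two descriptions in separate classes makes the typing confusion of item 2 above impossible: an operation on the construction cannot be applied to the function by mistake.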
Fig. 3. A system component (holon) implies the presence of the whole system and supports the part-whole principle for functional physical objects.
5 Conclusions

From the point of view of practice, the «systems approach» and «systems engineering» are regulated by the ISO/IEC 15288 standard, where systems engineering is the industrial engineering of enterprises and organizations. The systems engineering standards organize many principles, methods, approaches, tools and regulations, binding them to the stages of the life cycle of complex systems of various purposes. However, merely following the systems engineering standards is not enough: although they are easy to read, their sensible application requires preliminary implementation work, including work that is not regulated by the standard, such as the following:
• The systems engineering standards mainly answer the question of what to do in the processes of designing and developing systems. The question of how this should be done in those processes remains open (see ISO/IEC 15288).
• The systems engineering standards do not determine ways of forming the «business environment» defined in the same standard (ISO/IEC 15288).
• The systems engineering standards do not define ways and methods of accumulating and using the intellectual potential (resource) of organizations as a part of the «business environment».
• The systems engineering standards do not draw a distinction between «function» and «construction».
• A significant addition to the ISO/IEC 15288 standard is the VISI standard, which defines the DEMO methodology and specifies what a «function» and a «construction» are.
6 Architecture of the virtual business environment of AS projects

The architecture of the virtual business environment of AS projects and the functions of its main components (holons) are shown in Figures 4-7. The emphasis in the presented model of the virtual business environment of AS projects is placed on the features of the virtual business environment, including the intellectual tools (agents and agencies) for decision support of the stakeholders. The intelligent toolkit provides the extraction of specialists' knowledge in the design and development of AS for use in new system projects. The functions and composition of the intellectual agencies are represented in the table. The structure of the virtual business environment is presented in Fig. 8.
Fig. 4. The architecture of the virtual business environment.
Fig. 5. Architecture of the «Agency of technological preparation of design and development of the AS».
Fig. 6. Architecture of the «Agency for design and development of the AS».
Fig. 7. Architecture of the «Agency for management of design and development of the AS».
[Fragment of Fig. 8, the structure of the virtual business environment, showing: the «Database» agent, the «WEB …» agent, the «Virtual data store» agent, the «Applications (standard …)» agent, the «Archive of project documents» agent, the «Corporate base …» agent, the «Agency for design and development of the AS», the «Agency of technological preparation of design and development …», and the «Agency for management of design and development of the AS».]