Title:
Toward Robust and Quantifiable Automated IFC Quality Validation
Authors: Wawan Solihin, Prof. Charles Eastman, Yeong-Cheol Lee (Georgia Institute of Technology)
Corresponding Author: Wawan Solihin Ph: +1 (404) 944 2653 or +65 96738091 Email:
[email protected]
Abstract:
The use of IFC as a standard format in exchange processes has been increasing as the industry begins to address the need for interoperability. A persistent problem with the use of IFC is the quality of the product models. Much effort has been directed at this issue in the form of certifications that aim to ensure a minimum quality of exchange requirements for an IFC file. However, even with the recent increase in awareness and effort to improve the quality of IFC files, the process is still tedious, time consuming, and dependent on manual effort by experts. Even where those resources are available, there are currently no clear and quantifiable definitions of what exactly constitutes a good quality IFC file. Without such measures, the adoption of IFC as a standard exchange format will be hindered and the industry will be left with only restricted, mostly vendor-dependent alternatives. This paper sets out to address the issue in two aspects: first by defining what a good quality IFC model is, and second by proposing rules that can be automated to measure with confidence the completeness and correctness of an IFC model. These two aspects serve as a starting point toward more comprehensive and quantifiable measures of the quality of an IFC file. The proposed goal is represented by well defined and well documented rules collected from various projects the authors have worked on over the years. The rules cover all known aspects of IFC, including geometry, which currently requires mostly manual validation. The paper also proposes a method to address import validation, which is underdeveloped compared to export validation.

Keywords: IFC; Quality; Automatic validation; Export test; Import test
Quality of an IFC file has been a persistent concern in the architecture, engineering, construction, and owner-operator (AECO) community. We have only begun to address this issue and to develop better ways to assess the quality of an IFC file. Concerns about quality range from validating that an IFC file is syntactically correct, to evaluating whether a model is well-formed in terms of being testable, to more versatile types of checks such as satisfying programmatic requirements or building code checks. Current uncertainty about the quality of IFC models has prevented, or at least created a psychological barrier to, wholehearted adoption of BIM and IFC among end users. This calls for an urgent need to define robust and rigorous test criteria, processes, and tools. However, defining and validating data quality is not easy, since it should take into account multiple characteristics of data, including syntactic well-formedness, consistency across multiple redundant representations, integrity of translation from the sources, the accuracy of derived data, and others. Reliable and automated means to assess all of these aspects are important for all organizations that need to rely on model data. Model data in building and construction involves structured data covering systems (structural, electrical, piping, and others), geometry, personnel, processes, and schedules; each of these has its own well-formedness conditions. This paper aims to lay the groundwork toward achieving well defined,
unambiguous, and precise criteria for IFC quality testing. It starts by defining the most fundamental requirements of what a good IFC model is. The authors present this work as a basis for focused discussions and further research to incrementally build increasingly complete and robust testing criteria. We also consider robust and trustworthy IFC model testing facilities that can be readily accessible to the general public.
1 Current Issues with IFC Model Exchange
The quality of IFC models in interoperability scenarios has been discussed for many years. Laakso [1] provided a well-ordered historical overview of the development of IFC as a standard and its various issues. Amor [2] discussed the issue and compared it with similar issues in other industries, namely healthcare and STEP-based manufacturing. He proposed a combination of processes, best practices, and tools to accomplish what other industries have achieved in managing the challenges of interoperability. Amor's paper was the first to explicitly propose tools able to perform the geometry-based comparisons used in the STEP-based manufacturing domain. Similarly, Lipman [3] discussed the same issues in more detail, highlighted examples from various IFC conformance tests, and made comparisons with similar conformance testing in other domains, such as ISO AP 227 (Plant Spatial Configuration) from the plant domain. Kiviniemi [4] and a report from IAI Denmark [5] highlighted issues with past IFC certification processes and the need to define a more robust certification regime that is understandable by end users and more predictable in the real world by including more realistic test cases.
To address model exchange use cases using IFC, it is important to look at the typical or expected user workflow. Figure 1 shows a simplified diagram of the typical model exchange workflow, with three general steps that may each contribute to the quality of the model. Unfortunately, most of the steps are simply black boxes to the users (Fig 2). In most cases, the recipient sees errors that may come from any of the previous steps, with almost no control to influence the outcome. Even for software developers, the burden often falls on the importing application, since errors are mostly reported at the recipient's end. While software developers have access to one of the black boxes, e.g. the importing application, tremendous effort and time are usually required to identify the real source of errors. The certification process in many ways reduces the number of issues and greatly improves the quality of the exchanged models. However, several issues are not well addressed by certification:
1. Certification is still a relatively closed-loop process. It provides a higher level of confidence in the exchange model, but it does not provide a consistent guarantee, as it can only cover the limited exchange cases within the combinatorial set used in the certification process. It still tilts toward certifying the capability of a software application to export or import IFC files.
2. The criteria for quality assessment within the certification are still not fully well-defined. The current criteria are either set arbitrarily by the certifying parties or collected from experience accumulated in previous certification activities. There has not been enough systematic or theoretical support for more rigorous test criteria.
3. Not all errors are equal in terms of their downstream impact on the receiving application within the exchange workflow. Many criteria defined as part of domain-specific MVDs (Model View Definitions) can be treated using statistical significance. For example, in the domain of rule checking, many criteria require combinations of multiple properties from the model to derive a new property. One such example is deriving the envelope of a building using object geometries, a property that specifies whether a building element faces external space, and the type of the building element. When the information is incomplete or only partially complete, the impact may not be severe: it may merely prevent such a check from completing or giving a meaningful report, rather than causing a complete failure or, worse still, a wrong result.
4. Current certification still relies on a few experts to manually evaluate the model, especially for geometry-related tests that are difficult to automate. This significantly reduces the comfort of end users who work with IFC-based exchanges, as they lose control of the process and of the quality of the exchange model, without good support for fixing issues that are often detected far downstream in the workflow.
[Figure 1: exchange workflow diagram. BIM Tool #1 (Modelling, Export) passes an IFC Exchange Model to BIM Tool #2 (Import); each step is a potential source of errors]
Figure 1 - Exchange Scenario Involving Building Model using IFC

[Figure 2: pipeline from User (sender) through BIM Tool Internal Modelling Capability, IFC Export, IFC file, and IFC Import to the Imported Model at User (recipient); errors arise at every black-box step]
Figure 2 - IFC Export and Import processes are mostly black boxes to the end users. Errors accumulate and are usually detected on the recipient side, potentially including errors from all three exchange steps
Figures 3 - 6 show several examples from actual reports of model exchange issues between various BIM tools and certifications. In each case, a tremendous amount of effort and expertise had to be invested to identify the issues and to fix them. All of them involve IFC certified applications.
Note: application name has been erased
Figure 3 - Example 1: Report by consultant to the end user on exchange issues between two IFC certified applications and the analysis of the issues
Figure 4 - Example 2: Report highlighting exchange issues where objects are missing in the importing application. The analysis uses a viewer as a third application to help verification
[Figure 5 panels: Summary report (left), Detailed report (right)]
Figure 5 - Example 3: Compiled report from GSA Concept Design 2010 certification. The summary report was compiled by hand based on the detailed report shown on the right, plus Solibri model checker
In this paper, we set out to address core requirements for a robust exchange workflow. In defining the core requirements, we focus on preservation of the model, i.e. the model must be identical, or identically mapped using different representations. In both cases, measured from the end user perspective, the model is preserved. We exclude most domain-specific MVD requirements, as they have fuzzier criteria to which ranking or statistically significant measurement may be applied. In simple terms, we want to define a measure that gives users confidence that the model exported and then imported is the "same". The aim is to provide open, well-defined, and well-accepted criteria for a robust exchange measurement, and to encourage open source development of a tool that checks the criteria, effectively removing the closed-loop nature of the current certification process. We consider this critical, since the availability of such a tool will ensure that quality validation does not occur only within the certification process but is accessible to end users for every exchange. This increases confidence in the robustness of model exchange and minimizes the reliance on experts to analyze errors or potential errors.
We start the paper by reviewing the evolution of IFC testing methodologies, followed by defining the quality dimensions for IFC exchange scenarios and a classification system that standardizes how we catalog the criteria and allows extension over time. We close the paper with an overview of additional criteria critical for robust measurement of IFC exchange that require additional information and steps external to the IFC file.
2 Evolution of IFC Testing Methodologies
Prior to the Coordination View 2.0 certification, IFC model testing was done mainly through manual and visual inspection. Implementers contributed to the development of test cases, exchanged the test cases with each other, and manually and visually inspected the results. Certification was usually awarded at the certification workshop, where pairwise inspections were done between two applications. The process was qualitative, with emphasis on the appearance of correctness in the results of export and import. Kiviniemi [4] highlighted the weaknesses of this approach, i.e. human interpretation allows for human accommodation of "close enough" errors. The appearance of correctness was not necessarily proof of actual correctness of the model data. Another issue was that reporting varied by individual. While the process improved over time, indicated by the increase in the number of test cases (around 32 test cases for IFC2x certification, rising to 264 test cases for IFC2x3 Coordination View 1.0), the certification still fell short of assuring customers that IFC was a robust way to exchange building models [4].
Beginning with the IFC2x3 Coordination View 2.0 (CV 2.0) certification, a conscious effort has been made to address this lack of assurance. The new certification process puts more emphasis on the quality of the IFC produced by a specific application. It follows a practice similar to that of the plant domain with STEP AP 227 [3]. As part of the certification requirements, the certification committee documented the complete Model View Definition (MVD) and made it available on the buildingSMART Tech website [6]. The MVD is used as a reference to define test cases, with precise exchange requirements and instructions for various objects that software developers have to re-create in their native applications and export as IFC files. The files are subjected to a self-service test through an online application, GTDS [7], which checks the resulting IFC files using three categories of tests: IFC schema syntax and WHERE rules, rules from implementer agreements, and a limited number of simple semantic checks based on the MVD. The rules are developed using specifications based on mvdXML [8]. However, the final step in the certification process still relies on manual human checks to verify the IFC file against the reference model, and to evaluate and approve exceptions, i.e. a specific application's lack of support for certain entities, relationships, and properties in the reference model. In addition, geometry-related verification is still done manually. The test regime and validation criteria are well developed for export cases. For import, there is currently very minimal support for automated and rigorous checks, due to the complexity of proprietary mappings and the lack of consistent mapping implementations in the native authoring tools.
Separately, similar processes using Exchange Requirements, MVDs, and automated tests combined with manual tests were developed and used for the GSA Concept Design BIM 2010 (GSA CD BIM 2010) by Digital Alchemy. GSA CD BIM 2010 is an extension of the earlier validation process from 2007, the GSA Spatial Program Validation. GSA CD BIM 2010 includes the original 2007 spatial program validation, energy performance analysis, circulation/security rules analysis in courthouse design, and quantity takeoff/cost estimating [9-11]. Another independent validation, the COBie Challenge driven by the US Army Corps of Engineers, concerns BIM for Facilities Management handover. It follows a similar process, with a self-service test tool made available (the COBie Toolkit) and workshops scheduled regularly, twice a year on average [12].
Even with these developments, validating that the data in an IFC model is syntactically well-formed remains an open question, because the conditions defining well-formedness or syntactic correctness are not well defined. A survey of research in this area turns up few publications. Those available either suffer from the same lack of rigor in defining well-formedness or merely repeat what has already been done. The lack of definitions and rigorous criteria restricts testing to only small-scale models. Unfortunately, small-scale models often fail to represent real-world situations with large, complex models and complex design requirements. Testing such models becomes impractical unless robust automation is applied.
From the authors' involvement in BIM model testing for programmatic requirements, in standardizing the development of Model Views [13], and in rule checking projects [14, 15], there is an urgent need for well-defined methods to assess the coverage and correctness of automated rule checking based on a building model. This paper is an early effort to develop criteria that allow quantitative measurement of the level of adherence of an IFC file to a specific model view, where the model view is defined at different levels of the process: syntactic and explicit constraints (EXPRESS WHERE rules and explicit model view semantic restrictions, for example). The paper aims to provide a starting point toward that goal by outlining initial criteria meant to lead to comprehensive well-formedness, and by providing itemized test requirements developed from schema conformance, existing testing or certification programs, and comparisons with other domains [2, 3]. The criteria are meant to cover all aspects of the IFC schema currently in use, including geometry-related tests. Since the main focus of this paper is to identify standardized, measurable, and quantifiable test types and the test cases that assess them, it does not deal with the actual implementation and platform tools used to implement the tests.
3 Defining data quality dimensions for the IFC exchange scenarios
Data quality has been a general topic in the Information Systems domain. Wand and Wang laid an ontological foundation for defining data quality dimensions [16]. In their paper, they focus on intrinsic data quality and define four general data quality dimensions, shown in Table 1.
Table 1 - Data Quality Dimension as Defined by Wand and Wang [16]
Data Quality dimension | Description
Complete | Refers to improper mapping: certain representations fail to map, usually resulting in missing representations
Unambiguous | Data can be mapped back to its original state
Meaningful | Data can be interpreted meaningfully
Correct | Data should be mapped back to the right state
In this paper, only intrinsic data quality is in scope; we do not deal with the details of internal representation or the physical appearance of the data [16]. Various other works extend this framework to cover context-specific categories and their respective dimensions. A strength of [16] is that it categorizes and summarizes many quality dimensions into just four. For example, Strong, Lee, and Wang define four categories: Intrinsic, Accessibility, Contextual, and Representational [17]. Several other works address data quality issues from broader perspectives that include the consumers of the data; Batini [18] and Scannapieco [19] provide overviews of the various approaches.
In general, data quality is an open system: quality needs to be defined relative to some explicit criteria. It is generally recognized that the specification of an exchange is defined by a model view [ref]. Thus exchange quality can be defined by two types of information: (1) the issue(s) that motivate a rule and its public health or safety motivation, in terms of knowledge regarding standards of representation for measuring the rule, for example the Concrete Slump Test, ASTM C143, which defines the procedures for carrying out the measurement [20, 21]; and (2) the degree of realization of an explicit subset of the absolute requirements. Thus the range of quality includes the accuracy of measurement (especially allowed deviations of geometric measurements, including level of detail and, eventually, surface accuracy where relevant).
To measure the data quality of IFC exchange, we adopt and adapt the framework that Wand and Wang [16] proposed for intrinsic data quality, together with part of the extension covering the domain-specific or contextual perspective from Strong, Lee, and Wang [17]. Since IFC exchange focuses on a very specific area, we modify the data quality dimensions to suit our needs. We cover two categories of data quality: the intrinsic quality of the IFC model and the domain context of representational quality. Table 2 describes them in more detail:
Table 2 - Adapted Data Quality Dimensions for IFC Quality Criteria
Category | Data Quality dimension | Description
Intrinsic | Complete | There is no loss of information within the agreed scope of the MVD
Intrinsic | Meaningful | The data should be semantically consistent. For example, if an object is said to be contained within a particular building story, it must be spatially located in that story.
Intrinsic | Correct | Data must be mapped correctly: it must map to a valid state of the original data, and the right one. This dimension fails when the data maps back to a valid state but a wrong one.
Representational | Conformance | Data must comply with the set of agreed standards, which usually either extend the schema or refine criteria for a specific MVD requirement. For example, the IFC Coordination View requires all building elements to have geometry, refining an optional attribute in the IFC schema.
Representational | Unambiguous | IFC exchange allows certain flexibility for different mappings by different applications. The mapping is sometimes not one-to-one, in terms of both entity representation and shape representation. In both cases, however, the mapping should not be ambiguous, i.e. it can be traced back to the original data.
Based on the above data quality dimensions, we adopt a simple ratio measure [22] to quantify the overall quality score, or confidence, of an IFC exchange model (equation 1). This fits our purpose well, since the goal for an IFC data exchange is very high confidence: ideally the score should be 100%, i.e. a perfect translation, even though in practice a small tolerance may be accepted, because some applications may not be able to achieve 100% without major modifications to their internal native models. The near-100% requirement is especially important because building designs can influence the safety of both the occupants and the workers who build or maintain the building [23-25].

C(model) = Σ T(locally valid) / Σ ( T(locally valid) + T(locally invalid) )     (1)
Where:
C is a function that reflects the confidence in the quality of the model
T is a function over all the relevant tests that discriminate model correctness from inaccuracy or errors in data integrity. Such tests employ a multi-valued logic rather than a Boolean one: each local test may result in one of three outcomes: locally valid, locally invalid, or missing data/not testable.
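The ratio in equation (1) can be sketched in code. The following is a minimal illustration, not taken from the paper; the function and outcome labels are hypothetical. It computes the confidence score from a set of local test outcomes, excluding not-testable results from both the numerator and the denominator:

```python
from collections import Counter

# Possible outcomes of a local test, per the three-valued scheme above.
# These labels are illustrative, not a standardized vocabulary.
VALID, INVALID, NOT_TESTABLE = "valid", "invalid", "not_testable"

def confidence(results):
    """Compute C(model) = sum(valid) / sum(valid + invalid).

    `results` is a list of per-rule outcomes; NOT_TESTABLE outcomes
    (e.g. missing data) are excluded from numerator and denominator.
    """
    counts = Counter(results)
    tested = counts[VALID] + counts[INVALID]
    if tested == 0:
        return None  # nothing testable: confidence is undefined
    return counts[VALID] / tested

# Example: 8 rules pass, 1 fails, 1 cannot be tested.
outcomes = [VALID] * 8 + [INVALID] + [NOT_TESTABLE]
print(confidence(outcomes))  # 8/9, i.e. about 0.889
```

Note that excluding not-testable outcomes is one possible policy; an implementation could instead report them separately so that missing data lowers a coverage metric rather than the confidence score.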
In this paper, we outline our main contribution: defining and categorizing rules that measure the various data quality dimensions. The list of rules should be comprehensive enough from the start to ensure acceptance of the measurement by all parties in the exchange process. In the next section we lay out the details of the rules that will help us measure the IFC exchange model.
4 Defining and classifying the rules for measuring the quality of IFC data
In this research work, we look into the issue of IFC file quality with the aim of providing a comprehensive first list of rules and a general classification for them. With such a classification, we can share the effort of covering the wide range of validation rules needed today and of developing the longer-term support needed for strong methods of determining model quality. The needs are defined through collections of rules required in the short term, prior to research on algorithmic quality assessment. These rules are derived from well-known and well-defined sources, usually the schema, from practical experience gained through years of involvement in BIM software development and the certification processes, and from the logical consequences of domain-specific concepts. We treat export first, and then import in a dedicated section later in this paper, since they involve quite distinct requirements. When dealing with export validation, the highest level question that needs to be answered is:
Given a drawing or other representation of a building model, can the system export a target model in IFC that accurately represents the intent?
Export cases are more direct than import, since an export model has a publicly defined mapping representation. This allows a common set of tests to be applied across all application translators. Import model testing needs to recognize the varied native model structures of which each BIM application is made. Access to the native model involved in the import translation is required for tests that measure completeness.
There are two broad groupings of rules: one that is solely applicable within the IFC file, i.e. all test rules work with information contained inside the IFC file (self-contained), and one that requires extra information (external dependency). The self-contained tests are useful for validating the consistency of data already exported from the originating application. However, they cannot capture errors that may occur during export translation by the authoring tool. Additional information is needed to verify completeness, i.e. that there is no "missing information in translation" or "error during transmission". We discuss this issue in more detail in the upcoming section "Tests with Additional External Information", in particular the first two subsections, which deal with Export Summary and Geometry Mass Properties and Sampling Points.
As the first step toward a rigorous validation of the well-formedness of an IFC model, we collect the various rules that need to be tested to ensure IFC model quality. There are a large number of rules known so far: formal rules in the form of schema validation, such as those typically applied by IFC toolkits or by a dedicated validation tool such as Expresso (http://sourceforge.net/projects/exp-engine/), and those already used in the certification processes; and informal rules practiced in the QA processes of IFC-supporting applications and in the visual check portion of certification. For long-term support, it is expedient to organize the rules into a well-defined classification with rule categories and subcategories. This will help us organize and manage additional rules added over time. In defining the categories and sub-categories, we considered a closed-structure ontology but realized that there is no definition of completeness for a classification system. Instead, we assembled the rules into a functional listing with associated data quality dimensions. We define one main classification and three minor classifications: source, subdomain, and the current degree of difficulty of implementing the automatic rule check. The guidelines for assigning the degree of difficulty are outlined in [26]. The classifications are independent but complementary, allowing users to view the rules from various perspectives.
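As a sketch of how such a catalog might be represented, the record below pairs each rule's main classification code with the three minor classifications (source, subdomain, implementation difficulty). All field names and example values are illustrative assumptions, not part of the paper's formal scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ValidationRule:
    """Hypothetical catalog record: one main classification code
    plus the three minor classifications described in the text."""
    code: str          # e.g. "I.B.2.1" from the main classification
    dimension: str     # data quality dimension: Correct, Meaningful, ...
    description: str
    source: str        # e.g. "schema", "certification", "practice"
    subdomain: str     # e.g. "syntax", "geometry", "topology"
    difficulty: int    # current implementation difficulty, e.g. 1 (easy) to 3 (hard)

rules = [
    ValidationRule("I.A.2", "Correct", "Schema WHERE rule check",
                   "schema", "syntax", 1),
    ValidationRule("I.B.2.1", "Correct", "Face is not planar",
                   "schema", "geometry", 3),
]

# The minor classifications let users slice the catalog from other
# perspectives, e.g. viewing only the geometry-related rules:
geometry_rules = [r for r in rules if r.subdomain == "geometry"]
print([r.code for r in geometry_rules])  # ['I.B.2.1']
```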
4.1 Schema Data Structure (data quality dimension: Correct)

To begin with, we consider the intrinsic quality of the IFC data by validating its correctness, enforcing rules that are defined in the relevant version of the IFC schema [27]. This includes syntactic checks, validation of entity instance types, valid enumeration values, and geometry- and topology-related rules. Most of the rules, except those related to geometry and topology, are covered either by IFC toolkits or by certification rule sets. We treat geometry as an equally important quality aspect that has to be integrated into automatic rule validation. Geometry rules include the correctness of the geometry entities as well as the informal propositions specified in the schema, which are derived from ISO 10303 part 42 [28]. Topology-related rules include crucial tests based on the Euler formula, an important criterion that saves users from unpredictable outcomes. The Euler formula gives a quick indication of whether a geometry is a valid manifold solid. These sets of rules are grouped into one class named schema data structure validations.
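As an illustration of the kind of quick topological screen the Euler formula enables, the sketch below applies the Euler-Poincaré relation V - E + F = 2(S - G) for a closed B-rep with S shells and genus G. This is a plausibility check only: passing it is necessary but not sufficient for a valid manifold solid.

```python
def euler_check(num_vertices, num_edges, num_faces, num_shells=1, genus=0):
    """Quick manifold-solid plausibility check via the Euler-Poincare
    formula V - E + F = 2(S - G) for a closed B-rep with S shells and
    genus G. Passing is necessary but not sufficient for validity."""
    return num_vertices - num_edges + num_faces == 2 * (num_shells - genus)

# A box: 8 vertices, 12 edges, 6 faces -> 8 - 12 + 6 = 2, consistent.
print(euler_check(8, 12, 6))   # True
# A box missing one face fails the closed-solid check.
print(euler_check(8, 12, 5))   # False
# A toroidal solid (genus 1) satisfies V - E + F = 0, e.g. a 4x4 quad torus.
print(euler_check(16, 32, 16, genus=1))  # True
```

A real validator would derive V, E, and F by traversing the IfcFacetedBrep face/loop/vertex structure; the counts here are supplied directly for brevity.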
Table 3 lists the more detailed functional classifications that measure the Correct data quality dimension. The heading of the classification for this dimension is schema data structure validations (Category I, rows 1 - 11), which has subclasses dealing with syntactic checks (rows 2 - 6), including general schema structure (row 3), schema WHERE rules (row 4), valid entity checks (row 5), and valid enumeration checks (row 6). The geometry tests (rows 7 - 9) include the informal propositions defined in the schema (row 8) and geometry integrity checks based on the definitions of the various geometries in the schema (row 9). Besides geometry, related tests on topology are also important (rows 10 - 11). A long list of rules falls into the geometry and topology tests (see Table 6 under I.B and I.C). Topology here concerns relationships at the level of data structure and geometry, for example the relationships between faces, edges, and vertices. In Table 3, we give examples of the rules where appropriate; the list is not meant to be exhaustive but to give an overview. Table 6 shows the complete listing of rules that we know to be relevant.
4.2 Semantic tests (data quality dimension: Meaningful)
Under this dimension, data quality is evaluated based on the expected "behavior" of an object under its definition. Into this category fall all data that are structurally correct but do not make sense from a behavioral point of view. For example, if a type with a specific name and attributes is defined, having two separate type instances with identical data, or with only minor variations, is not meaningful (case I.D.4.3 in Table 6). Another example is a containment relationship. From the containment structure point of view, as long as an object is contained in a container object, the structure is valid; but viewed from a spatial perspective, it does not make sense if the object's geometry is physically disjoint from the container (I.D.5.3 in Table 6). Many of the cases in this category are related to topology. This is distinguished from the earlier dimension in that topology here refers to topological relationships between entities and their spatial relationships.
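The disjoint-containment example (I.D.5.3) can be approximated with a simple vertical-extent test. The sketch below is a hypothetical simplification that compares only the element's z-range against the storey's elevation band; a real check would use the full 3D geometry:

```python
def within_storey(elem_z_range, storey_elev, storey_height, tol=0.01):
    """Semantic containment check (cf. case I.D.5.3): an object assigned
    to a building storey should physically lie within that storey's
    vertical extent. elem_z_range is the (min_z, max_z) of the element's
    geometry in metres. A tolerance allows slight slab penetration."""
    lo, hi = storey_elev - tol, storey_elev + storey_height + tol
    min_z, max_z = elem_z_range
    return min_z >= lo and max_z <= hi

# A door (0.0 m to 2.1 m) assigned to a 3 m storey at elevation 0.0: consistent.
print(within_storey((0.0, 2.1), 0.0, 3.0))   # True
# The same door assigned to the storey starting at 3.0 m is physically disjoint.
print(within_storey((0.0, 2.1), 3.0, 3.0))   # False
```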
4.3 MVD Conformance tests (data quality dimension: Conformance)
Conformance tests are validation steps beyond the schema that help determine whether an IFC instance file conforms to a specific exchange requirement. Table 3, Category II (rows 18 - 23), lists general test categories typically included in conformance tests; they are often called "business rules" in the IFC literature. It is not the purpose of this paper to discuss in depth all applicable conformance requirements, but there are recurring themes that conformance tests usually cover.

Table 7 in the Appendix lists the general themes of conformance testing. Each dedicated MVD has to provide the list of precise test items, valid (supported) entities, properties, and proper values, and use the functionality of the common theme to perform the validation. The most common MVD is the IFC Coordination View 2.0, whose certification is administered by buildingSMART International to certify IFC2x3 compliance. Many other published MVDs have specific requirements that further constrain the Coordination View or define exceptions to it. It is therefore important that the rules defined for an MVD are logically consistent, not contradictory, and do not lead to recursive conditions.
Conformance tests have been actively developed because they are essential to supporting robust exchange requirements between applications. They have been included in automated tests in the IFC CV2.0 certification and similar certifications.
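A conformance rule of the kinds in categories III.B/III.C can be sketched as a table of required property sets per entity type. The model abstraction below (tuples of id, type, and property sets) is an assumption for illustration; a real checker would resolve IfcRelDefinesByProperties relationships from the P21 file, and the requirement that IfcWall carry Pset_WallCommon.FireRating is an example, not a normative CV2.0 rule.

```python
# Required property sets per entity type; illustrative only, not a normative rule.
REQUIRED = {"IFCWALL": {"Pset_WallCommon": ("IsExternal", "FireRating")}}

def conformance_errors(objects):
    """objects: iterable of (guid, entity_type, {pset_name: {prop: value}})."""
    errors = []
    for guid, etype, psets in objects:
        for pset_name, required_props in REQUIRED.get(etype, {}).items():
            found = psets.get(pset_name, {})
            for prop in required_props:
                if prop not in found:
                    errors.append(f"{guid}: missing {pset_name}.{prop}")
    return errors

model = [("wall-001", "IFCWALL", {"Pset_WallCommon": {"IsExternal": True}})]
print(conformance_errors(model))  # FireRating absent from Pset_WallCommon
```

The same table-driven structure covers mandatory attributes (III.C) and prohibited entities (III.A) by swapping the rule table.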
4.4 Export/Import Completeness tests (data quality dimension: Complete)
This dimension is unique and generally different from the other rule categories, since testing the completeness of export or import requires additional information outside of the IFC file. We define three subcategories: export summary, geometry mass properties and sampling points, and import tests. IFC exchange completeness can only be assured if the information from the source, i.e. the originating BIM tool, is known and can therefore be used as a reference to compare with what is inside the IFC file. We cover the completeness tests that use this external information in a separate section.
Table 3 - Coding scheme for the IFC test items
No. | Coding scheme | Description | Classification criteria

The Main Classification
1 | I. Schema Data Structure Validation | Data quality dimension: Correct |
2 | I.A | Syntactic test | All tests on the syntax of the IFC file according to the schema
3 | I.A.1 | General entity data structure | All tests for correctness of the schema structure
4 | I.A.2 | Schema WHERE rule | All tests for consistency with the schema constraints as defined by WHERE rules
5 | I.A.3 | Valid entity check | All tests for valid entity types, intended to ensure that entities referenced in the P21 file are defined in the target schema. Example: IfcSlabStandardCase is a valid entity in IFC4, but it is not valid in IFC2x3 (test no. I.A.3.1)
6 | I.A.4 | Valid enumerated value check | All tests to check that consistent enumerated values are used
7 | I.B | Geometry data structure test | All tests related to geometry for conformance to the schema definition
8 | I.B.1 | Informal proposition test | All tests for informal propositions as defined in the IFC schema and ISO 10303 part 42. Example: only IfcPolyLoop is allowed for the bounding loops of all faces of an IfcFacetedBrep (test no. I.B.1.4.1)
9 | I.B.2 | Geometry data integrity check | All tests that check the integrity of the geometry data according to the IFC schema definition. Example: face is not planar (test no. I.B.2.1)
10 | I.C | Topology test | Tests related to the topological structure of the data
11 | I.C.1 | Topological constraint correctness | All tests for conformance to consistent topological constraints as defined by the Euler formula and the Mobius condition: all face edges together traverse all B-rep solid faces twice, once in each direction
12 | II. Semantic test | Data quality dimension: Meaningful |
13 | II.A.1 | Topological attribute assignments | All tests related to data associated with a topology. Example: flow-direction information on connected Ports must come in fitting pairs (SINK and SOURCE) (test no. II.A.1.3)
14 | II.B.1 | Topological relationship | All tests that ensure completeness of a topology. Example: a test whether a space is completely bounded by the space bounding objects
15 | II.C.1 | Shape well-formedness | All tests to ensure that every geometry is a correct and non-NULL geometry, i.e. a geometry with a valid body (e.g. has volume)
16 | II.D.1 | Model integrity | All tests to check the existence of entities that make a model useful. Example: every IFC file must have one IfcProject
17 | II.E.1 | Spatial integrity | All tests that relate to spatial relations between objects and the other related entities
18 | III. MVD Conformance tests | Data quality dimension: Conformance |
19 | III.A | Existence/non-existence of specific entity type | All tests for mandatory existence of entities or prohibited entities
20 | III.B | Existence/non-existence of attributes or properties | All tests for property sets or properties that are required or disallowed
21 | III.C | Mandatory attributes tests | All tests for mandatory attributes or properties
22 | III.D | Specific value tests | All tests for specific values assigned to an attribute or property
23 | III.E | Implementer's agreement | All tests for additional agreed restrictions on the use of certain data by the implementers
24 | IV. Export/Import Completeness test | Data quality dimension: Complete |
25 | IV.A | Export summary | All tests to check completeness of the export process
26 | IV.B | Geometry mass properties and sampling points | All tests for geometry correctness compared to the original model using mass properties and/or sampling points
27 | IV.C | Import tests | All tests related to correctness on import

Classification by Source
28 | S-Sch | Source: Schema validation | All tests that are based on the IFC schema
29 | S-Impl | Source: Implementer's agreement | All tests defined by the implementer's agreement
30 | S-MVD | Source: MVD requirements | All tests for MVD conformance
31 | S-Dat | Source: Data consistency | All tests related to data consistency

Classification by Sub-Domain
32 | D-Gen | General | All tests common to the model without particular association to any domain
33 | D-Arch | Architectural sub-domain | All tests specific only to the Architecture domain
34 | D-MEP | MEP sub-domain | All tests specific only to the MEP domain
35 | D-Stru | Structural sub-domain | All tests specific only to the Structure domain
36 | D-Cons | Construction sub-domain | All tests specific only to the Construction domain
37 | D-FM | Facility Management sub-domain | All tests specific only to the Facility Management domain
38 | D-Fab | Fabrication sub-domain | All tests specific only to the Fabrication domain
39 | D-Civil | Civil Engineering sub-domain | All tests specific only to the Civil Engineering domain
40 | D-Geom | Geometry sub-domain | All tests specific only to the Geometry domain; separated from the other rules since geometry tests form a significant portion of the list

Classification by Degree of Difficulty for Implementation
41 | E | Easy | Rules that are relatively easy to implement, i.e. requiring only explicit information in the IFC file; corresponds to the class-1 rules defined in [29]
42 | M | Medium | Rules that require substantial effort to develop, using simple derived data from the basic information in IFC; corresponds to the class-2 rules defined in [29]
43 | H | Hard | Rules that require significant effort to develop, using advanced derived data from the basic information in IFC; corresponds to the class-3 rules defined in [29]
This extended table cannot be assessed as to its completeness; it is rather a classification scheme for tests, intended to build up toward a relatively comprehensive catalog. Over time, such definitions should support several important benefits:
1) improved definition of the types of rules and their coverage;
2) a basis for defining rule checking methods and a taxonomy of rule types.
5 Test Structure
The tests described here, and further elaborated with examples in the Appendix, are embedded in a larger contextual structure with different levels. Some apply to particular object types, for example those dealing with stair well-formedness. Others, such as geometry well-formedness rules, apply to all geometry instances at multiple levels of the model hierarchy: site, building, building element, space, embedded object, furniture, and other objects in the spatial aggregation structure. This logical coverage is a requisite aspect of model checking; however, this paper does not address these contextual model conditions.
6 Tests with Additional External Information
Category I criteria generally provide comprehensive coverage for testing any IFC model. One key limitation of these tests is that they are unable to detect a "good" error: data in the model that is correct and consistent but not the same as the original data. For example, an object may be correct in terms of its local definition, location, etc., but shifted from its original location in the source drawing due to a defect in the translation process. Without a means to validate correct translation, even a few such errors would create uncertainty and significantly affect translation confidence. To be able to test the correctness of the data compared to the source, we need information that the producer of an IFC model provides in addition to the IFC data. This information will facilitate further automated testing of export and, later on, import. Table 8 in the Appendix lists the rules that support this category of tests, as defined in Table 3 Category IV (rows 24-27).
6.1 Export Summary
One simple but important test of the completeness of a translation process is a summary that provides object counts and entity mapping information. At a minimum, an object count should be provided as a log file during the export or import process. This log file should be included inside the .ifczip archive together with the IFC P21 file. Figure 6 shows an example of export log information. On the left (A) is an example of the current log file from the AutoCAD Architecture application, which already outputs object count information. It could be improved to contain more information, including the mapping between the entities in AutoCAD Architecture and IFC (B). Such information is very useful to verify that the translation process completed as expected, even though at present this is only possible with manual inspection. To facilitate automation, the log file should be written as a standardized XML file. Typically, a validation process will compare each object count with the actual content of the IFC file, and will also check the error count for possible issues in the translation process. Figure 7 shows one such schema that can be used to standardize the log information.
[DefaultDomain: ]EXPORT STARTED: 12/19/2013 3:31:40 PM
[DefaultDomain: ] ============= Object Count: =============
- AcDbBlockReference : Total: (3) Fail: (2)
- AecbDbDuct : Total: (39)
- AecbDbDuctFitting : Total: (42)
- AecbDbDuctFlex : Total: (6)
- AecbDbLabelCurve : Total: (23) Fail: (23)
- AecbDbMvPart : Total: (19)
- AecDbDoor : Total: (61)
- AecDbMvBlockRef : Total: (81) Fail: (68)
- AecDbOpening : Total: (12)
- AecDbRailing : Total: (6)
- AecDbSlab : Total: (1)
- AecDbStair : Total: (2)
- AecDbWall : Total: (152)
- AecDbWall** : Total: (2)
- AecDbWindow : Total: (63)
- AecDbWindowAssembly : Total: (11)
- IfcFurnishingElement : Total: (2)
** = Object classified as other IFC object.
[DefaultDomain: ]EXPORT ENDED: 12/19/2013 3:32:20 PM
(A) Current IFC export log output from AutoCAD Architecture
IFC Export Object Count Summary:
===================================================================================
- BlockReference -> IfcFurnishingElement: Success (4), Ignored (0), Error (0)
- BlockReference -> IfcFlowTerminal: Success (12), Ignored (1), Error (0)
- Duct -> IfcFlowSegment: Success (39), Ignored (5), Error (1)
- DuctFitting -> IfcFlowFitting: Success (42), Ignored (0), Error (0)
- FlexDuct -> IfcFlowSegment: Success (6), Ignored (0), Error (0)
- LabelCurve -> Success (0), Ignored (23), Error (0)
- MvPart -> IfcFlowTerminal: Success (15), Ignored (49), Error (0)
- MvPart -> IfcFlowMovingDevice: Success (4), Ignored (0), Error (0)
- Door -> IfcDoor + IfcOpeningElement: Success (61), Ignored (0), Error (0)
- MvBlockRef -> IfcFurnishingElement: Success (4), Ignored (20), Error (0)
- MvBlockRef -> IfcBuildingElementProxy: Success (9), Ignored (0), Error (0)
- Opening -> IfcOpeningElement: Success (12), Ignored (0), Error (0)
- Railing -> IfcRailing: Success (6), Ignored (0), Error (0)
- Slab -> IfcSlab: Success (1), Ignored (0), Error (0)
- Stair -> IfcStair: Success (2), Ignored (0), Error (0)
- Wall -> IfcWallStandardCase: Success (120), Ignored (0), Error (0)
- Wall -> IfcWall: Success (20), Ignored (0), Error (0)
- Wall -> IfcFooting: Success (6), Ignored (0), Error (0)
- Window -> IfcWindow + IfcOpening: Success (63), Ignored (0), Error (0)
- WindowAssembly -> IfcWindow: Success (9), Ignored (0), Error (0)
- WindowAssembly -> IfcDoor: Success (2), Ignored (0), Error (0)
Total: Success (437), Ignored (98), Error (1)
(B) Potential improvement in the export log, that includes mapping information
Figure 6 - Export Summary Object Count
One issue with this approach is that BIM software developers must try to make the information as useful as possible. The log information is valuable for tracking IFC export (and import) issues in day-to-day operations, not just for certification purposes; viewed this way, the information should already be there for certification to use as part of completeness validation. Mapping information can be captured in the same log by separating the counts for different mappings even when they come from the same representation in the originating model (see the first two lines in Figure 6(B)). Explicit logging of mapping information helps ensure a transparent mapping process and helps tremendously in tracking the completeness of the translation.
Figure 7 - An XML schema for the standardized export log information that includes object count, mapping information and status of translation
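Given a standardized XML log, the completeness check reduces to comparing expected counts from the log with actual entity counts in the P21 file. The element and attribute names in the sketch below are assumptions for illustration; they are not the schema of Figure 7.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical standardized export log (element names are illustrative).
LOG_XML = """<ExportLog>
  <Mapping source="Wall" target="IfcWallStandardCase" success="120" ignored="0" error="0"/>
  <Mapping source="Door" target="IfcDoor" success="61" ignored="0" error="0"/>
</ExportLog>"""

def expected_counts(log_xml):
    """Sum successful translations per target IFC entity from the log."""
    counts = {}
    for m in ET.fromstring(log_xml).iter("Mapping"):
        target = m.get("target").upper()
        counts[target] = counts.get(target, 0) + int(m.get("success"))
    return counts

def actual_counts(p21_text):
    """Count instantiated entities in the IFC P21 file."""
    counts = {}
    for name in re.findall(r"#\d+\s*=\s*([A-Z0-9_]+)\s*\(", p21_text):
        counts[name] = counts.get(name, 0) + 1
    return counts

def completeness_errors(log_xml, p21_text):
    """Return {entity: (expected, actual)} for every count mismatch."""
    expected, actual = expected_counts(log_xml), actual_counts(p21_text)
    return {t: (n, actual.get(t, 0)) for t, n in expected.items()
            if actual.get(t, 0) != n}

# One wall lost in translation: the log promises 120, the file holds 119.
p21 = "#1=IFCDOOR($);\n" * 61 + "#2=IFCWALLSTANDARDCASE($);\n" * 119
print(completeness_errors(LOG_XML, p21))
```

Any nonzero result points either to a defective export or to a log that does not reflect the actual translation, both of which warrant investigation.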
6.2 Geometry Mass Properties and Sampling Points
The object count provides only one rough means to test consistency. For geometry-related validation, mass properties of geometry from the native file and face sampling points provide a good way to verify the consistency of the geometry exported to IFC. The exporting application needs to compute mass properties for each shape it exports and to choose sampling points on the surface of the geometry automatically during the translation/export process. Any point that lies on the surface of the geometry may be selected as a sampling point; vertices are obvious candidates for this purpose. This information is then used by the test rule for comparison with the geometry re-created from the IFC file. The combination of mass properties and sampling points represents the original geometry well enough to ascertain correct geometry translation from the source. Table 8 items IV.B.1 and IV.B.2 show the rules that check and make use of the mass properties and sampling point information to test the correctness of geometry translations.
We propose to store the mass properties and sampling points in the IFC model as a new context identifier (="ValidationData") in the IfcRepresentationContext. The actual sampling points are identified using RepresentationIdentifier="SamplingPoints(WCS)" in the IfcShapeRepresentation with Items of RepresentationType="Point". The Items contain a set of IfcCartesianPoints sampled randomly on the surface of the object geometry in the World Coordinate System (WCS). The mass properties should be captured in a special property set, Validation_Pset_Mass_Properties, that stores three pieces of information: volume, surface area, and a tight bounding box. Figure 8 describes the MVD diagram of this mini "exchange" requirement. Because faceting mechanisms may differ, these comparisons will not yield 100% equivalent results; combining the sampling points and mass properties, complemented with a small tolerance, will nevertheless give generally high confidence of geometry consistency.
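The rule-side comparison (items IV.B.1/IV.B.2) can be sketched as a tolerance check over the proposed Validation_Pset_Mass_Properties values. The 1% relative tolerance below is an illustrative choice, not a prescribed value.

```python
PROPERTY_KEYS = ("Volume", "SurfaceArea",
                 "BoundingBox_XDim", "BoundingBox_YDim", "BoundingBox_ZDim")

def mass_properties_match(reference, recreated, rel_tol=0.01):
    """Return the property names whose values differ beyond the relative tolerance.
    Keys follow the proposed Validation_Pset_Mass_Properties."""
    mismatches = []
    for key in PROPERTY_KEYS:
        ref, act = reference[key], recreated[key]
        if abs(ref - act) > rel_tol * max(abs(ref), abs(act), 1e-9):
            mismatches.append(key)
    return mismatches

ref = {"Volume": 12.50, "SurfaceArea": 33.0, "BoundingBox_XDim": 5.0,
       "BoundingBox_YDim": 2.5, "BoundingBox_ZDim": 1.0}
faceted = dict(ref, Volume=12.46)   # small faceting loss, within 1%
print(mass_properties_match(ref, faceted))  # []
```

An empty result means the re-created geometry agrees with the source within tolerance; the sampling points would then be checked against the re-created surface in the same pass.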
Exporting applications that support this test feature may provide an option to enable export of this information for validation purposes, an option that can be turned off in normal use.
[Figure 8 diagram summary: IfcProduct (GlobalId, OwnerHistory, Name, Description, ObjectType, ObjectPlacement, Representation), via IsDefinedBy [INV], carries Validation_Pset_Mass_Properties (Volume, SurfaceArea, BoundingBox_XDim, BoundingBox_YDim, BoundingBox_ZDim); its Representation is an IfcProductDefinitionShape (Name, Description, Representations) whose IfcShapeRepresentation has RepresentationIdentifier = "SamplingPoints(WCS)" and RepresentationType = "Point", in an IfcGeometricRepresentationSubContext with ContextIdentifier = "ValidationData" and ContextType = "Model" (ParentContext, TargetScale, TargetView, UserDefinedTargetView); the Items are a set of IfcCartesianPoint with Coordinates [0] IfcLengthMeasure (X), [1] IfcLengthMeasure (Y), [2] IfcLengthMeasure (Z)]
Figure 8 - MVD for mass properties and sampling points Exchange Requirements
7 Import Tests
Import is a more challenging issue to deal with, as each BIM application has its own proprietary way of representing and managing objects. What we are interested in is making sure that the application consistently imports all supported objects and geometries; by supported we mean that the application specifies it is able to import them correctly. In dealing with import, we are addressing the question: given an IFC instance file, does the import translator read the file and "capture" all the meaningful data intended?
Import tests necessarily require assessment of the native BIM model representation of the imported data, in its own proprietary format. The current IFC2x3 certification relies solely on manual verification for import because of this heavy dependency on the native format, which only the respective vendor knows. The process takes a great deal of expert knowledge and time, and at this time none of the import validations are automated.
To achieve the goal of automating the validation process, including import, we propose two possible approaches: using a re-export method (to IFC) similar to the one outlined by Lipman [3], and using another, simpler format commonly used for visualization. We recognize that the goal is not an identical model in an imported, then re-exported file, but it must be a consistent one. Table 4 describes the minimum requirements each application must support to make automatic validation possible. The geometry requirements may be reduced to just BREP geometry and/or simple extrusions, akin to the proposed Reference View MVD [30].
Table 4 - Minimum exchange requirements for re-exported IFC model for validation
Item | Description | Remarks
IFC Entities | The entity must be preserved. The minimum requirement is for Building Element and Spatial Structure elements to be preserved and other entities to be identifiable as the original type. | e.g. IfcStair must be re-exported as IfcStair
GUID | The GUID must be preserved. | This is a general IFC requirement (but seldom enforced)
Attributes | Object attributes must be preserved. | Could be in the form of a flat list of property-value pairs, grouped by the fixed group name "IFC Element Attributes"
PropertySet | PropertySets must be preserved. | Could be presented the same as the attributes, in simple property-value pairs grouped by Pset name
Geometry representation | Geometry must be re-exported. It may differ from the original, i.e. simplified as BREP. | Similar to the proposed Reference View [30], which requires mostly simplified geometry
The second method makes use of a common visualization format such as X3D, VRML, COLLADA, or OpenCTM [31-34] (it would be good to settle on one format), plus simple metadata to capture the original IFC entity and the properties of the geometry. In both cases, the validation tool performs entity-to-entity, property-to-property, and geometry-to-geometry comparisons (for overall shape, extent, and location). The minimum requirement for the content in this format is identical to the IFC requirements described in Table 4. Since the geometry is often simplified and represented only approximately compared to the original, this test must accept a small tolerance to allow minor differences due to the approximation. The exact tolerance value will need to be determined by empirical tests, chosen to be large enough to absorb approximation effects while avoiding false positive results.
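The GUID-keyed comparison of the original model against the re-exported (or visualization-format) model can be sketched as follows. The model abstraction (type, flat properties, and a bounding-box extent per GUID) is an assumption for illustration, as is the 1% tolerance.

```python
import math

def import_mismatches(original, reexported, rel_tol=0.01):
    """Compare two models keyed by GUID: {guid: (ifc_type, props, bbox_extent)}."""
    issues = []
    for guid, (etype, props, extent) in original.items():
        if guid not in reexported:
            issues.append(f"{guid}: lost on import")
            continue
        r_etype, r_props, r_extent = reexported[guid]
        if r_etype != etype:
            issues.append(f"{guid}: {etype} came back as {r_etype}")
        for key, value in props.items():
            if r_props.get(key) != value:
                issues.append(f"{guid}: property {key} changed")
        # Geometry extent may drift slightly due to BREP simplification.
        if any(not math.isclose(a, b, rel_tol=rel_tol)
               for a, b in zip(extent, r_extent)):
            issues.append(f"{guid}: geometry extent drifted beyond tolerance")
    return issues

original = {"g1": ("IfcStair", {"Name": "Stair-1"}, (3.0, 1.2, 3.4))}
reimported = {"g1": ("IfcStair", {"Name": "Stair-1"}, (3.0, 1.2, 3.39))}
print(import_mismatches(original, reimported))  # extent drift is within 1%
```

This directly exercises the Table 4 requirements: entity preserved as its original type, GUID preserved, attributes and property sets preserved, and geometry consistent within tolerance.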
8 Model Compare
A generic model comparison tool is necessary for most types of validation. It needs to support IFC-to-IFC comparison as well as comparison of IFC to the third-party format. Preserving the GUID is crucial in this process, since the GUID is used as the primary key in the comparison step. The tool must support comparison of both properties and geometry. A model comparison tool such as EVASYS provides good coverage for comparing two IFC P21 files [35]; to be complete, it needs additional support for geometry tests to ensure the consistency of geometry after import. Borrmann et al. have extensively researched geometry approximation using octree indexing, which provides suitable tools for quick geometry comparisons based on octree indexes [36]; when more precise comparison is required, a solid modeler will be needed.
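A quick shape comparison in the spirit of the octree indexing referenced above can be sketched with a coarse uniform grid over surface sample points. This is a stand-in for a real octree, and the cell size and overlap threshold are illustrative choices.

```python
def occupied_cells(points, cell=0.5):
    """Coarse spatial index: the set of grid cells touched by a surface point
    sample of an object (a flat stand-in for a real octree index)."""
    return {tuple(int(c // cell) for c in p) for p in points}

def shapes_similar(points_a, points_b, cell=0.5, min_overlap=0.9):
    """Jaccard-style overlap of occupied cells; cheap pre-filter before any
    precise solid-modeler comparison."""
    a, b = occupied_cells(points_a, cell), occupied_cells(points_b, cell)
    return len(a & b) / max(len(a | b), 1) >= min_overlap

# A flat 2 m x 2 m point sample compared against itself trivially agrees.
slab = [(x * 0.1, y * 0.1, 0.0) for x in range(20) for y in range(20)]
print(shapes_similar(slab, slab))  # True
```

Shapes that pass this coarse filter can then be routed to the exact (and much more expensive) geometry comparison only when needed.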
9 Conclusion
The case for quantifiable IFC validation has been presented and initially detailed. It will be a long and evolving process. The main purpose of this paper is to initiate efforts toward an open structure for identifying and cataloging a public library of validation criteria on which the AECO research community can collaborate for the greater good. The rules in Table 3 serve as seeds that we foresee being enhanced and updated over time by other researchers working in this area. The authors also believe that this information is a sufficient starting point for software vendors to consider developing tools that implement the rules. They can be applied in various use cases: as an internal validation or QA tool by BIM authoring tools, as rule sets in certification efforts that would complement and complete the current self-check tools, and as pre-validation by importing applications to ensure sufficient quality of the IFC files they are about to import. Later, we expect the methods for assessing validation rules to become standards in their own right.
We foresee that in the future most certification efforts will be automated to reduce the need for expensive and time-consuming manual validation. To make the list of standard rules accessible to the entire AECO community, an open central repository of rules available on the Internet is a needed initial step. The repository should be managed in the same way as an open source project, or similarly to the IFC Solutions Factory [37] and the Open IFC Model Repository [38]. Eventually buildingSMART should be the final authority managing the rules centrally.
We recognize that it is an enormous undertaking for any single organization to provide all the capabilities needed to support thorough rule checking. For this reason, open source development should be the model through which the BIM standards community contributes. As a starting point, an open platform with a plug-in architecture is important, supporting plug-ins written both in traditional programming languages such as Java, .NET C#, or C++ and in simpler scripting languages. This effort should ideally be an international collaboration that pulls expertise from various areas, including 3D geometry modeling, the EXPRESS language, rule checking, and domain expertise from the various trades within AECO.
Combining manpower and funding resources will enable the community to achieve significant progress. This is critical to improving the nagging issues the AECO community has been facing: interoperability challenges and predictable, good-quality exchanges.
10 Acknowledgement
We thank the Charles Pankow Foundation for their partial funding for this work.
11 The Originating Sources of the Rules
We list below the originating sources of the rules found in Table 6 to Table 8 in the Appendix. To facilitate convenient cross-referencing, we add these codes to the appropriate rule items in the tables, augmented in the source column inside brackets.
Table 5 - The list of originating sources for the rules

Reference code | Reference description
IFC Docu | IFC schema documentation; we base this on the latest IFC4 edition
CV2.0 | IFC Coordination View 2.0 certification requirements
CV Impl | IFC Coordination View Implementation Agreement
User Data | Based on various issues reported by end users who encountered problems related to data exchange using IFC
QA | Based on various tests and issues included as part of the quality assurance process used in developing IFC export and import capability, mainly from AutoCAD Architecture/MEP and some from Autodesk Revit
GSA | Requirements and issues from the GSA Concept Design 2010 Certification
CodeChecking | Requirements and issues from the IFC2x2 Code Checking View Certification in Singapore, and further developments of the FORNAX code checking application
FMHandOver | Requirements and issues from the Basic FMHandOver MVD
COBie | Requirements and issues from the COBie challenge
Toolkit | Reported errors from the IFC toolkit used in export and import
Other MVDs | Requirements from various MVDs, including new proposed IFC4-based MVDs such as the IFC4 Reference View and the IFC4 Design Transfer View
12 References
1. Laakso, M. and A. Kiviniemi, The IFC standard - a review of history, development, and standardization. Journal of Information Technology in Construction, 2012. 17: p. 28.
2. Amor, R., A Better BIM: Ideas from other Industries, in Proceedings of the 2008 CIB W78 Conference. 2008.
3. Lipman, R., M. Palmer, and S. Palacios, Assessment of conformance and interoperability testing methods used for construction industry product models. Automation in Construction, 2011. 20(4): p. 418-428.
4. Kiviniemi, A., IFC Certification process and data exchange problems, in eWork and eBusiness in Architecture, Engineering and Construction. 2008, Taylor & Francis. p. 517-522.
5. IAI Forum Denmark, IFC Exchange Test between 3D CAD applications - April 2006. 2006. (http://images.autodesk.com/adsk/files/modeling_for_ifc_with_aca__updated_for_aca2011.pdf).
6. BuildingSMART, Coordination View 2.0 MVD. http://www.buildingsmart-tech.org/specifications/ifc-view-definition/coordination-view-v2.0/summary.
7. AEC3, KIT, and iabi, Global Testing Documentation Server.
8. Chipman, T., T. Liebich, and T. Wiese, mvdXML. buildingSMART Model Support Group, 2013. (http://www.buildingsmart-tech.org/downloads/accompanying-documents/formats/mvdxml-documentation/mvdxml-schema-documentation-v1-1-draft).
9. BuildingSMART, Concept Design BIM 2010, in Generic MVD. 2009: online at http://www.blisproject.org/IAI-MVD/MVDs/GSA-005/Overview.pdf.
10. Eastman, C. and P. Sanguinetti, BIM Technologies that Inform Concept Design, in AIA 2009 Fall Conference. 2009: online at http://www.aia.org/aiaucmp/groups/aia/documents/pdf/aiab081419.pdf.
11. Eastman, C., GSA Automation of Early Concept Design. 2010: online at http://bimforum.org/wp-content/uploads/2010/10/GSA-PCD-Review.pdf.
12. William East, E., N. Nisbet, and T. Liebich, Facility Management Handover Model View. Journal of Computing in Civil Engineering, 2012. 27(1): p. 61-67.
13. PCI, Precast BIM Standard Project. http://dcom.arch.gatech.edu/pcibim/documents.aspx [cited 04/04/2014].
14. Eastman, C., et al., Exchange Model and Exchange Object Concepts for Implementation of National BIM Standards. Journal of Computing in Civil Engineering, 2010. 24(1): p. 25-34.
15. Solihin, W., Lessons learned from experience of code-checking implementation in Singapore. 2004. Available online at https://www.researchgate.net/publication/280599027_Lessons_learned_from_experience_of_code-checking_implementation_in_Singapore.
16. Wand, Y. and R.Y. Wang, Anchoring data quality dimensions in ontological foundations. Communications of the ACM, 1996. 39(11): p. 86-95.
17. Strong, D.M., Y.W. Lee, and R.Y. Wang, Data quality in context. Communications of the ACM, 1997. 40(5): p. 103-110.
18. Batini, C., et al., Methodologies for data quality assessment and improvement. ACM Computing Surveys (CSUR), 2009. 41(3): p. 16.
19. Scannapieco, M. and T. Catarci, Data quality under a computer science perspective. Archivi & Computer, 2002. 2: p. 1-15.
20. Ellis, B.N., Basic Concepts of Measurement. 1966.
21. Pap, E., Handbook of Measure Theory: In two volumes. Vol. 2. 2002: Elsevier.
22. Pipino, L.L., Y.W. Lee, and R.Y. Wang, Data quality assessment. Communications of the ACM, 2002. 45(4): p. 211-218.
23. Behm, M., Linking construction fatalities to the design for construction safety concept. Safety Science, 2005. 43(8): p. 589-611.
24. Gambatese, J.A., M. Behm, and J.W. Hinze, Viability of designing for construction worker safety. Journal of Construction Engineering and Management, 2005. 131(9): p. 1029-1036.
25. Haslam, R.A., et al., Contributing factors in construction accidents. Applied Ergonomics, 2005. 36(4): p. 401-415.
26. Solihin, W. and C. Eastman, Classification of rules for automated BIM rule checking development. Automation in Construction, 2015. 53: p. 69-82.
27. BuildingSMART, IFC4 Online Documentation. Available at http://www.buildingsmart-tech.org/ifc/IFC4/final/html/index.htm, 2013.
28. ISO, Part 42: Integrated generic resource: Geometric and topological representation, in Industrial automation systems and integration - Product data representation and exchange. 2003.
29. Solihin, W. and C. Eastman, Classification of rules for automated BIM rule checking development, submitted for publication to Automation in Construction. 2014.
30. BuildingSMART, IFC4 Reference View. 2014. https://github.com/BuildingSMART/IFC4CV/blob/master/IFC4_Reference_View.md.
31. Web3D Consortium (www.web3d.org), X3D XML Encoding. http://www.web3d.org/content/x3dencodings-xml-v33. 2013, ISO.
32. Web3D Consortium (www.web3d.org), X3D ClassicVRML Encoding. http://www.web3d.org/content/x3dclassicvrml-encoding-v33. 2013, ISO.
33. The Khronos Group, COLLADA 1.5. https://www.khronos.org/files/collada_spec_1_5.pdf. 2008.
34. OpenCTM, OpenCTM format specification. http://openctm.sourceforge.net/media/FormatSpecification.pdf. 2010.
35. Ma, H., et al., Testing semantic interoperability, in Joint International Conference on Computing and Decision Making in Civil and Building Engineering. 2006.
36. Paul, N. and A. Borrmann, Geometrical and topological approaches in building information modelling. Journal of Information Technology in Construction, 2009. 14: p. 705-723.
37. Digital Alchemy, IFC Solutions Factory.
38. The University of Auckland, Open IFC Model Repository.
13 Appendix

Table 6 to Table 8 in this appendix expand the categories and sub-categories listed in Table 3 with the relevant rules. The rules are drawn from various sources that the authors have worked with over many years, including:
- IFC toolkit schema validation
- Certification testing: both automated (in Coordination View 2.0) and items that are still evaluated manually
- Internal QA tests for the software development of IFC export/import
- Various issues encountered dealing with user models in real-world exchange scenarios

They are provided as a starting point toward a more complete list that will evolve over time with contributions from the global AECO community. Where appropriate, further illustrations, a sample section of an IFC file, and one possible error condition are given in the tables. Most examples come from the real issues encountered in the various sources mentioned above. In the event we did not have the actual model, we simulated the case by re-creating a similar model with the same issues. In some cases, we made minor modifications to the data to clarify or highlight certain aspects of the model without affecting the actual issue. The Category column lists items according to the coding scheme by Category as defined in Table 1 for all non-leaf items; the leaf items are the individual rules under each category, numbered sequentially. Additional classifications of the rules are given in the last three columns of the tables. Using a spreadsheet, one can easily filter and sort the rules by these classifications according to the coding scheme; they are provided for convenience in managing a rule set that is expected to grow over time. The rules listed in the tables do not define the actual implementation; they define what is to be tested and the expected outcome for correct results.
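The coding-scheme columns (Source, Subdomain, Difficulty) lend themselves to simple mechanical filtering. A minimal sketch in Python, using a hypothetical in-memory rule list whose field names mirror the table columns (the records shown are illustrative, not the full rule set):

```python
# Illustrative excerpt of the rule table; in practice the rules would be
# loaded from the spreadsheet described above.
rules = [
    {"Category": "I.A.1.1",   "TestItem": "Syntactic correctness",  "Source": "S-Sch", "Subdomain": "D-Gen",  "Difficulty": "E"},
    {"Category": "I.A.4.2",   "TestItem": "Duplicate UniqueId",     "Source": "S-Dat", "Subdomain": "D-Gen",  "Difficulty": "M"},
    {"Category": "I.B.1.4.3", "TestItem": "Faces shall be planar",  "Source": "S-Sch", "Subdomain": "D-Geom", "Difficulty": "M"},
]

def filter_rules(rules, **criteria):
    """Return the rules matching every given column=value criterion."""
    return [r for r in rules if all(r.get(k) == v for k, v in criteria.items())]

geometry_rules = filter_rules(rules, Subdomain="D-Geom")
easy_rules = filter_rules(rules, Difficulty="E")
```

The same filtering can of course be done directly in a spreadsheet; the point is only that the coding scheme makes the rule set queryable.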
Table 6 - Detailed descriptions of test items for Correctness Tests (Category I)
Columns: Category | Test Item | Description | Illustration | Example/Additional information | Coding Scheme: Source (Originating Source), Subdomain, Difficulty

I. Schema Data Structure Validation (data quality dimension: Correct)

I.A. Syntactic test

I.A.1. General entity data structure

I.A.1.1 Syntactic correctness of an IFC file against the IFC schema
  Description: A general validation that the IFC file is correct according to the specified schema, e.g. IFC2x3 or IFC4.
  Example (an IFC4 file imported against the IFC2x3 schema):
    #173= IFCPIPESEGMENT
    --------------------------------^
    *** Error: Cannot retrieve attributes for instance. Possibly undefined instance type *** on line #173.
    #220= IFCCLASSIFICATION('','',$,'Default Classification',$
    ----------------------------------------------------------------------^
    *** Error: No parameter expected *** on line #220.
    ,$
    ---------------------------------------------------------------------------^
    *** Error: Wrong number of attributes; expected: 4 found: 7 *** on line #220.
  Source: S-Sch (IFC docu, Toolkit, QA) | Subdomain: D-Gen | Difficulty: E

I.A.1.2 Mandatory attribute missing
  Example:
    #2792= IFCSPACE('0XJ6cADHr488Rwv0mebxqO',$,'1',$,$,#115,#2788,'Room',.ELEMENT.,.INTERNAL.,$);
  The correct data:
    #41= IFCOWNERHISTORY(#38,#5,$,.NOCHANGE.,$,$,$,1383949990);
    #2792= IFCSPACE('0XJ6cADHr488Rwv0mebxqO',#41,'1',$,$,#115,#2788,'Room',.ELEMENT.,.INTERNAL.,$);
  Source: S-Sch (IFC docu, Toolkit, QA) | Subdomain: D-Gen | Difficulty: E
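The mandatory-attribute test (I.A.1.2) can be mechanized by parsing each STEP instance line and checking the positions of unset (`$`) attributes. A minimal sketch, assuming a hypothetical MANDATORY map excerpted by hand from the schema; a real checker would derive it from the EXPRESS schema, resolve inherited attributes, and handle quoted strings containing commas or parentheses, which this naive splitter does not:

```python
import re

# Hypothetical excerpt of the schema: entity name -> zero-based positions of
# attributes that must not be unset ($). GlobalId and OwnerHistory are used
# here for illustration (OwnerHistory is mandatory in IFC2x3).
MANDATORY = {"IFCSPACE": [0, 1]}

def split_attrs(line):
    """Extract the entity name and top-level attributes from '#n= NAME(...);'."""
    m = re.match(r"#\d+=\s*(\w+)\((.*)\);?\s*$", line.strip())
    if not m:
        raise ValueError("not a STEP instance line")
    name, body = m.group(1), m.group(2)
    attrs, depth, cur = [], 0, ""
    for ch in body:
        if ch == "," and depth == 0:   # attribute separator at the top level
            attrs.append(cur.strip())
            cur = ""
            continue
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        cur += ch
    attrs.append(cur.strip())
    return name, attrs

def missing_mandatory(line):
    """Return positions of mandatory attributes that are unset ($)."""
    name, attrs = split_attrs(line)
    return [i for i in MANDATORY.get(name, []) if i < len(attrs) and attrs[i] == "$"]

# The IfcSpace example above has OwnerHistory (attribute index 1) unset:
bad = ("#2792= IFCSPACE('0XJ6cADHr488Rwv0mebxqO',$,'1',$,$,#115,#2788,"
       "'Room',.ELEMENT.,.INTERNAL.,$);")
```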
I.A.2 Schema WHERE rules

I.A.2.1 Check WHERE rules
  Example 1:
    #71= IFCDIRECTION((2.,6.12303176911189E-17,1.));
    #73= IFCGEOMETRICREPRESENTATIONCONTEXT($,'Model',3,1.00000000000000E-5,#70,#71);
    IFCGEOMETRICREPRESENTATIONCONTEXT
      ERROR: Violating WHERE rule: IFCGEOMETRICREPRESENTATIONCONTEXT.WR11
      WR11: (IFC4) The TrueNorth direction, if provided, shall be a two-dimensional direction.
  Example 2:
    #368= IFCFURNITURE('3p$JANLLv4IvM7LfFowKrW',#42,'M_Table-Coffee:0610 x 0610 x 0610mm:140383',$,'0610 x 0610 x 0610mm',#367,#361,'140383',$);
    #432= IFCRELDEFINESBYTYPE('2saRsG$Ff4WOIVt7oD_uHg',#42,$,$,(#342,#368),#278);
    #278= IFCTRANSPORTELEMENTTYPE('1pcVOlFrT1_9n8_LVDs6t9',#42,'0610 x 0610 x 0610mm',$,$,$,(#274),'139746',$,.MOVINGWALKWAY.);
    #266= IFCSHAPEREPRESENTATION(#78,'Body','Brep',(#152,#175,#198,#206,#232));
    #274= IFCREPRESENTATIONMAP(#273,#266);
    IFCFURNITURE
      ERROR: Violating WHERE rule: IFCFURNITURE.CORRECTTYPEASSIGNED
      CorrectTypeAssigned: (SIZEOF(IsTypedBy) = 0) OR ('IFCSHAREDFACILITIESELEMENTS.IFCFURNITURETYPE' IN TYPEOF(SELF\IfcObject.IsTypedBy[1].RelatingType));
      ERROR: Violating WHERE rule: IFCSHAPEREPRESENTATION.WR24
      WR24: Checks the proper use of Items according to the RepresentationType. IfcShapeRepresentationTypes(SELF\IfcRepresentation.RepresentationType, SELF\IfcRepresentation.Items);
  Source: S-Sch (IFC docu, Toolkit, QA) | Subdomain: D-Gen | Difficulty: M

I.A.3 Valid entity check

I.A.3.1 Invalid entity value
  Example:
    #5306= IFCROOFSTANDARDCASE('0XJ6cADHr488Rwv0mebxpW',#41,'Basic Roof:Generic - 12":184922',$,'Basic Roof:Generic - 12"',#5302,$,'184922',.NOTDEFINED.);
    --------------------------------------^
    *** Error: Cannot create Instance *** on line #5306.
  Source: S-Sch (IFC docu, Toolkit, QA) | Subdomain: D-Gen | Difficulty: E

I.A.4 Valid enumerated value check

I.A.4.1 Invalid UniqueId value
  Example:
    #5306= IFCROOF('4XJ6cADHr488Rwv0mebxpW',#41,'Basic Roof:Generic - 12":184922',$,'Basic Roof:Generic - 12"',#5302,$,'184922',.NOTDEFINED.);
  Source: S-Dat (CV2.0, QA) | Subdomain: D-Gen | Difficulty: E

I.A.4.2 Duplicate UniqueId (non-unique)
  Example:
    #5306= IFCROOF('0XJ6cADHr488Rwv0mebxpW',#41,'Basic Roof:Generic - 12":184922',$,'Basic Roof:Generic - 12"',#5302,$,'184922',.NOTDEFINED.);
    #5346= IFCSLAB('0XJ6cADHr488Rwv0mebxpW',#41,'Basic Roof:Generic - 12":184922',$,'Basic Roof:Generic - 12"',#5344,#5341,'184922',.ROOF.);
  Source: S-Dat (CV2.0, GSA, QA) | Subdomain: D-Gen | Difficulty: M

I.A.4.3 Invalid Enum value
  Valid Enum values:
    TYPE IfcInternalOrExternalEnum = ENUMERATION OF (INTERNAL, EXTERNAL, NOTDEFINED); END_TYPE;
  Example:
    #2792= IFCSPACE('0XJ6cADHr488Rwv0mebxqO',#41,'1',$,$,#115,#2788,'Room',.ELEMENT.,.USERDEFINED.,$)
  Source: S-Dat (IFC docu, Toolkit, QA) | Subdomain: D-Gen | Difficulty: E

I.A.4.4 Inconsistent value type (a schema inconsistency)
  Example:
    #3447= IFCPROPERTYSINGLEVALUE('LoadBearing',$,'TRUE',$);
    *** Error: Inconsistency: Typed value is expected. *** on line #3447
  The correct data type:
    #3447= IFCPROPERTYSINGLEVALUE('LoadBearing',$,IFCBOOLEAN(.T.),$)
  Source: S-Dat (IFC docu, Toolkit, QA) | Subdomain: D-Gen | Difficulty: E
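Both UniqueId tests (I.A.4.1 and I.A.4.2) are mechanically checkable. An IFC GlobalId packs a 128-bit GUID into 22 characters of a 64-character alphabet; the first character carries only 2 of the 128 bits, so it must be '0' through '3' (the invalid roof example above begins with '4'). A sketch of both checks:

```python
# The 64-character alphabet of the compressed IFC GUID encoding.
IFC_B64 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_$"

def valid_globalid(gid):
    """Rule I.A.4.1: well-formed 22-character compressed GUID."""
    return (len(gid) == 22
            and all(c in IFC_B64 for c in gid)
            and gid[0] in "0123")

def duplicate_globalids(gids):
    """Rule I.A.4.2: GlobalIds that occur more than once in the file."""
    seen, dups = set(), set()
    for g in gids:
        if g in seen:
            dups.add(g)
        seen.add(g)
    return dups

# The invalid roof example starts with '4', outside the allowed first-char range:
assert not valid_globalid("4XJ6cADHr488Rwv0mebxpW")
assert valid_globalid("0XJ6cADHr488Rwv0mebxpW")
```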
I.B Geometry data structure test

I.B.1 Informal proposition test (Source: [27, 28])

I.B.1.1 IfcAdvancedBrep (IFC4)

I.B.1.1.1 IfcAdvancedBrep Face
  Description: Each face must be a surface defined by an elementary surface, a swept surface, or a b-spline surface.
  Source: S-Sch (IFC docu, QA, Other MVDs) | Subdomain: D-Geom | Difficulty: M

I.B.1.1.2 IfcAdvancedBrep Edge type
  Description: Edges defining the boundaries of the face shall reference an edge curve that is of type conic, line, polyline, or b-spline curve. All edges lie in the surface.
  Source: S-Sch (IFC docu, QA, Other MVDs) | Subdomain: D-Geom | Difficulty: M

I.B.1.1.3 IfcAdvancedBrep Edge Trimmed points
  Description: All the edges shall be trimmed by vertex points. All vertices lie on edges.
  Source: S-Sch (IFC docu, QA, Other MVDs) | Subdomain: D-Geom | Difficulty: M

I.B.1.1.4 IfcAdvancedBrep facebound loop type
  Description: The face bound loop shall not be of the oriented subtype.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.2 IfcAdvancedBrepWithVoids (IFC4)

I.B.1.2.1 IfcAdvancedBrepWithVoids – shells shall be disjoint
  Description: Each void shell shall be disjoint from the outer shell and from every other void shell.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.2.2 IfcAdvancedBrepWithVoids – void shell shall be enclosed only by the outer shell
  Description: Each void shell shall be enclosed within the outer shell but not within any other void shell.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.2.3 IfcAdvancedBrepWithVoids – no shell overuse
  Description: Each shell in the IfcManifoldSolidBrep shall be referenced only once.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.2.4 IfcAdvancedBrepWithVoids – types of faces
  Description: All faces of the outer and void shells shall be of type IfcAdvancedFace.
  Source: S-Sch (IFC docu, Other MVDs) | Subdomain: D-Geom | Difficulty: M

I.B.1.3 IfcFaceBasedSurfaceModel

I.B.1.3.1 IfcFaceBasedSurfaceModel – faces shall not overlap
  Description: The connected face sets shall not overlap or intersect except at common faces, edges, or vertices.
  Additional information: Covered by tests in I.C.1.1 to I.C.1.3.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.3.2 IfcFaceBasedSurfaceModel – faces shall be planar
  Description: The faces have dimensionality 2 (planar).
  Additional information: Same check as I.B.2.1 (Face is not planar).
  Source: S-Sch (IFC docu, QA, User Data) | Subdomain: D-Geom | Difficulty: M

I.B.1.4 IfcFacetedBrep

I.B.1.4.1 IfcFacetedBrep – only IfcPolyLoop is allowed
  Description: All the bounding loops of all faces shall be of type IfcPolyLoop.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.4.2 IfcFacetedBrep – vertex references
  Description: Each Cartesian point (vertex) shall be referenced by at least three polyloops.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.4.3 IfcFacetedBrep – faces shall be planar
  Description: Faces that are formed by a polyloop shall be coplanar.
  Additional information: Same check as I.B.2.1 (Face is not planar).
  Source: S-Sch (IFC docu, QA, User Data) | Subdomain: D-Geom | Difficulty: M

I.B.1.5 IfcFacetedBrepWithVoids
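The various "faces shall be planar" tests (e.g. I.B.1.3.2 and I.B.1.4.3) can be implemented by fitting a plane to each polyloop and measuring vertex deviation. A sketch using Newell's method for the loop normal; the tolerance value is an assumption and should in practice follow the model's precision factor:

```python
def is_planar(loop, tol=1e-6):
    """Check that all vertices of a polyloop lie on one plane (within tol)."""
    n = len(loop)
    nx = ny = nz = 0.0
    cx = cy = cz = 0.0
    for i in range(n):
        (x1, y1, z1), (x2, y2, z2) = loop[i], loop[(i + 1) % n]
        # Newell's method: accumulate the best-fit plane normal.
        nx += (y1 - y2) * (z1 + z2)
        ny += (z1 - z2) * (x1 + x2)
        nz += (x1 - x2) * (y1 + y2)
        cx += x1; cy += y1; cz += z1
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    if length == 0.0:
        return False  # degenerate loop
    nx, ny, nz = nx / length, ny / length, nz / length
    cx, cy, cz = cx / n, cy / n, cz / n  # centroid lies on the fitted plane
    # Distance of each vertex from the plane through the centroid.
    return all(abs(nx * (x - cx) + ny * (y - cy) + nz * (z - cz)) <= tol
               for (x, y, z) in loop)

flat = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]      # planar quad
warped = [(0, 0, 0), (1, 0, 0), (1, 1, 0.2), (0, 1, 0)]  # one lifted vertex
```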
I.B.1.5.1 IfcFacetedBrepWithVoids – shells shall be disjoint
  Description: Each void shell shall be disjoint from the outer shell and from every other void shell.
  Additional information: Same test as I.B.1.2.1.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.5.2 IfcFacetedBrepWithVoids – void shell shall be enclosed only by the outer shell
  Description: Each void shell shall be enclosed within the outer shell but not within any other void shell.
  Additional information: Same test as I.B.1.2.2.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.5.3 IfcFacetedBrepWithVoids – no shell overuse
  Description: Each shell in the IfcManifoldSolidBrep shall be referenced only once.
  Additional information: Same test as I.B.1.2.3.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.5.4 IfcFacetedBrepWithVoids – faces shall only be defined by IfcPolyLoop
  Description: All the bounding loops of all faces shall be of type IfcPolyLoop.
  Additional information: Same test as I.B.1.4.1.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.6 IfcFixedReferenceSweptAreaSolid (IFC4)

I.B.1.6.1 IfcFixedReferenceSweptAreaSolid – SweptArea shall be on the X-Y plane
  Description: The SweptArea shall lie on the X-Y plane (Z=0).
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.6.2 IfcFixedReferenceSweptAreaSolid – direction of FixedReference
  Description: The FixedReference shall not be parallel to a tangent vector to the Directrix at any point along the curve.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.6.3 IfcFixedReferenceSweptAreaSolid – Directrix shall be tangent continuous
  Description: The Directrix curve shall be tangent continuous.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.7 IfcHalfSpaceSolid

I.B.1.7.1 IfcHalfSpaceSolid – integrity of HalfSpace
  Description: The base surface shall divide the domain into exactly two subsets. The space shall be unbounded, except for the subtype IfcBoxedHalfSpace, where the space is an enclosure.
  Source: S-Sch (IFC docu, QA, User Data) | Subdomain: D-Geom | Difficulty: M

I.B.1.7.2 IfcHalfSpaceSolid – unbounded surface
  Description: The base surface shall be unbounded (a subtype of IfcElementarySurface).
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.8 IfcManifoldSolidBrep (applicable to all its subtypes)

I.B.1.8.1 IfcManifoldSolidBrep – shall be of dimensionality 3
  Description: The dimensionality of a manifold solid Brep shall be 3.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.8.2 IfcManifoldSolidBrep – solid shall be finite
  Description: The extent of the manifold solid Brep shall be finite and non-zero.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.8.3 IfcManifoldSolidBrep – all elements shall have geometry
  Description: All elements of the manifold solid Brep shall have defined associated geometry.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M
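The "finite and non-zero extent" test (I.B.1.8.2) can be approximated on a triangulated closed shell with the divergence theorem: summing the signed tetrahedron volumes spanned by the origin and each triangle gives the enclosed volume, which should be strictly positive when the shell is closed and its faces are oriented consistently outward. A sketch, with a hypothetical unit tetrahedron as the test shell:

```python
def signed_volume(triangles):
    """Enclosed volume of a closed, outward-oriented triangle mesh."""
    vol = 0.0
    for (ax, ay, az), (bx, by, bz), (cx, cy, cz) in triangles:
        # det([a; b; c]) / 6 = signed volume of tetrahedron (origin, a, b, c)
        vol += (ax * (by * cz - bz * cy)
                - ay * (bx * cz - bz * cx)
                + az * (bx * cy - by * cx)) / 6.0
    return vol

# A unit right tetrahedron with outward-oriented faces (illustrative shell):
O, A, B, C = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tetra = [(O, B, A), (O, A, C), (O, C, B), (A, B, C)]
# signed_volume(tetra) is 1/6; a result near zero or negative would flag a
# degenerate or inside-out shell.
```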
I.B.1.8.4 IfcManifoldSolidBrep – shell normal
  Description: The shell normals shall agree with the Brep normal and point away from the solid represented by the Brep.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.8.5 IfcManifoldSolidBrep – no overuse of faces
  Description: Each face shall be referenced only once.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.8.6 IfcManifoldSolidBrep – Euler equation
  Description: Shall satisfy the Euler equation.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.9 IfcPolygonalBoundedHalfSpace

I.B.1.9.1 IfcPolygonalBoundedHalfSpace – polygon shall be closed
  Description: The IfcPolyline or the IfcCompositeCurve providing the PolygonalBoundary shall be closed.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.9.2 IfcPolygonalBoundedHalfSpace – segment types
  Description: The segments of the IfcCompositeCurve shall only be of type IfcPolyline, or IfcTrimmedCurve whose BasisCurve is an IfcLine or IfcCircle.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.9.3 IfcPolygonalBoundedHalfSpace – BaseSurface shall be IfcPlane
  Description: The BaseSurface defined at the supertype IfcHalfSpaceSolid shall be of type IfcPlane.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.9.4 IfcPolygonalBoundedHalfSpace – normal of the Plane
  Description: The normal of the BaseSurface plane shall not be perpendicular to the z-axis of the position coordinate system.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.10 IfcRevolvedAreaSolid

I.B.1.10.1 IfcRevolvedAreaSolid – AxisLine position
  Description: The AxisLine shall lie on the plane of the SweptArea.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.10.2 IfcRevolvedAreaSolid – no intersection with the AxisLine
  Description: The AxisLine shall not intersect the interior of the SweptArea.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.10.3 IfcRevolvedAreaSolid – valid angle
  Description: The Angle shall be between 0° and 360°, or 0 and 2π.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.11 IfcRevolvedAreaSolidTapered (IFC4)

I.B.1.11.1 IfcRevolvedAreaSolidTapered – no mirror
  Description: Mirroring within IfcDerivedProfileDef.Operator shall not be used.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.12 IfcSectionSpine

I.B.1.12.1 IfcSectionSpine – no intersection of sections
  Description: None of the cross sections shall intersect.
  Additional information: Same test as I.C.1.5.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.12.2 IfcSectionSpine – sections shall not be on the same plane
  Description: None of the cross sections shall lie on the same plane.
  Additional information: Same test as I.C.1.3.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M
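The Euler-equation test (I.B.1.8.6) reduces to counting unique vertices, edges, and faces: for a single closed shell that is topologically a sphere, V - E + F must equal 2 (the general Euler-Poincaré form, V - E + F = 2(S - G), also accounts for the number of shells S and the genus G). A sketch on faces given as vertex loops:

```python
def euler_characteristic(faces):
    """Compute V - E + F for a shell whose faces are vertex loops."""
    vertices, edges = set(), set()
    for face in faces:
        n = len(face)
        for i in range(n):
            a, b = face[i], face[(i + 1) % n]
            vertices.add(a)
            edges.add(frozenset((a, b)))  # undirected edge
    return len(vertices) - len(edges) + len(faces)

# A tetrahedron: V=4, E=6, F=4, so V - E + F = 2.
O, A, B, C = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tetra = [(O, B, A), (O, A, C), (O, C, B), (A, B, C)]
```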
I.B.1.12.3 IfcSectionSpine – location of the section
  Description: The local origin of each cross-section position shall lie at the beginning or the end of the composite curve segment SpineCurve.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.13 IfcShellBasedSurfaceModel

I.B.1.13.1 IfcShellBasedSurfaceModel – face shall be planar
  Description: The dimensionality of the shell-based surface model shall be 2.
  Additional information: Same check as I.B.2.1 (Face is not planar).
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.13.2 IfcShellBasedSurfaceModel – consistent shell
  Description: The shells shall not overlap or intersect except at common faces, edges, or vertices.
  Additional information: Covered by tests in I.C.1.1 to I.C.1.3.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.14 IfcSurfaceCurveSweptAreaSolid

I.B.1.14.1 IfcSurfaceCurveSweptAreaSolid – SweptArea shall be on the X-Y plane
  Description: The SweptArea shall lie on the implicit X-Y plane (Z=0).
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.14.2 IfcSurfaceCurveSweptAreaSolid – Directrix on the ReferenceSurface
  Description: The Directrix shall lie on the ReferenceSurface.
  Source: S-Sch (IFC docu, QA) | Subdomain: D-Geom | Difficulty: M

I.B.1.15 IfcSweptDiskSolid

I.B.1.15.1 IfcSweptDiskSolid – check angle at transition of curve segments
  Description: Check for an extremely sharp transition between segments when the transition is not continuous.
  Additional information: An implementer agreement may override the acceptable limit for tangent discontinuity.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.15.2 IfcSweptDiskSolid – Radius check
  Description: The segment of the Directrix shall be long enough to apply the Radius (arc segment radius > disk Radius).
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.15.3 IfcSweptDiskSolid – Directrix shall not be intersecting
  Description: The Directrix shall not be self-intersecting.
  Source: S-Sch (IFC docu) | Subdomain: D-Geom | Difficulty: M

I.B.1.16 IfcSweptDiskSolidPolygonal (IFC4)

I.B.1.16.1 IfcSweptDiskSolidPolygonal – FilletRadius test
  Description: The FilletRadius, if provided, has to be:
    #187= IFCCARTESIANPOINT((533.75,25.,0.));
    > #195= IFCCARTESIANPOINT((1601.25,25.,0.));
    > #311= IFCCARTESIANPOINT((2135.,400.,0.))
  Source: S-Dat (QA, User Data) | Subdomain: D-Geom | Difficulty: M