BUILDING RESEARCH & INFORMATION (MARCH–APRIL 2005) 33(2), 159–172

Quantification methods of technical building performance

Godfried Augenbroe¹ and Cheol-Soo Park²

¹College of Architecture, Georgia Institute of Technology, Atlanta, GA 30332-0155, US. E-mail: [email protected]
²Department of Architectural Engineering, SungKyunKwan University, Suwon, Kyonggi-do, Korea

A building performance assessment toolkit was developed for use by large corporate owners and building portfolio managers in the US. A variety of technical performance aspects are addressed, such as energy, lighting, thermal comfort, maintenance and indoor air quality. Every assessment is based on a normative and objective Performance Indicator (PI). For easy data capture and calculation of PIs, the toolkit was implemented in a web-hosted form, enabling facility managers and staff to collect the data during a walk-through enabled by PDA-based data entry. The current set of performance indicators is discussed and the results of the first benchmarks, most notably the energy benchmarks, are reported.

Keywords: building performance, energy, lighting, thermal comfort, office buildings, United States

Expressions of building performance

Building performance plays a major role in the expectations expressed by owners and occupants and their fulfilment by designers and building operators. Disparities between expectations and fulfilment are rampant throughout the building delivery process. An improved match between the two is considered an important target for the building industry: to become more client driven, to provide better value overall and to increase customer satisfaction (Koskela, 2000).

The need to move the industry in this direction has fostered the introduction of performance-based building methods (Foliente et al., 1998). These methods focus on the generation of performance-based statements of requirements, and on the management of a transparent process that guarantees their fulfilment. This essentially requires the support of the dialogue between designers, engineers and building managers. Two dialogues are of particular importance:

- architectural/engineering procurement: deals with the way services are procured by the design team to engineer building systems that meet functional needs and client expectations
- tenant/facility manager: deals with the proper maintenance and management of the facility in a way that expectations of the occupant, owner or portfolio manager are met, and maximum value from the facility is provided and maintained for all stakeholders

It needs little elaboration that the current methods to maintain and support these dialogues are hampered by a range of deficiencies that need to be overcome. Among them are the 'asymmetric ignorance' between the communicating partners, the lack of transparent requester-provider roles in the process (Beard et al., 2001), the lack of objectively quantifiable expressions of requirements, and the lack of proper assessment tools to ascertain that expectations have been fulfilled by a proposed design (in dialogue 1) or by a proposed tenant/building allocation (in dialogue 2). In addition to the failure of the community at large to add quantified elements to these dialogues, there is also an apparent cultural resistance towards quantification of design performance, mainly from the architectural design side, where many seem to believe that architectural performance cannot be measured. The habitual conjecture in this discussion is that many aspects of performance can only be interpreted based on qualitative judgements. It is also argued that these judgements have to rely on unpredictable manifestations of the design in its future and changing context of use. There is no disagreement, though, that such judgements will always be biased by the 'value system' of the person who measures, and may at best lead to some measure of quality, i.e. 'a set of characteristics that are perceived to contribute to value'. In current practice this is no longer good enough, and it can be argued that many performance aspects of buildings can and should be objectively measurable. The performance characteristics that are most amenable to an objective statement are those that relate to functions that the building or one of its (sub)systems is designed to perform. Instead of relying on subjective 'quality', a more objective 'utility' should be introduced. The latter represents some client-relevant aggregation of objectively measurable performance characteristics. The aggregation is performed over systems and functions.

Traditionally, the dialogues mentioned above have been cast in prescriptive terms, i.e. by prescribing physical aspects of the solution rather than making statements about the expected performance of the solution. Building codes and regulations have long contributed to this by basing their approach on prescriptive specification methods. This is no longer the case, as many countries are moving parts of their regulations and standards to the performance domain (Foliente, 2000). Different methods have come into existence to support the dialogue between the demand and supply sides. An interesting method in use today is documented in the set of ASTM (American National) standards for Whole Building Functionality and Serviceability (ASTM, 2000). The standard lays out a methodology known as 'Serviceability Tools and Methods' (ST&M), which can be used through various stages of the project-delivery process. ST&M provides a method to measure how well a design proposal, or an existing facility, meets the requirements of the stakeholders. This is done for about 100 different performance topics. The current ASTM standards provide a broad-brush, macro-level method, appropriate for strategic, overall decision-making at the corporate client level. They deal both with demand (occupant requirements) and supply (serviceability of the building systems) (Szigeti and Davis, 2001). The expression of demand and supply is captured per topic in scales, each ranging from the lowest (0) to the highest (9) score. The scores reflect a rating of the (design of a) facility, from both the demand and the supply perspective. What is rated on the supply side is the fitness of the facility to meet a given need, hence the term 'serviceability'.

Whereas the ST&M method provides a coarse method geared towards corporate use (many of the topics relate to the serviceability of organizational needs) and is based on scoring by trained users, the work reported in this paper focuses on the introduction of a more finely grained set of performance quantifiers that rely solely on metrics derived from first-order physical principles, thereby ruling out the interpretation bias of experts. By doing so, it is argued, an expanding set of performance metrics is added as quantified elements to the expectation-fulfilment dialogue. It must be realized that quantifications are only useful when very precisely defined. Otherwise, they lead to misinterpretation and do not support a rational dialogue, e.g. between the portfolio manager and potential building tenants (dialogue 2).

Derivation of performance measures

This section introduces the concept of a virtual experiment as a formal quantification method of a performance indicator (PI). Figure 1 shows the basic notion of a performance 'analysis function' as a mapping of experimental input variables, environmental and control variables, and system properties (p) to a PI(p) through a specified aggregation procedure.

Figure 1 Virtual experiment and aggregation that defines the Performance Indicator (PI)

Figure 1 can be explained by looking at the calculation of a PI for thermal comfort. It can be stated that thermal comfort performance is delivered by the 'comfort control system', composed of the heating, cooling, control and enclosure systems. The calculation of the PI is based on the following experiment: a person is placed in a certain location in a given space of the building, which is subjected to the local climate. The experiment itself is normally conducted virtually by performing a dynamic computer simulation. The experiment control variables are thermostat control, ventilation actions (opening of windows) and observer properties such as activity level and clothing.

Note that there is no unique way to perform the aggregation over the output data (observable states) of the experiment. In fact, many types of PIs for thermal comfort can be introduced, e.g. based on (1) the number of hours per year that the room air temperature exceeds a certain comfort threshold or (2) the yearly predicted mean vote occurrence distribution (Fanger, 1970). Both PIs set up the same virtual experiment but use different metrics (a different aggregation method). The multiplicity of PIs is argued as being necessary to support rational dialogues, because the context and purpose of the dialogue vary constantly.

Going back to the different types of experiments that may be deployed, some observations can be made. First, the mapping from (p) to the behaviour of the system is the key part of the PI quantification. It should be clear that the theoretical foundation of this mapping determines the reliability of the PI. In the case of a real experiment, the experimental set-up must be such that all disturbances are kept to a minimum and the monitoring noise does not significantly influence the 'measured' behaviour (the output state information). In a virtual experiment, one must guarantee that the calculation tool's representation is adequate to predict accurately the behaviour of the system. In rating methods, biases must be avoided by clearly stated procedures, i.e. by unambiguous (normative) statements of how observable variables and states result in a particular score. Examples of these normative procedures are the already introduced ST&M serviceability rating methods and the sustainability rating method LEED (US Green Building Council, 2001). All of the above provide the PI(p) mapping, based on a set of rules that describe the experiment in precise terms to perform the following two steps without any ambiguity:

Step 1: Perform an experiment that results in Object_state(experiment_variables, object(p); t)

Step 2: Perform time and space aggregation over Object_state, resulting in PI(experiment_variables, p)

All virtual experiments could be performed by a full dynamic simulation. However, this is time consuming and defeats the purpose of a rapid repetitive evaluation. A full simulation would also introduce bias into the PI quantification, as no simulation can be performed without introducing assumptions and simplifications, often dependent on the type of simulation tool. The remedy against this bias is the introduction of 'normative' calculation procedures. These procedures are derived such that the indicators are indicative and objective measures of a certain performance aspect. Although the resulting value cannot be taken as an absolute measure of an observable physical variable, the approach is ideal for comparative studies. An even bigger advantage of a normatively declared PI is that its value can be calculated directly from the relevant set of building and operation parameters. A normative indicator is in fact a measure defined through a simplified but indicative (virtual) experiment. The simplification of the experiment allows the derivation of, and aggregation over, the output state of the experiment to be expressed as closed equations. The resulting set of normative calculations represents the best of both worlds: the physics-based approach of simulation and the normative nature and ease of use of the rating methods. They represent the approach taken in the performance toolkit, which is the main subject of this paper. In the toolkit, the above-mentioned normative calculations are executed while the user is prompted for all needed building and operation parameters, which can be derived from design-records analysis or physical on-site walk-throughs.
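As a concrete illustration of the two steps, the sketch below builds a toy version of the thermal comfort example in Python: a synthetic hourly temperature series stands in for the Step 1 experiment output, and two alternative aggregations (Step 2) turn the same output into two different PIs. The threshold and the temperature series are illustrative assumptions, not toolkit values.

```python
import math
from typing import Sequence

def run_virtual_experiment(hours: int = 8760) -> list:
    """Stand-in for Step 1: deliver the hourly observable state (room air
    temperature, degC) that a simulation or normative calculation would
    produce. Synthetic sine-wave data, for illustration only."""
    return [22.0 + 6.0 * math.sin(2.0 * math.pi * h / 8760.0) for h in range(hours)]

def pi_exceedance_hours(temps: Sequence, threshold: float = 26.0) -> float:
    """Step 2, aggregation A: hours per year above a comfort threshold."""
    return float(sum(1 for t in temps if t > threshold))

def pi_mean_exceedance(temps: Sequence, threshold: float = 26.0) -> float:
    """Step 2, aggregation B: mean exceedance (degC) during the hot hours."""
    hot = [t - threshold for t in temps if t > threshold]
    return sum(hot) / len(hot) if hot else 0.0

# One experiment, two PIs: the aggregation, not the experiment, defines the metric.
state = run_virtual_experiment()
print(pi_exceedance_hours(state), round(pi_mean_exceedance(state), 2))
```

The same pattern carries through all the PIs that follow: fix the experiment precisely, then choose the aggregation that suits the dialogue at hand.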

Assessing existing buildings

The need for continuous monitoring of the technical performance of buildings over their lifetime becomes obvious if one realizes that buildings undergo drastic changes and redefinition of their internal client processes. These changes are caused by internal reorganizations, refurbishments, changes of tenants, etc. The natural degradation of the technical systems is another reason to monitor performance and link it to maintenance scheduling. To compare over time and between buildings, one needs objectively quantifiable measures. In the toolkit, these measures are implemented as a set of uniquely defined 'performance indicators' that provide cost-effective, quantitative assessments of how well buildings enable specific client functions. The first version of the toolkit provides PIs for energy, lighting, thermal comfort and maintenance, based on standardized and normative calculation routines that reflect the building in its current state and environment and in its actual usage situation. Based on laws from biophysics and physiology, these measures quantify the contribution of a building system to producing a desired condition, related to an activity or need of the tenant or another stakeholder. In other words, given certain visual tasks and a workstation location, a lighting-system-related PI will quantify how closely the current 'lighting system' supports these visual tasks. The latter is in fact a complex interplay of many subsystems, i.e. the daylighting system, internal enclosure system, artificial lighting system, furniture system and possibly others. A PI must be defined such that it allows the evaluator to understand how multiple building systems interact to produce a given level of building performance. A PI can be used to state expectations as well as to quantify actual performance fulfilment.

The set of PIs has been harnessed in an operational building performance assessment toolkit. The toolkit is relevant for large tenant organizations, design teams in the architecture/engineering (A/E) procurement stages, and corporate owners who realize that buildings should be constantly monitored to guarantee a safe, healthy and productive environment for their occupants. For a government organization such as the GSA, the use of the toolkit will deliver the data that underpin the day-to-day service management as part of the GSA's tactical operations (Figure 2).

Figure 2 Landscape of building performance assessment. POE = post-occupancy evaluation; ST&M = serviceability tools and methods; A/E = architectural and engineering services

Figure 2 shows how the buildings in the portfolio will be assessed regularly (e.g. once a year). The gathered data will be entered and PI quantifications will be made automatically by the toolkit routines. The data are subsequently uploaded to a database where PIs are stored per category; currently, the toolkit contains roughly 35 indicators. The uploaded data can be browsed and linked to other data about the monitored assets (pictures, floor plans, maintenance records, etc.). Access to the records of the facility over previous years provides a way to inspect deterioration trends or sudden changes in performance, e.g. when a new tenant is allocated to the building. PIs measure the building-tenant combination, as the virtual experiment measures the building's behaviour relative to the functions and needs of an occupant. As Figure 2 implies, all performance data are accessible for total building quality management. All other sources of data, such as local physical measurements, post-occupancy evaluation data, design programmes, ST&M records, etc., will be linked to enable data mining for the business-intelligence gathering necessary to improve design development, procurement and facility maintenance procedures.

Elements of the toolkit

The currently available PIs are shown in Table 1. Each PI development starts from a scientifically proven method, which is then used to derive a certain performance measure, i.e. a measure that indicates the level of fulfilment of a desired capability of a building system. As illustrative examples, the rudimentary principles of some of the calculations are given below.

Energy

Building energy calculations can be made at different levels of accuracy and complexity. The simplest approach disregards all transient effects, leading to a set of algebraic equations: with the degree-days given for a geographical location, the annual energy consumption is estimated from the overall thermal conductance of the building envelope. Many studies have shown this approach to be too simple. At the opposite end of the complexity scale one finds sophisticated dynamic energy simulation programs such as the European ESP-r (Clarke, 1999), the US standard program DOE-2 and its successor EnergyPlus (Crawley et al., 1999), and the IDA program (Sahlin, 1996). These programs require that the whole building is modelled with the specification of every single material parameter, dynamic environmental conditions, the heating, ventilation and air-conditioning (HVAC) system, controls, occupancy regime, etc. The disadvantage of this approach is that modelling the building system is time consuming and requires comprehensive data preparation and processing. For a complex facility, it will typically take weeks to perform a full dynamic simulation.

The toolkit offers PI calculations situated between these two extremes. For the energy PIs, all calculations are based on the Dutch standard NEN 2916, which provides the calculation method that underpins the Energy Performance Norm (EPN; http://www.enper.org/index.htm?/pub/codes/codes.htm). The EPN was developed to regulate the total energy use of commercial buildings in the Netherlands, and slight variations of this norm are being tested in other European countries. The application of the EPN results in a single number, the energy performance coefficient, with the calculation method standardized in NEN 2916 (1999). The main purpose of the EPN is to enable regulators to limit the total energy use of a building, including all installations and building services (HVAC, water heater, lighting, etc.). One prominent feature of this approach is that it avoids any prescription of building system components, contrary to current prescriptive codes, e.g. ASHRAE Standard 90.1-2001.

In line with the chosen NEN 2916 approach, the energy PIs calculate the energy consumption of the seven main energy consumers (heating, cooling, humidifying, lighting, pumps, fans, domestic hot water). All PIs are calculated through the set of specified normative calculations captured in a spreadsheet. All energy flows in the procedure are expressed in units of primary energy, based on multiplication of the building system efficiency by the generation and delivery efficiencies. The utilization factors accounting for dynamic effects were determined through an extensive parameter-estimation study on a large set of office buildings in Europe. This set of buildings was deemed representative enough for the norm to be used in the US without further adaptation. For easy deployment in the toolkit, the EPN calculation was programmed into an MS .NET ASP web application with prestored reference climate data for 252 cities in the US.

Note that there are other popular approaches in the US to reflect a building's energy performance, such as Energy Star (www.energystar.gov), LEED (www.usgbc.org) and ASHRAE Standard 90.1-2001. The Energy Star labelling method is designed to find a percentile ranking that expresses how well the building performs compared with 'peer' buildings of similar use, based on the data set of the Commercial Buildings Energy Consumption Survey (CBECS). Thus, the Energy Star score provides a relative measure based on 'peer comparison' and lacks the objectivity needed for an 'absolute' evaluation of the building's energy performance, because the pool of similar buildings changes every year. The LEED rating method, developed as a design guideline, is a consensus-based sustainability rating system that evaluates environmental performance from a 'whole building' perspective over the building service life. ASHRAE/IESNA 90.1, a classical prescriptive approach, addresses criteria for different building components.

Table 1 Performance Indicators (PIs)

| Aspect | Function | PI | Meaning | Calculated by |
| --- | --- | --- | --- | --- |
| Energy | Energy | 1–7 | heating, cooling, humidifying, lighting, pumps, fans, domestic hot water (MJ) | NEN (1999) |
| Lighting | Energy efficacy | 1 | electric lighting energy consumption (kWh/m².year) | NEN (1999) |
| | | 2 | luminous efficacy of luminaires in LER (lumens/watt) | National Electrical Manufacturers Association (2001) |
| | | 3 | daylighting autonomy: per cent of hours without requiring electric lighting | IESNA (2001) |
| | Task lighting | 4 | ratio of task illuminance as installed and as required | IESNA (2001) |
| | View to outside | 5 | outward visibility: percentage of occupants who can see the outside from their workplaces | n/a |
| | Visual comfort | 6 | daylighting glare avoidance: percentage of office hours in the discomfort range (Daylighting Glare Index > 24, 'just uncomfortable') | Chauvel et al. (1982) |
| | | 7 | shading devices for glare avoidance and energy saving (under development) | n/a |
| Thermal comfort | Air diffusion | 1 | occupants in comfort (%) | ASHRAE (2001) |
| | Asymmetrical thermal radiation due to hot/cold glazing | 2 | hourly average Predicted Percentage Dissatisfied (PPD) during office hours over one year | ASHRAE (1992, 2001) |
| | | 3 | hours (%) where the PPD is in the comfort range (≤10%) | |
| | | 4 | average of PPD, where PPD is in the comfort range | |
| | Cold draft caused by glazing | 5 | hourly average PD during office hours over one year | Fanger and Christensen (1986), Heiselberg (1994) |
| | | 6 | hours (%) where the PD is in the comfort range (≤10%) | |
| | | 7 | average of PD, where PD is in the comfort range | |
| | Occupants' variation | 8 | average PPD of workers in different activities and clothing levels | ASHRAE (2001) |
| | Zoning | 9 | airflow rate variation in different rooms within a single thermostat zone | Friedman (2004) |
| | System's capacity and response time | 10 | minutes required to increase the zone temperature by 1°C under the peak heating load | n/a |
| | | 11 | minutes required to decrease the zone temperature by 1°C under the peak cooling load | |
| Maintenance | Efficiency | 1 | Building Performance Indicator (BPI), scaled from 0 to 100 | Shohet et al. (2003) |
| | | 2 | Maintenance Efficiency Indicator (MEI) | |
| | Business and organization | 3 | Manpower Sources Diagram (MSD): ratio of in-house and outsourcing expenditures | Chan et al. (2001) |
| | | 4 | Managerial Span of Control (MSC): ratio of a manager and subordinated personnel | |
| | | 5 | business availability (%): available floor area over the entire floor area over one year | |
| | | 6 | Manpower Utilization Index (MUI) (%): ratio of man-hours spent on maintenance and total available man-hours | Barber and Hilberg (1995) |
| | | 7 | Preventive Maintenance Ratio (PMR) (%): ratio of man-hours spent on preventive maintenance and total maintenance (preventive plus corrective) | |
| | Failure frequency and timeliness | 8 | Urgent Repair Request Indicator (URI) and General Repair Request Indicator (GRI): occurrences/10 000 m² | Chan et al. (2001) |
| | | 9 | average time to repair (ATTR): unit repairing time (h) | |
| | Policy | 10 | maintenance productivity: state/$ (under development) | Augenbroe and Park (2002) |

The above three approaches are not contained in the toolkit, as they either do not represent a performance measure (ASHRAE) or lack the scientific basis to be used as one (LEED and Energy Star).
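The NEN 2916 procedure itself runs to many pages of normative coefficients, so the fragment below only sketches the general shape of the primary-energy roll-up described above; the demand figures and the efficiency triples are hypothetical placeholders, not normative values.

```python
# Sketch of the primary-energy roll-up behind the seven energy PIs.
# All numbers are hypothetical placeholders, not NEN 2916 values.
CONSUMERS = ("heating", "cooling", "humidifying", "lighting",
             "pumps", "fans", "domestic_hot_water")

def primary_energy(net_demand_mj: dict, efficiency: dict) -> dict:
    """Convert net energy demand per consumer (MJ) into primary energy (MJ)
    by dividing by the combined system x generation x delivery efficiency."""
    out = {}
    for consumer in CONSUMERS:
        sys_eff, gen_eff, del_eff = efficiency[consumer]
        out[consumer] = net_demand_mj[consumer] / (sys_eff * gen_eff * del_eff)
    out["total"] = sum(out[c] for c in CONSUMERS)
    return out

demand = {c: 1_000_000.0 for c in CONSUMERS}      # placeholder net demands (MJ)
eff = {c: (0.85, 0.90, 0.95) for c in CONSUMERS}  # placeholder efficiencies
print(round(primary_energy(demand, eff)["total"]))
```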

Lighting

Lighting is considered one of the most important features of the work environment affecting work satisfaction (Wineman, 1982). A survey of worker satisfaction in office buildings found that over 90% of office workers felt that the amount and quality of light for reading and writing were very important, as were the amount and quality of light for computing, filing and other tasks (Ne'eman et al., 1984). Viewed in an economic sense, electric lighting is a significant energy consumer: about 20–25% of the electricity used in buildings, and about 5% of total energy consumption in the US, goes to lighting. Lighting also produces additional heat in buildings, which is sometimes beneficial in cold climates but generally represents a significant cooling load on air-conditioning systems; the heat from lighting typically accounts for 15–20% of a building's cooling energy (IESNA, 2001). Lighting (including daylighting) performance can be measured for the following performance aspects: energy efficacy, task lighting, view to the outside and visual comfort. To reflect these performance aspects, the following seven indicators are introduced, based on scientific methods found in the literature (Table 1). What follows is a short description of the calculation of each PI.


Lighting energy consumption (PI1)

The annual lighting energy consumption per unit area can be calculated as follows (NEN, 1999):

$$E = \frac{P \cdot f_{\mathrm{control}} \cdot f_{\mathrm{occupancy}} \cdot t_{\mathrm{day}}}{A} \qquad (1)$$

where E is the electric energy consumption for lighting in a year (kWh/m²), P is the total installed capacity of the lighting armatures (kW), f_control is the factor for the lighting control system, A is the total floor area (m²), t_day is the number of burning hours per year (h), and f_occupancy is the occupancy factor. f_control accounts for the energy savings due to daylight control, dimming control, occupancy control, etc., and can be taken from NEN (1999); f_occupancy is 0.8 if occupancy sensors serve more than 70% of the floor area and 1.0 in other cases.

Luminous efficacy of luminaires (PI2)

The luminaire efficacy rating (LER) is a metric for the energy efficacy of individual luminaires, whereas PI1 assesses the energy efficiency of a building's lighting system as a whole. The LER is calculated as shown in equation (2), according to the National Electrical Manufacturers Association (2001); the required inputs are acquired from the manufacturers' catalogues:

$$\mathrm{LER} = \frac{\text{photometric efficiency} \times \text{total lumens} \times \text{ballast factor}}{\text{luminaire input watts}} \qquad (2)$$
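Equations (1) and (2) translate directly into code; the sketch below uses placeholder inputs rather than NEN (1999) factors or NEMA catalogue data.

```python
def lighting_energy_pi1(p_installed_kw: float, f_control: float,
                        f_occupancy: float, t_day_hours: float,
                        floor_area_m2: float) -> float:
    """Equation (1): annual lighting energy E in kWh/m2.year."""
    return p_installed_kw * f_control * f_occupancy * t_day_hours / floor_area_m2

def luminaire_efficacy_rating(photometric_efficiency: float, total_lumens: float,
                              ballast_factor: float, input_watts: float) -> float:
    """Equation (2): luminaire efficacy rating (LER), lumens per watt."""
    return photometric_efficiency * total_lumens * ballast_factor / input_watts

# Placeholder inputs: 12 kW installed, daylight-control factor 0.8, occupancy
# sensors on more than 70% of the floor (0.8), 2500 burning hours, 1000 m2.
print(lighting_energy_pi1(12.0, 0.8, 0.8, 2500.0, 1000.0))  # -> 19.2 kWh/m2.year
print(round(luminaire_efficacy_rating(0.70, 5800.0, 0.95, 64.0), 1))  # lumens/watt
```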

Daylighting autonomy (PI3)

Daylighting utilization can save energy by reducing the usage time of electric lighting and by decreasing the heat generation from luminaires. Using the Lumen method (IESNA, 2001), the hourly interior daylit illuminance is calculated at five points along a line at 0.1, 0.3, 0.5, 0.7 and 0.9D, where D is the depth of the space. PI3 is then defined as the percentage of hours during which no electric lighting is required (i.e. daylight alone delivers the required 500 lux for office buildings). The relevance of this indicator is limited to spaces with an individual (task) lighting switch.
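Once an hourly interior daylight illuminance series is available (producing it requires the Lumen-method tables, which are not reproduced here), PI3 reduces to a percentage count. A minimal sketch with a synthetic illuminance series:

```python
import random

def daylight_autonomy_pi3(daylight_lux: list, required_lux: float = 500.0) -> float:
    """PI3: percentage of office hours in which daylight alone reaches the
    required illuminance (500 lux for offices), so no electric lighting is needed."""
    hours_ok = sum(1 for lux in daylight_lux if lux >= required_lux)
    return 100.0 * hours_ok / len(daylight_lux)

# Synthetic hourly illuminances for one year of 9-h office days (n = 2340).
random.seed(1)
series = [random.uniform(0.0, 1500.0) for _ in range(9 * 5 * 52)]
print(round(daylight_autonomy_pi3(series), 1), "% of office hours")
```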

Task lighting (PI4)

Proper task illuminance at the work plane is essential to guarantee an adequate luminous environment under which a normal office task can be performed. Recommended values of task illuminance can be found in the literature (CIE, 1986; CIBSE, 1997; IESNA, 2001). To calculate the task illuminance level of a work plane, the zonal cavity method (IESNA, 2001) is employed. With data entries such as the room geometry, the workstation location, the layout and type of the luminaires, and a representative activity in the space, the toolkit returns PI4, the ratio of the task illuminance as designed and as required.

View to outside (PI5)

People prefer windows because of their beneficial qualities, such as a view, natural light and a feeling of openness. The outward visibility indicator is thus defined as the percentage of occupants who can see the outside from their workstations in a space.

Daylighting glare avoidance (PI6)

Discomfort glare is caused by high or non-uniform luminance. One glare formula, the daylighting glare index (DGI), has been developed to determine the discomfort glare from windows (Chauvel et al., 1982). With inputs such as the dimensional data of the room and windows, the toolkit calculates the hourly DGI over an entire year and the percentage of hours in the discomfort range (DGI > 24).

Shading devices (PI7)

Office buildings have shading devices such as louvers, blinds, rolling screens, etc. These devices play a role in glare avoidance and energy savings. With regard to these two aspects, PI7 is under development.

Thermal comfort

To assess thermal comfort, 11 PIs have been developed to account for space air diffusion, asymmetric radiation, cold draft, occupants' variation, HVAC zoning, and a system's capacity and response time (Table 1).

The air diffusion performance index (ADPI) is well established and was developed as a design guide for selecting appropriate air supply devices. It is a measure of how temperature variation, air mixing and the occurrence of (cold) drafts affect thermal comfort in the occupied zone of the space (typically 150 mm to 1.8 m above the floor at given workstation locations). This is particularly relevant for the interior part of open office layouts, where air streams from diffusers may interact with each other, and for the perimeter zones, where air streams from diffusers interact with hot or cold perimeter walls (ASHRAE, 2001). The ADPI is defined as the percentage of test points where the effective draft temperature (θ) and the air velocity (V) meet the desirable criteria of equation (3), and it can be regarded as an indicator of the percentage of people who are comfortable in office occupations. In other words, an ADPI of 100% means the most desirable condition:

$$-3\,^{\circ}\mathrm{F} \le \theta \le 2\,^{\circ}\mathrm{F} \;\; (-1.7\,^{\circ}\mathrm{C} \le \theta \le 1.1\,^{\circ}\mathrm{C}), \qquad V \le 70\ \mathrm{fpm} \;\; (V \le 0.35\ \mathrm{m/s}) \qquad (3)$$

In the toolkit, the ADPI is calculated using the tables given in ASHRAE (2001), with information on the layout and specifications of the installed air diffusers, the volumetric airflow rate to the space, the characteristic length of the space, etc. Note that the ADPI (PI1) is based only on air velocity and effective draft temperature (a combination of local temperature variations from the room average) and does not account for asymmetric thermal radiation and draft due to cold/hot glazing surfaces. These effects are measured by separate PIs, as discussed below.

To account for the asymmetric thermal radiation caused by hot/cold glazing surfaces, one of the most common reasons for discomfort in the perimeter space, three PIs (PI2-4) are introduced. They are based on the predicted mean vote and the predicted percentage dissatisfied (PPD), estimated from the glazing surface temperature and the corresponding mean radiant temperature. For the sake of a normative calculation of the predicted mean vote and its corresponding PPD, the following assumptions are made:

- the room air temperature is assumed to be the optimal room air temperature, defined as the desired temperature to achieve a predicted mean vote of 0, a thermally ideal condition for the given occupant's office activity and clothing level
- the air velocity is assumed to be 0.1 m/s and the relative humidity 50%

Based on the diurnal and seasonal hourly weather variations obtained from typical meteorological years (TMY2), the hourly glazing temperature is calculated using a steady-state calculation method with information on the U-values of 48 different types of glazing systems provided by ASHRAE (2001). The corresponding mean radiant temperature is then estimated in accordance with ASHRAE Standard 55 (1992), resulting in PI2-4 as follows:

$$\mathrm{PI}_2 = \frac{\sum_{k=1}^{n} \mathrm{PPD}_k}{n} \qquad (4)$$

$$\mathrm{PI}_3 = \frac{\sum_{k=1}^{n} [\mathrm{PPD}_k]^{+}}{n} \times 100 \qquad (5)$$

$$\mathrm{PI}_4 = \frac{\sum_{l=1}^{p} \mathrm{PPD}_l}{p} \qquad (6)$$

where PI2 is the average PPD; n is the number of data points (if the sampling time is 1 h over one year, for nine office hours and five working days a week, n = 9 × 5 × 52 = 2340); PI3 is the percentage of hours where the PPD is in the comfort range (≤10%); [PPD_k]^+ is set to 1 if PPD_k ≤ 10% (and 0 otherwise); PI4 is the average of the PPD where the PPD is in the comfort range; and p is the number of occurrences where the PPD is in the comfort range.

Regarding the cold draft due to a cold glazing surface, three PIs (PI5-7) are introduced, based on the percentage of the population feeling draft when exposed to a given mean air velocity. With the calculated hourly glazing temperature used for PI2, the maximum air velocity of the induced draft (U_max(x); m/s) can be calculated (Heiselberg, 1994):

$$U_{max}(x) = 0.055\sqrt{h\,\Delta t} \quad \text{if } x < 0.4 \qquad (7)$$

$$U_{max}(x) = 0.095\,\frac{\sqrt{h\,\Delta t}}{x + 1.32} \quad \text{if } 0.4 \le x \le 2.0 \qquad (8)$$

$$U_{max}(x) = 0.028\sqrt{h\,\Delta t} \quad \text{if } x > 2.0 \qquad (9)$$

where h is the height (m) of the vertical surface, Δt is the temperature difference (°C) between the cooled surface and the reference in the occupied space, and x (m) is the distance from the cold vertical glazing surface. Based on the calculated maximum air velocity, the per cent dissatisfied can be calculated as follows (Fanger and Christensen, 1986):

$$\mathrm{PD} = 13800\left[\left(\frac{v - 0.04}{t_a - 13.7} + 0.0293\right)^2 - 0.000857\right] \ (\%) \qquad (10)$$

where PD is the percentage of people dissatisfied due to draft, t_a is the air temperature (°C) and v is the mean air velocity (m/s), which can be estimated in relation to U_max (Fanger and Christensen, 1986). Finally, in the same manner, the following three PIs are calculated:

$$\mathrm{PI}_5 = \frac{\sum_{k=1}^{n} \mathrm{PD}_k}{n} \qquad (11)$$

$$\mathrm{PI}_6 = \frac{\sum_{k=1}^{n} [\mathrm{PD}_k]^{+}}{n} \times 100 \qquad (12)$$

$$\mathrm{PI}_7 = \frac{\sum_{l=1}^{p} \mathrm{PD}_l}{p} \qquad (13)$$

where PI5 is the average PD, PI6 is the percentage of hours where the PD is in the comfort range (≤10%), [PD_k]^+ is set to 1 if PD_k ≤ 10% (and 0 otherwise), and PI7 is the average PD where the PD is in the comfort range.
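Equations (7)-(13) chain together as sketched below. The glazing height, the synthetic temperature-difference series and the assumed ratio of the mean velocity v to U_max are illustrative choices (the paper estimates v from U_max via Fanger and Christensen (1986) without stating a ratio here), and the constant 0.000857 is used as printed in equation (10).

```python
import math

def u_max(x_m: float, h_m: float, dt_k: float) -> float:
    """Equations (7)-(9): peak downdraught velocity (m/s) at distance x (m)
    from a cold vertical surface of height h (m), surface-air difference dt (K)."""
    root = math.sqrt(h_m * dt_k)
    if x_m < 0.4:
        return 0.055 * root
    if x_m <= 2.0:
        return 0.095 * root / (x_m + 1.32)
    return 0.028 * root

def percent_dissatisfied(v: float, t_air: float) -> float:
    """Equation (10): per cent dissatisfied due to draft."""
    return 13800.0 * (((v - 0.04) / (t_air - 13.7) + 0.0293) ** 2 - 0.000857)

def comfort_range_pis(pd_hourly: list, limit: float = 10.0):
    """Equations (11)-(13): average PD, per cent of hours in the comfort
    range (PD <= limit), and average PD within the comfort range."""
    n = len(pd_hourly)
    pi5 = sum(pd_hourly) / n
    in_range = [pd for pd in pd_hourly if pd <= limit]
    pi6 = 100.0 * len(in_range) / n
    pi7 = sum(in_range) / len(in_range) if in_range else 0.0
    return pi5, pi6, pi7

# Illustrative case: 3-m-high glazing, synthetic hourly surface-air temperature
# differences over 2340 office hours, workstation 0.6 m from the glazing, and
# an assumed mean velocity of 0.8 * U_max.
dts = [2.0 + 4.0 * abs(math.sin(k / 300.0)) for k in range(2340)]
pd_series = [percent_dissatisfied(0.8 * u_max(0.6, 3.0, dt), 22.0) for dt in dts]
print(tuple(round(x, 1) for x in comfort_range_pis(pd_series)))
```

The same comfort_range_pis aggregation, fed with hourly PPD values instead of PD, reproduces equations (4)-(6).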

Note that PI5-7 apply only to people wearing normal indoor clothing and performing light, mainly sedentary office work; those with higher activity levels are not as sensitive to draft (Jones et al., 1986). Additionally, if a heat supply device, e.g. a convector, is placed close to or under the cold vertical surface to cancel out the downward cold draft, PI5-7 are not applied, because such a system is designed to prevent a draft.

To account for the occupants' variation in office activity and clothing, PI8 is introduced, defined as the average PPD of different client workers under the condition that the HVAC system ideally provides the thermal environment for the 'design person' in a space. In other words, this indicator represents the difference in thermal comfort level between the design person that the HVAC system aims to satisfy and client workers whose office activities and clothing levels differ from the design person's.

To evaluate HVAC zoning problems, a conventional technique is introduced. For zones to work with single-point thermostat control, the proportional capacity requirement must not change significantly over a variety of internal and external load variations. Thus, PI9 is introduced to reflect the load variation of different spaces in a single control zone. If the airflow at peak loads varies by more than 10-15% throughout the year, additional zoning is required (Friedman, 2004). The heating and cooling load calculations are based on the Radiant Time Series (RTS) method (ASHRAE, 2001, ch. 29). As shown in Table 2, the PI9 of room 1 can be calculated as follows:

$$\mathrm{PI}_{9,\mathrm{room\,1}} = 100 - |62.6 - 63.8| = 98.8 \qquad (14)$$

Table 2 Airflow requirement per space

| Room | Airflow, July cooling (l/s) | Share of total, July (%) | Airflow, January heating (l/s) | Share of total, January (%) | PI9 (%) |
| --- | --- | --- | --- | --- | --- |
| 1 | 882 | 62.6 | 655 | 63.8 | 98.8 |
| 2 | 285 | 20.3 | 179 | 17.5 | 97.2 |
| 3 | 241 | 17.1 | 191 | 18.7 | 98.5 |
| Total | 1408 | 100 | 1025 | 100 | |

Finally, to address the system's capacity and response time, PI10 and PI11 are introduced. With information on the size and efficiency of the chillers and boilers, and on the peak heating and cooling loads, the system's capacity is assessed (satisfactory/unsatisfactory) and the response time (T), defined as the time (min) required to increase/decrease the zone temperature by 1°C under the peak load, is calculated as follows:

$$T = \frac{\rho C_p V}{60\,(Q_{mech} - Q_{load})} \qquad (15)$$

where ρC_pV is the lumped heat capacity of the building, i.e. the product of density, specific heat and volume; Q_mech is the total energy (W) generated by the installed chillers or boilers and delivered to the space; and Q_load is the peak heating or cooling load (W) that has to be removed from the space, calculated by the aforementioned RTS method (ASHRAE, 2001). Equation (15) is derived in a lumped, steady-state fashion and thus may not reflect reality as accurately as a full three-dimensional dynamic simulation; the interest, however, lies not in high accuracy but in a workable approach that is normative and easy to use.
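Equations (14) and (15) are one-line computations. In the sketch below, the PI9 call reproduces the room 1 figures from Table 2, while the response-time example assumes illustrative zone and plant values and reads the factor of 60 in equation (15) as the seconds-to-minutes conversion.

```python
def pi9(pct_cooling: float, pct_heating: float) -> float:
    """Equation (14): zoning PI from a room's share of the zone airflow at the
    cooling peak and at the heating peak (both as % of the zone total)."""
    return 100.0 - abs(pct_cooling - pct_heating)

def response_time_minutes(rho: float, cp: float, volume: float,
                          q_mech: float, q_load: float) -> float:
    """Equation (15): minutes to move the zone temperature by 1 degC under
    peak load, from the lumped heat capacity rho * cp * V (J/K) and the
    surplus plant power Q_mech - Q_load (W)."""
    return rho * cp * volume / (60.0 * (q_mech - q_load))

print(round(pi9(62.6, 63.8), 1))  # room 1 in Table 2 -> 98.8
# Illustrative zone: air at 1.2 kg/m3 and 1005 J/(kg.K), 900 m3 of space,
# 45 kW delivered against a 30 kW peak load -> about 1.2 min per degC.
print(round(response_time_minutes(1.2, 1005.0, 900.0, 45_000.0, 30_000.0), 1))
```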


Maintenance

The maintenance toolkit includes ten PIs that measure maintenance performance, efficiency and cost-effectiveness. The currently existing indicators (PI1-9) are based on Shohet et al. (2003), Chan et al. (2001) and Barber and Hilberg (1995). An additional indicator, PI10, is under development to compare various maintenance policies on cost and maintained building state. PI10 is defined as a ratio of a building's state and its maintenance cost, and may be used for maintenance-policy justification and budget allocation (Augenbroe and Park, 2002). More details on the maintenance measures will be published in forthcoming papers.
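Several of the maintenance indicators in Table 1 are plain ratios; a minimal sketch with invented staffing and work-order figures:

```python
def manpower_utilization_index(maintenance_hours: float,
                               available_hours: float) -> float:
    """MUI (%): man-hours spent on maintenance over total available man-hours."""
    return 100.0 * maintenance_hours / available_hours

def preventive_maintenance_ratio(preventive_hours: float,
                                 corrective_hours: float) -> float:
    """PMR (%): preventive man-hours over total maintenance man-hours."""
    return 100.0 * preventive_hours / (preventive_hours + corrective_hours)

def repair_request_indicator(requests: int, gross_floor_area_m2: float) -> float:
    """URI/GRI: repair-request occurrences per 10 000 m2 of floor area."""
    return requests / (gross_floor_area_m2 / 10_000.0)

# Invented figures for a 148 000 m2 facility.
print(round(manpower_utilization_index(6400.0, 8320.0), 1))    # %
print(round(preventive_maintenance_ratio(3900.0, 2500.0), 1))  # %
print(round(repair_request_indicator(220, 148_000.0), 1))      # per 10 000 m2
```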

Realization: web-enabled toolkit

Figure 3 Energy Performance Indicator entry screen example: entering mechanical system information

Figure 4 Thermal comfort Performance Indicator entry example: calculation screen

The web-enabled execution of the toolkit provides the following advantages: easy data entry and storage;

ubiquitous access to the toolkit; data sharing with other electronic repositories; and user friendliness. Figures 3 and 4 show two examples of data-entry forms in web-browser mode. The general systems architecture of the web-enabled toolkit for GSA use is shown in Figure 5. GSA facility managers enter building information through a standard web-browser window on either a personal digital assistant or a desktop computer. For ease of data capture, the facility managers can use a wireless pocket computer as the input device during a walk through the building. With the data entered in a browser, the web server returns to the end user the PIs calculated by the programmed MS .NET ASP web application. With the increasing number of PIs and GSA buildings, it will become necessary to develop a framework that stores the assessed buildings' performance data and links them to the facility and property managers' database environment, where the GSA's strategic business decisions can be optimally determined with the assessed PIs.

Figure 5 Architecture of the web-enabled toolkit. PDA, personal digital assistant; DB, database

Benchmarking the toolkit

To calibrate/validate the toolkit and investigate its applicability, usability and adequacy in daily practice, a medium-scale benchmark was conducted on ten selected GSA facilities. The current version of the toolkit was validated with metered/measured/gathered data, leading to the calibration necessary to account for local, situational and sometimes idiosyncratic use, operation and maintenance. The applicability and usability of the toolkit were also investigated with respect to the required time and effort. Through the continuing benchmarking, the opportunities for internal GSA use of the toolkit are being examined. The benchmark process consists of three stages:

- site visits: field survey and data collection with the help of GSA facility managers
- data entry into the toolkit
- calculation of PIs and the resultant analysis

In this section the focus is on the energy toolkit benchmark, since the benchmarks of the lighting, thermal comfort and maintenance PIs are still in progress.

Energy toolkit benchmarking

As the target of the energy toolkit benchmarking, the Sam Nunn Atlanta Federal Center (AFC) in Atlanta, GA, US, was chosen. The AFC (Figure 6) is one of the largest federal office buildings on the East Coast and provides office facilities for multiple federal agencies. It consists of four connected buildings (a 24-storey tower, a six-storey bridge, a ten-storey mid-rise tower and a six-storey building). The total building is regarded (as defined by the EPN) as one energy sector, served by five chillers. The total floor area is about 148 000 m², accommodating approximately 4500 workers.

Figure 6 Sam Nunn Atlanta Federal Center (AFC), Atlanta, GA, US

Three visits were made to the building for data collection. First, the floor area, wall area, window area and U-values were calculated from available building records. Then, relevant system information pertaining to the HVAC system and its operational scheduling, HVAC fan power ratings (kW), etc., was obtained through the facility manager. After filling in the toolkit data-entry form with all relevant data items, the energy PIs show up instantly, as shown in Table 3. To compare with real consumption data, the computations were carried out with the actual Atlanta climate for 2000 and 2001. Note that for use in regular service management, the calculations are based on the standardized reference year for Atlanta, enabling a year-to-year comparison of energy performance that is not biased by weather variations.

Table 3 Breakdown of energy consumers in the Sam Nunn building

| Performance indicator | Energy consumer | Average energy (MJ), 2000–01, and (%) |
| --- | --- | --- |
| 1 | Heating | 8 553 494 (7.6) |
| 2 | Fans for ventilation and air circulation | 9 090 290 (8.0) |
| 3 | Lighting | 54 949 602 (48.6) |
| 4 | Pumps | 4 579 134 (4.0) |
| 5 | Cooling | 30 225 076 (26.7) |
| 6 | Humidifying | 4 006 742 (3.5) |
| 7 | Domestic hot water | 1 691 157 (1.5) |
| | Total | 113 095 495 (100.0) |

Although the toolkit calculation is normative (as explained above), the resulting calculation is in many cases accurate enough to be used as a reliable estimate of the expected real energy consumption. An additional advantage of the normative indicator calculation is that it can provide a breakdown of energy consumers (Table 3). This breakdown helps building owners and facility managers see where attention should be focused and where budget should be allocated to improve energy performance for a specific building or, in general, for the whole building stock. Due to a lack of energy sub-metering in the AFC, the breakdown in Table 3 could not be compared with metered data on site. But diagnostic use is not the


primary purpose of the PI. Rather, it defines the performance of the different energy consumers, enabling the comparison across buildings of the energy consumed by each category. It will enable the GSA to define minimum performances for lighting or cooling, contingent on certain client and building types. This will lead to a much more refined 'energy service management' based on the PI instruments than currently offered by consumption measurement (with or without sub-metering).

The differences between the normative and actual total yearly energy consumption in 2000 and 2001 are 2.2 and 15.4%, respectively. The deviations mostly occur during the winter season (Figure 7). These represent deviations between the normative expectation and the real energy consumption, and are attributable to occurrences that are not part of the 'as designed' operation of the building. In this case, the two main reasons for the deviations became obvious during an on-site inspection: (1) the 24-hour operation of the HVAC system to prevent freezing (this was outside the use specification of the system and therefore not considered in the normative calculation) and (2) the use of personal heaters by workers. The latter occurred because the thermostat settings of the 'as designed' operation proved too low to provide sufficient levels of thermal comfort.

Figure 7 Comparison of the calculation and energy consumption (MJ)

The AFC did not obtain Energy Star labelling over the past two years. The rating was about 62 on the Energy Star office scale and the energy bill in 2001 was about US$1.6 million. In order for the building to obtain Energy Star labelling (a rating of 75), an 18% reduction of energy consumption, equivalent to 107 991 361 MJ/year, is required (Oak Ridge National Laboratory, 2002). By using the toolkit, a simple sensitivity study was easily made by changing several parameters, showing how the building's Energy Star rating could be improved from 62 to 68 (Table 4). As exemplified here, the 'normative' nature of the toolkit makes it readily usable for a sensitivity/feasibility study of new buildings in the design stage or of existing buildings to be refurbished. Even with a combination of drastic design changes (orientation and window ratio) and operational changes (lighting controls), the building would not obtain an Energy Star rating of 75.

Table 4 Obtainable annual energy savings calculation

| Measure | From | To | Savings (MJ) | Savings ($/year) |
| --- | --- | --- | --- | --- |
| Orientation | south-east to north-west | south–north | 1 373 109 | 16 877 |
| Windows ratio | 40% | 30% | 3 783 711 | 46 507 |
| Lighting | central on/off | daylight switch | 10 989 920 | 135 082 |
| Total | | | 16 020 313 | 196 913 |


In the current building, only a drastic redesign of the lighting controls (including task lighting) could possibly accomplish this level. All other PIs in the toolkit were benchmarked in the same exercise, with similarly positive results. Detailed accounts of these benchmarks will be provided elsewhere.

Conclusions

The performance toolkit offers an instrument for precise and quantified dialogues between stakeholders in the building industry. For large corporate owners, it enables continuous commissioning strategies and tactical services management over a building's service life. The toolkit offers a set of normative and therefore objective PIs, data-gathering procedures, and software tools for rapid deployment and integration in the owner's asset-management processes. The energy PIs, as part of a complete building performance monitoring toolkit, were benchmarked on a large federal building, whose energy performance was audited. Future work includes (1) benchmarking of the energy toolkit on a greater variety of facilities located in different climates and situations and (2) further refinement of the toolkit for energy diagnosis and feasibility studies. It is foreseen that the toolkit will be expanded continuously to cover more performance aspects.

Acknowledgements

Research was conducted in the framework of the General Services Administration (GSA) Building Performance Assessment Toolkit Development Project and was financed by the GSA. For the use of NEN 2916, the authors acknowledge the help of Eric Van den Ham, Climatic Design Consult, the Netherlands.

References

ASHRAE (1992) Thermal Environmental Conditions for Human Occupancy. ANSI/ASHRAE Standard 55-1992, ASHRAE, Atlanta, GA.
ASHRAE (2001) ASHRAE Fundamentals, ASHRAE, Atlanta, GA.
ASTM (2000) ASTM Standards on Whole Building Functionality and Serviceability, ASTM, West Conshohocken, PA.
Augenbroe, G.L.M. and Park, C.S. (2002) Towards a Maintenance Performance Toolkit for GSA. Interim Report submitted to the GSA (available at: http://www.publicarchitecture.gatech.edu/Research/project/gsatoolkit.htm) (accessed April 2004).
Barber, F. and Hilberg, G. (1995) Comprehensive maintenance program ensures reliable operation. Power Engineering, December, 27–31.
Beard, J., Wundram, E. and Loulakis, M. (2001) Design-Build: Planning Through Development, McGraw-Hill, New York.
Chan, K.T., Lee, R.H.K. and Burnett, J. (2001) Maintenance performance: a case study of hospitality engineering systems. Facilities, 19(13/14), 494–504.
Chauvel, P., Collins, J.B., Dogniaux, R. and Longmore, J. (1982) Glare from windows: current views of the problem. Journal of the Illuminating Engineering Society, 14(1), 31–46.
CIBSE (1997) Code of Interior Lighting, Chartered Institute of Building Service Engineers, London.
CIE (1986) Guide on Interior Electric Lighting. Publication No. 29/2 (TC 4.1), Commission Internationale de l'Eclairage, Vienna.
Clarke, J.A. (1999) Prospects for truly integrated building performance simulation, in Proceedings of the 6th IBPSA (International Building Performance Simulation Association) Conference, Kyoto, Japan.
Crawley, D.B., Pedersen, C.O., Liesen, R.J., Fisher, D.E., Strand, R.K., Taylor, R.D., Lawrie, L.K., Winkelmann, F.C., Buhl, W.F., Erdem, A.E. and Huang, Y.J. (1999) EnergyPlus, a new-generation building energy simulation program, in Proceedings of the 6th IBPSA Conference, Kyoto, Japan.
Fanger, P.O. (1970) Thermal Comfort, Danish Technical Press, Copenhagen.
Fanger, P.O. and Christensen, N.K. (1986) Perception of draught in ventilated spaces. Ergonomics, 29(2), 215–235.
Foliente, G. (2000) Developments in performance-based building codes and standards. Forest Products Journal, 50(7/8), 12–21.
Foliente, G., Leicester, R. and Pham, L. (1998) Development of the CIB Proactive Program on Performance Based Building Codes and Standards. BCE Doc. 98/232, International Council for Research and Innovation in Building and Construction (CIB), Rotterdam.
Friedman, G. (2004) Too hot, too cold: diagnosing occupant complaints. ASHRAE Journal, 46(1), 157–163.
Heiselberg, P. (1994) Draught risk from cold vertical surfaces. Building and Environment, 29(3), 297–301.
IESNA (2001) Lighting Handbook, Illuminating Engineering Society of North America, New York, NY.
Jones, B.W., Hsieh, K. and Hashinaga, M. (1986) The effect of air velocity on thermal comfort at moderate activity levels. ASHRAE Transactions, 92(2), 761–769.
Koskela, L. (2000) An exploration towards a production theory and its application to construction. Dissertation, VTT Building Technology, Publ. 408, Espoo.
National Electrical Manufacturers Association (2001) Procedure for Determining Luminaire Efficacy Ratings for Fluorescent Luminaires. Standards Publ. LE5-2001, NEMA, Rosslyn, VA.
Ne'eman, E., Sweitzer, G. and Vine, E. (1984) Office worker response to lighting and daylighting issues in workspace environments: a pilot survey. Energy and Buildings, 6, 159–173.
NEN (1999) NEN 2916: Energy Performance of Non-residential Buildings - Determination Method, Dutch Normalization Institute (NEN), Delft (available at: http://www.enper.org/index.htm?/pub/codes/codes.htm) (accessed March 2004).
Oak Ridge National Laboratory (2002) Sam Nunn Atlanta Federal Center Energy and Load Savings Opportunities Survey, Federal Energy Management Program, Oak Ridge, TN.
Sahlin, P. (1996) Modelling and simulation methods for modular continuous systems in buildings. PhD dissertation, Bulletin No. 39, Building Services Engineering, KTH, Stockholm.
Shohet, I.M., Lavy-Leibovich, S. and Bar-On, D. (2003) Integrated maintenance monitoring of hospital buildings. Construction Management and Economics, 21, 219–228.
Szigeti, F. and Davis, G. (2001) Matching people and their facilities: using the ASTM/ANSI standards on whole building functionality and serviceability, in Proceedings of the CIB World Building Congress, April 2001, Wellington, New Zealand.
US Green Building Council (2001) LEED Rating System, Version 2.0, Washington, DC.
Wineman, J.D. (1982) Office design and evaluation: an overview. Environment and Behavior, 14(5), 271–298.
