Lighting Res. Technol. 2016; 0: 1–18
Ubiquitous luminance sensing using the Raspberry Pi and Camera Module system

AR Mead MS and KM Mosalam PhD
Department of Civil and Environmental Engineering, University of California Berkeley, Berkeley, CA, USA

Received 9 February 2016; Revised 11 April 2016; Accepted 21 April 2016

In this paper, the authors have calibrated a Raspberry Pi and Camera Module (RPiCM) for use as an absolute luminance sensor. The spectral response of the RPiCM chip, as well as a linear mapping to the standard CIE-XYZ colour space, have been measured, calculated and presented. The luminance values are anchored to absolute luminance measurements. Further, by using high dynamic range imaging techniques making use of different shutter speeds in a sequence of images, the measurement of luminance values from approximately 10 to 50,000 cd/m2 is possible. Lens correction for vignetting is also addressed, while pixel point spreading is ignored. This measurement goes beyond a single point measurement, economically and accurately allowing each of the arrays within the RPiCM chip to act as an individual luminance meter over the entire field of view of the camera system. Applications and limitations of the embedded camera system are discussed. As a motivational application, an EnergyPlus model of a simple one-room, one-window space is constructed and simulated for a year using weather files from around the world. These simulations highlight the need for spatial luminance-based sensing within the built environment to counteract the experience of discomfort glare by building occupants.
Nomenclature
B              possible pixel values: B = [0, 1023] – 10-bit sensor
C              vignetting correction factor space: [0.0, 1.0] ⊂ ℝ
F(x, y)        vignetting correction function
HDRI           a set of four ml, each with a different SS and the corresponding yScale
ml             measurement of luminance object
M              linear map from the RGB_device colour space to the XYZ colour space
ℕ              natural numbers (including 0)
R, B, G1, G2   RAW pixel values from a single Bayer pattern array
ℝ              real numbers
SS             shutter speed in microseconds
X              space from which horizontal pixel coordinates originate: X = [0, 2592]
XYZ            CIE-XYZ colour space
yScale         linear scaling constant from bits to luminance; depends on the SS of the ml
Y              space from which vertical pixel coordinates originate: Y = [0, 1944]
Address for correspondence: AR Mead, Department of Civil and Environmental Engineering, University of California Berkeley, 760 Davis Hall, Berkeley, CA 94720, USA. E-mail: [email protected]

© The Chartered Institution of Building Services Engineers 2016. DOI: 10.1177/1477153516649229

1. Introduction
Architects and engineers are increasingly placing more emphasis on the use of daylight
within the built environment. Motivated by the 'green movement' and its increased attention to both energy use reduction and indoor environmental quality, natural lighting from the sun comprises one element of a holistic design which fulfils these requirements. Daylight increases workers' productivity and health,1,2 while at the same time directly reducing electricity use by offsetting electric lighting and producing second-order reductions in heating, ventilating and air-conditioning energy by lowering the heat loads associated with electric lighting. However, daylighting within the built environment is traditionally a challenging design problem. This stems from the fact that the sun position, and hence sunlight, is continuously changing from second to second throughout any given day, and from day to day throughout the year.3 Thus, the lighting designer must create a system capable of transporting the optimal amount of sunlight into the space for any given sun position throughout the year: not so much sun at solar noon that the space is over-illuminated, yet as much as possible at other times of the day when only limited sunlight is available and it most probably needs to be supplemented by electric lighting. It should be pointed out that electric lighting will almost always be needed in a building, because regardless of the location of a building on Earth, at some point during the usable time of the building no sunlight at all will be available (e.g. night time).

The presence of both sunlight and electric luminaires within a space presents a challenging control problem: how to achieve the optimal lighting conditions by controlling daylighting (e.g. adjustable facades and blinds) and electric lighting (e.g. luminaire control settings)? Controllers designed for problems of this nature can follow different approaches. A simple control algorithm may be based on a schedule: the window blinds close every day from 10:00 to 14:00 hours. An example
implementing this strategy is the Campus for Research Excellence and Technological Enterprise Tower on the National University of Singapore's UTown Campus. While simple open-loop controllers like this, based on tabulated control laws, are better than nothing, the best controllers are closed-loop in nature and use sensor readings to inform their next control output. Thus, any good shading/lighting controller needs to accurately measure the lighting environment to inform its next control action. Typically, two measures of light are considered within the built environment: (1) illuminance (Ev) – the luminous flux density incident on a surface, measured in lumens per square metre (lux), and (2) luminance (Lv) – the luminous flux per unit area of a source or surface per unit solid angle, measured in candelas per square metre (cd/m2). High quality, dependable sensors for illuminance typically range from about $500 to $900,4,5 with alternatives lacking good cosine correction and spectral selectivity available for a much cheaper price, of the order of $10. To measure luminance, dedicated luminance meter 'guns' can be used, costing about $2500,6 or high-end digital single lens reflex (DSLR) cameras, ranging from about $1000 to very high figures depending on features such as lens quality. The high cost per sensor means that typical lighting control sensing in a building is based on cheaper illuminance sensors.7 Recent years, however, have witnessed a boom in the smartphone industry, driving down the cost of complementary metal-oxide semiconductor (CMOS) chips to the order of $10 while increasing their quality. CMOS chips are the sensing element in digital cameras and are based on the same technology as the luminance meter 'guns' and DSLR cameras. As such, cheap CMOS chips, along with an affixed lens and dedicated embedded computing, can now be used in the built environment to quickly and
accurately measure the luminance environment. These spatially detailed luminance measurements can then be used for daylighting control, as discussed above, and further to investigate as-built spaces for retrofit, as explained in detail herein. This paper presents a luminance sensor based on high dynamic range imaging (HDRI) techniques using the ubiquitous Raspberry Pi 2 Model B (RPi) and Camera Module (CM) platform.

2. Background

2.1. Luminance sensing

Luminance is analogous (as a weighted integral and linearly related) to radiance, and is the derivative of luminous flux with respect to both cosine-weighted source area and emission/reflection direction.8 Informally, and somewhat inaccurately, it is often referred to as the 'brightness' of a source or surface. Its relevance in the built environment comes in the form of visible light entering a space from either a facade system (i.e. daylighting), an electric light source (e.g. incandescent, fluorescent, LED), or light from these sources being reflected off surfaces within the space. Appropriate luminance levels are important, both in outright absolute magnitude and in spatial relative magnitude, for visual comfort to avoid under-lighting, over-lighting and glare. Given the importance of luminance within the built environment, measuring its magnitude is an often executed activity. Two techniques used in the industry to this end are briefly covered in the following subsections.

2.1.1. Luminance meter
Traditionally, by far the most common method for luminance measurement has been to use a device called a luminance meter.9 This is a gun-type instrument with a field of view typically around 1°. Typical examples include the Konica Minolta LS-100/LS-110,10 and the Gossen MAVO-SPOT 2 USB.6 For more on luminance meters and the factors affecting their accuracy, refer to the IES Lighting Handbook, Section 9.6.11

2.1.2. Extended and Photosphere HDRI
More recently, DSLR cameras have also been used to measure luminance within the built environment.12 The process generally involves first taking an HDRI of a space and then downloading the image data to a general-purpose computer.13 Next, the camera response curve for each colour channel is generated using radiometric self-calibration,14 for which a popular software package is Photosphere;15 this relates the pixel values to real-world luminances. This transforms the HDRI into a spatial luminance measurement of the field of view of the DSLR camera. To obtain absolute measurements, a luminance meter of the above-mentioned type must be used to anchor the luminance values of the HDRI measurements to SI units.

2.2. Raspberry Pi 2 and Camera Module

A Raspberry Pi 2, Model B (RPi) is a credit card-sized computer with the ability to run a full Linux or Windows 10 operating system. It has four USB ports to connect a mouse, keyboard and two other devices, an HDMI port for a monitor, a 1/8'' jack for audio, and an Ethernet port for a network connection. It is truly a full general purpose computer in a small form factor.16 A common extension to the RPi platform is the Raspberry Pi Camera Module (RPiCM). It is a small (25 × 20 × 9 mm³), light weight (3 g), five-megapixel camera17 which can be directly interfaced with the RPi through a built-in Camera Serial Interface (CSI-2) jack. The RPiCM is a CMOS chip-based camera (OmniVision OV5647) with a Bayer pattern of green–blue–red–green (GBRG),18 from which a user can access the RAW pixel values using object-oriented programming techniques through the Python module Picamera.19
2.3. Glare: Definition, quantification and prediction

Glare occurs when the luminance distribution within the built environment causes visual discomfort or disability. It comes in several forms: (1) high absolute luminance magnitude – simply too much luminance in the space, resulting in squinting, blinking, or averting the eyes (saturation glare), and (2) large luminance gradients – both high and low luminance sources near each other, causing visual discomfort (discomfort glare) and reduced visual performance (disability glare). Discomfort glare within the built environment is by far the most common issue for the interior lighting designer; saturation glare and disability glare are rarely an issue. Thus, discomfort glare is focused on here. While the exact cause of discomfort glare is unknown, four main factors are known to play a part in its perception by building occupants: (1) luminance of the glare source, (2) size of the glare source, (3) position of the source in the field of view and (4) luminance of the background.11 These four factors have subsequently been used to develop several empirical systems for glare quantification and prediction. For example, visual comfort probability (VCP)20 – primarily in North America – and the Commission Internationale de l'Eclairage (CIE) Unified Glare Rating (UGR)21 – the world standard – are well validated for electric luminaires. Further, the discomfort glare index (DGI) was designed for glare from windows, and is also based on the four factors mentioned above. In general, VCP, UGR and DGI provide good predictions for groups of people, but not on the individual level,22 making them good candidates for glare-based controllers in environments such as open plan offices. It is noted that new glare indication systems continue to be developed.23,24 With the HDRI camera method, all known factors in glare quantification are available, making the HDRI luminance sensor ideal for use in a daylight-based
controller where glare within the space may be an issue. For further discussion on glare in buildings and the calculation of glare prediction metrics, refer to the IES Lighting Handbook, Sections 4.10 and 10.9.2, respectively.11 As glare causes discomfort to the occupant, it is prudent to analyse and reduce its impact within a space. Knowing that glare affects people at different times of the day in different ways,25 ideally glare should be measured using sensors within the occupied space so that abatement methods can be implemented based on the real conditions currently being experienced by the occupants.

3. Application example

As a demonstration of the RPiCM system applied to a real building, a single-room E+ computer model26 with a single south-facing window is considered, as shown in Figure 1. To mimic the impact of having a luminance-based controller calculating a glare metric from a certain position within the space, a Daylighting:Controls object is assigned to the zone and used to calculate the built-in glare metric, DGI. The reference point for the glare calculation is the centre of the room, 0.9 m above the floor, oriented directly towards the south-facing window. Using this location, the DGI is calculated at every time step throughout a standard annual simulation for three locations and climate types: (1) Chicago, Illinois, USA, (2) Abu Dhabi, UAE and (3) Singapore. Weather files from the U.S. Department of Energy in the standard E+ (*.epw) format (TMY3 data set) are used to define the conditions of the respective simulations. The results are shown in Table 1. The calculation details of the DGI by E+ can be found in the E+ documentation (Engineering Reference, Time-Step Daylighting Calculation section).
Figure 1. Schematic representation of the E+ example model (dimensions in m)
Table 1. E+ discomfort glare index (DGI) exceedance (in hours) for an annual simulation

DGI     Chicago, IL, USA    Abu Dhabi, UAE    Singapore
20.0    969                 1156              133
22.0    372                 449               0
The analysis in this paper demonstrates that the DGI, which is computed here in simulation, can be determined solely by the RPiCM in a real building; thus, it would become a control sensor reading if the RPiCM were deployed in the space. The DGI as calculated by E+ can therefore be used as a proxy for what the RPiCM sensor could calculate when installed in a building and then acted upon by a luminance-based controller. For example: when the DGI exceeds a certain threshold, deploy closure of the blinds.
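To make this proposed use concrete, the following is a minimal sketch of such a threshold rule in Python. The read_dgi and set_blinds callables are hypothetical placeholders for the HDRI-based glare calculation (Section 4) and for whatever facade actuator interface a given installation provides, and the default threshold of 22 simply echoes the exceedance level reported in Table 1.

```python
import time

def glare_control_loop(read_dgi, set_blinds, threshold=22.0, period_s=300):
    """Minimal threshold controller: close the blinds whenever the glare metric
    computed from the RPiCM luminance measurement exceeds the threshold.

    read_dgi   -- callable returning the current DGI derived from an HDRI
                  luminance measurement (hypothetical; see Section 4)
    set_blinds -- callable accepting 'closed' or 'open' (hypothetical actuator)
    """
    while True:
        if read_dgi() > threshold:
            set_blinds('closed')
        else:
            set_blinds('open')
        time.sleep(period_s)   # re-evaluate the scene every period_s seconds
```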
4. RPi and CM: Luminance calibration and validation

4.1. Raspberry Pi Camera Module luminance calibration

The camera system (lens, sensing chip, processing circuitry) used in this study is the RPiCM based on the OmniVision OV5647 CMOS sensing chip. It has a fixed focus lens, F-stop 2.9, and an adjustable shutter speed.17 The OV5647 is composed of 1944 × 2592 = 5,038,848 base pixels and is overlaid with a 2 × 2 GBRG Bayer pattern array (BPA).18 Thus, there are 972 × 1296 = 1,259,712 BPAs, each producing a unique red, a unique blue and two unique green 10-bit pixel values (Figure 2). Using the Python module Picamera,19 the unprocessed 10-bit values (i.e. without gamma correction, white balance, edge sharpening, or processing of any kind; known as a 'RAW' image file in the photography community) are extracted for each BPA when a photograph is taken. This results in 1,259,712 red, green and blue (the two green pixels are averaged to one value) measurements in a single photograph, each corresponding to a different position in the field of view of the camera system. In typical digital camera usage, complex nonlinear and often proprietary interpolation algorithms, known as demosaicing algorithms, would be used to create 5,038,848 red, green and blue values, each corresponding to the location of a base pixel on the CMOS chip.
Figure 2. Bayer pattern array on the CMOS chip: (a) full CMOS chip pixel array (2592 × 1944 pixels), (b) individual Bayer pattern array (BPA)
This post-processing, while favourable for viewing a photograph, is counterproductive in this analysis as it transforms the highly linear red, green and blue CMOS chip readings into visually appealing, but photometrically meaningless, measurements. Thus, in this investigation, it is assumed that each BPA is actually its own 'pixel' with a red, two green and a blue sensing component being subjected to the same light excitation. This assumption is favourable because, in this way, the red, average-of-the-two-greens and blue values processed for each BPA are the direct output of an extremely linear sensor with respect to luminance excitation, not a post-processing interpolation based on visual goals. Given the small BPA to base pixel size ratio, this assumption is valid for the present use goals, and it reduces the sensing resolution by a factor of four, as each BPA contains four base pixels. Expressing the above with set theory notation, a measurement of luminance (ml) is:

ml = (R, G, B)_{972×1296},   (1)

and (R, G, B) is some BPA in the ml. Further, R, B, G1, G2 ∈ B, with B = [0, 1023] ⊂ ℕ (B represents the 10-bit pixel value, 2^10 = 1024), and

G = round((G1 + G2)/2).   (2)
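As a concrete illustration of this per-BPA reduction, the following sketch captures one RAW frame and collapses every 2 × 2 BPA into a single (R, G, B) triple. It assumes the Python module Picamera (version 1.10 was used in this work) with its PiBayerArray helper and NumPy; the nominal output shape of 972 × 1296 × 3 follows from the sensor dimensions stated above.

```python
import numpy as np
import picamera
import picamera.array

# Capture one RAW frame and reduce each 2 x 2 BPA to a single (R, G, B) triple
# with G = round((G1 + G2)/2), as in equations (1) and (2).
with picamera.PiCamera() as camera:
    camera.resolution = (2592, 1944)
    camera.framerate = 2                 # a low frame rate permits long exposures
    camera.shutter_speed = 10000         # exposure in microseconds (SS = 10^4)
    with picamera.array.PiBayerArray(camera) as raw:
        camera.capture(raw, 'jpeg', bayer=True)
        # raw.array is (rows, cols, 3); each plane holds the 10-bit values at the
        # positions sampled by that colour filter and zeros elsewhere.
        bayer = raw.array.astype(np.float64)

rows, cols, _ = bayer.shape
# Summing each plane over 2 x 2 blocks picks up the single R or B sample of a
# BPA and the sum of its two green samples (G1 + G2).
blocks = bayer.reshape(rows // 2, 2, cols // 2, 2, 3).sum(axis=(1, 3))
R = blocks[..., 0]
G = np.round(blocks[..., 1] / 2.0)       # equation (2)
B = blocks[..., 2]
ml = np.stack([R, G, B], axis=-1)        # nominally 972 x 1296 x 3 (equation (1))
```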
In this notation, the natural numbers ℕ include zero; each ml thus comprises 972 × 1296 = 1,259,712 measurements of the red, green and blue luminance striking the CMOS chip in the RPiCM system. However, each (R, G, B) value is device-specific, as the values are defined by the respective R, G and B filters of the BPA applied to the base CMOS chip. While the filters are designed to mimic the CIE colour matching functions as closely as possible, variations will still exist between the RGB values produced by two similar CMOS chips with BPAs, even if they are capturing the same field of view. This device-specific colour space is referred to here as

RGB_device = B^3,   (3)
and varies from device to device, even within brand and model. If the ml is to be used in an absolute sense with respect to SI units (cd/m2), or displayed correctly on reproduction devices such as a computer/smartphone screen or the printed page, the device-specific RGB_device BPA values need to be transformed into a standard colour space, such as CIE-XYZ (referred to here as XYZ = ℝ+^3). A clear explanation of device-specific colour spaces (e.g. digital cameras, CRT/LCD monitors, printers, scanners) and their relationship to standardised colour spaces is presented by Vrhel and Trussell.27 In this work, the RPiCM-produced ml is used for absolute luminance measurements, thus the mapping between RGB_device and XYZ must be found. This mapping is assumed to be a linear transformation, M_{3×3}, as used by Martínez-Verdú et al.,28 of the form: for every BPA ∈ RGB_device,   (4)

the equivalent XYZ representation can be expressed as

XYZ_BPA = M BPA,   (5)
which is a standard linear transformation. The construction of this mapping, M, is known as spectral characterisation, and a least squares regression methodology is used.27 For training data, the RPiCM was subjected to a monochrome beam of light ranging from 380 to 700 nm in increments of 10 nm. This was accomplished using a Bentham/IVT PVE300 spectral response system29 shining directly onto the CMOS chip, as opposed to the target method discussed in the document 'Graphic Technology and Photography'.30 This system is typically used for solar cell characterisation and was ready for use, which saved the effort of setting up an optical bench with the needed equipment while achieving the same monochrome light; thus, the authors believe this is a valid alternative. Due to the low energy level of the monochrome light source, a shutter speed of 450 ms was used, as this is the integration time of the built-in silicon photodiode detector used by the system. A correct shutter speed is crucial, as too short an exposure will result in no meaningful signal (i.e. noise), and too long an exposure will saturate the 10-bit channels. It should be noted that the shutter speed of the RPiCM is limited by the 'frame rate' parameter, and thus the frame rate must be adjusted downward from the default (i.e. from 30 fps to 2 fps) to use a 450 ms exposure. The 450 ms exposure was checked against the monochrome wavelength with the highest energy level to ensure it did not saturate the 10-bit sensor output; if the highest-energy wavelength does not saturate the sensor, then none of the monochrome wavelengths will do so. The normalised results of the RPiCM response are presented along with the CIE colour matching functions in Figure 3(a) and (b), respectively.
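A sketch of how such a training capture could be scripted is given below, assuming the Picamera interface. The 2 fps frame rate and 450 ms (450,000 µs) shutter speed follow the settings described above; capture_bpa_means() is a hypothetical helper standing in for a capture-and-reduce step like the one sketched earlier, and locking the exposure mode is an added assumption to keep the camera gains fixed between wavelengths.

```python
import picamera

wavelengths_nm = range(380, 710, 10)       # 33 monochrome steps, 380-700 nm
responses = {}

with picamera.PiCamera() as camera:
    camera.framerate = 2                   # default 30 fps would cap exposure at ~33 ms
    camera.shutter_speed = 450000          # 450 ms, specified in microseconds
    camera.exposure_mode = 'off'           # assumption: freeze gains between captures
    for wl in wavelengths_nm:
        input('Set the monochromator to %d nm, then press Enter' % wl)
        # hypothetical helper: capture a RAW frame and return the mean (R, G, B)
        # over the illuminated BPAs for this wavelength
        responses[wl] = capture_bpa_means(camera)
```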
Notice the reasonable qualitative agreement between the two sets of curves. This overall agreement is expected, because the RPiCM produces photographs similar to how the scene appears to the human eye, which is the purpose of the CIE colour matching functions. Therefore, the response is deemed acceptable and used as is for the luminance measurements in this study, i.e. no additional filters were placed on the CMOS chip. This is favourable because it eliminates filter addition at the semiconductor manufacturing level, which requires greater effort and leads to higher costs. The above analysis was completed for a single set of four pixels comprising a unique BPA. Due to the high consistency of the silicon semiconductor industry, it is assumed that each BPA responds spectrally in an identical manner; thus, the above results are used for all the pixels in the camera image. Note that this response is independent of the amount of light striking the sensor; rather, it is the result of the spectral content of the light. During the analysis, all responses were normalised to the spectral energy of the monochrome beam of light used. For the specific RPiCM used in this work, RGB_λ is a 3 × 33 matrix, with each column containing the RAW red, green and blue values produced by the CMOS chip under investigation when excited by monochrome light of wavelength λ, from 380 to 700 nm at 10 nm intervals. Further, XYZ_λ is a 3 × 33 matrix, with each column containing the X, Y and Z values as defined by the CIE colour matching functions at the same wavelengths. These matrices are related through the assumed form of M, and this relation can be expressed formally as

XYZ_λ = M RGB_λ,   (6)
Figure 3. Relative response at wavelengths 380–700 nm at 10 nm intervals interpolated with cubic splines: (a) RPiCM CMOS response, (b) CIE-XYZ colour matching functions
With RGB_λ and XYZ_λ as training data, and using a linear least squares regression technique (in the following, superscripts T and -1 denote transpose and inverse, respectively), M can be expressed as

M = XYZ_λ RGB_λ^T (RGB_λ RGB_λ^T)^{-1},   (7)

in terms of the experimentally determined RGB_λ of this study. The linear mapping M allows the transformation of individual BPA measurements of an arbitrary field of view from the device-specific colour space, RGB_device, to the standardised colour space XYZ, expressed in a wavelength-discretised form as XYZ_λ.
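A minimal NumPy sketch of this regression is shown below; the random arrays are placeholders for the measured RGB_λ and tabulated XYZ_λ data, and the final line illustrates applying M to every BPA of an ml as in equation (5).

```python
import numpy as np

rng = np.random.default_rng(0)
RGB_lam = rng.random((3, 33))   # placeholder for the measured 3 x 33 camera responses
XYZ_lam = rng.random((3, 33))   # placeholder for the 3 x 33 CIE colour matching values

# Least squares solution of XYZ_lam = M RGB_lam (equations (6) and (7)):
M = XYZ_lam @ RGB_lam.T @ np.linalg.inv(RGB_lam @ RGB_lam.T)

# Applying M to every BPA of a measurement of luminance (equation (5)):
ml_rgb = rng.random((972, 1296, 3))   # placeholder ml in the device colour space
ml_xyz = ml_rgb @ M.T                 # per-BPA (X, Y, Z) = M (R, G, B)
```

An equivalent and numerically safer route is np.linalg.lstsq on the transposed system, but the explicit normal-equation form above mirrors equation (7).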
However, one last scaling must take place, as the 10-bit values are somewhat arbitrary with respect to the amount of light actually striking the sensor, even though with the above analysis the spectral quality of the light is known. For this final conversion to SI units of luminance (cd/m2), the RPiCM was subjected to a source of uniform luminance of a known value, which was then used to anchor the 10-bit values with respect to absolute units. The uniform luminance was attained using a CIE illuminant A source shining into an integrating sphere; a different port of that same integrating sphere was then opened and the RPiCM excited with the uniform luminance emitted from that port. The procedure was completed using a calibrated integrating sphere of the type typically used in the calibration of traditional luminance meters.9 With a shutter speed of 10 ms, the RPiCM was exposed to luminance values of 0, 20, 40, 60, 80, 120, 150 and 180 cd/m2, and the CMOS BPA pixel values were stored. It should be pointed out that the 'Y' value in CIE-XYZ is the luminance value of interest, while 'X' and 'Z' provide colour. The BPA pixel values were transformed to the XYZ space using the linear mapping M as described above, then scaled again to match
the 'Y' values to the respective excitation. The results are shown in Figure 4. This additional linear transformation step is permissible for the linear CMOS chip sensors. Note that the intercept of the function is not zero because of a phenomenon known as 'dark signal', whereby the CMOS sensor still produces a signal, in this case 16, when no light is striking it. Dark signal can be measured by exposing the sensor to zero light and recording the signal, as done here. Updating the formal representation of an ml to include the transformation to the standardised CIE-XYZ colour space gives:

ml = (yScale, M, (R, G, B)_{972×1296}),   (8)

where yScale ∈ ℝ is used to anchor the 10-bit values to absolute SI units and M is used to transform the BPAs from the RGB_device (RGB_λ) to the XYZ (XYZ_λ) colour space. From the results in Figure 4, it is observed that the 10-bit CMOS chip saturates at only 180 cd/m2. With luminance values in the built environment reaching the order of tens of thousands of cd/m2, this severely limits the RPiCM's usefulness for measurement. However, the shutter speed can be adjusted on the
RPiCM, thus limiting the amount of light reaching the actual CMOS sensor. By lowering the shutter speed by an order of magnitude from 10 to 1 ms, and knowing that the light excitation is a linear phenomenon, the amount of light striking the CMOS sensor is also reduced by an order of magnitude. This reduction means that the same light excitation which just saturates the 10-bit sensor at a shutter speed of 10 ms remains within the 10-bit range at a shutter speed of 1 ms, providing meaningful luminance values. Adjusting the shutter speed, and thus extending the dynamic range of the RPiCM, the above luminance exposure method was repeated for shutter speeds of 1, 0.1 and 0.01 ms, with respective luminance values chosen to exercise the dynamic range of the 10 bits in each shutter-speed mode of operation. The data from this exercise are given in Figure 5. It is noted that, using the Python module Picamera interface, the shutter speed settings were actually 9985, 991, 74 and 15 µs for the four modes of operation; the behaviour is as expected for order-of-magnitude reductions, thus, for classification purposes, these modes are referred to by their base-ten labels. Again, extending the ml definition:

ml = (SS, yScale, M, (R, G, B)_{972×1296}),   (9)

where SS ∈ {10^4, 10^3, 10^2, 10^1} represents the shutter speed, in microseconds, of the luminance measurement.
Figure 4. Anchoring the 10-bit values to absolute SI units of luminance (cd/m2) for a shutter speed of 10 ms
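In code, the anchoring amounts to one linear map per shutter-speed mode. The sketch below assumes the fitted form luminance = yScale × bits + intercept shown in Figure 4; the yScale entry for the 10 ms mode is read from the annotation in Figure 4, while the remaining entries must be filled in from the corresponding fits (Figure 5) for the specific unit being calibrated.

```python
import numpy as np

DARK_INTERCEPT = -16.0          # common intercept of the fitted lines (dark signal)
Y_SCALE = {
    10**4: 5.85,                # cd/m2 per bit at SS = 10 ms, as annotated in Figure 4
    # 10**3, 10**2, 10**1: fill in from the corresponding fits in Figure 5
}

def bits_to_luminance(y_bits, shutter_speed_us):
    """Convert XYZ-converted 'Y' values (10-bit counts) to cd/m2, per equations (8)-(9)."""
    return Y_SCALE[shutter_speed_us] * np.asarray(y_bits, dtype=float) + DARK_INTERCEPT
```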
4.2. High-dynamic range imaging

Examining the shutter-speed modes of operation and the respective luminance values they are capable of measuring, it is clear that no single shutter-speed mode can capture all the possible luminance values that could be present in a built environment field of view. However, if four consecutive luminance measurements were taken of the same field of view, each
luminance measurement being taken with a different shutter speed (i.e. 10^4, 10^3, 10^2, 10^1 µs), a much larger range of luminance values could be captured. This methodology is used to extend the dynamic range of the CMOS sensor and is referred to as high-dynamic-range image (HDRI) photography. HDRI works by examining each pixel of the luminance measurement captured with a shutter speed of 10^4 µs for saturation. If saturation occurs, the same pixel within the shutter speed 10^3 µs measurement is examined, and so on to the lower shutter-speed modes until a value registers in the applicable range of the CMOS sensor at the respective shutter speed. This process is repeated for each pixel within the luminance measurement, resulting in a single luminance measurement where each pixel is taken not from any single shutter-speed-defined ml, but rather from the shutter-speed-defined ml that is best applicable for the luminance within the respective area of the field of view.

Figure 5. Anchoring the 10-bit values to absolute SI units of luminance (cd/m2) for different shutter speeds (SS)
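A sketch of this per-pixel selection is given below; it assumes the four mls have already been converted to XYZ, aligned, and reduced to their 'Y' channels, and it reuses the bits_to_luminance() helper from the previous sketch.

```python
import numpy as np

SATURATED = 1023                                   # full-scale 10-bit reading
SHUTTER_SPEEDS_US = (10**4, 10**3, 10**2, 10**1)   # longest exposure first

def merge_hdri(y_bits_by_ss):
    """Per-BPA HDRI merge: for each BPA keep the longest exposure that did not
    saturate and convert it with that exposure's calibration.

    y_bits_by_ss maps shutter speed (us) to a 972 x 1296 array of XYZ-converted
    'Y' values in bits, assumed to be already aligned across exposures."""
    shape = np.asarray(y_bits_by_ss[SHUTTER_SPEEDS_US[0]]).shape
    luminance = np.full(shape, np.nan)             # NaN where every exposure saturated
    unresolved = np.ones(shape, dtype=bool)
    for ss in SHUTTER_SPEEDS_US:
        bits = np.asarray(y_bits_by_ss[ss], dtype=float)
        usable = unresolved & (bits < SATURATED)
        luminance[usable] = bits_to_luminance(bits[usable], ss)
        unresolved &= ~usable
    return luminance
```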
Figure 6 shows the process for a field of view within the built environment being captured by multiple shutter-speed luminance measurements, which are used to create an HDRI measurement of luminance. Details of HDRI, such as field-of-view alignment between luminance measurements, add complications to this procedure, but are surmountable, as performed here and discussed further by Madden.13 With HDRI, a new formal object can be defined with respect to a field of view, consisting of four ml, each with a different SS and yScale, denoted:

HDRI = {ml_{SS=10^4}, ml_{SS=10^3}, ml_{SS=10^2}, ml_{SS=10^1}}   (10)
It should be noted that M is a property of the CMOS chip, independent of the excitation light source spectrum and thus will remain the same for all shutter speeds.
Figure 6. Field of view within the built environment and constructing an HDRI luminance measurement by forming the HDRI from individual exposures
4.3. Vignetting correction
The final adjustment to the RPiCM measurements that must be performed is a lens correction used to account for a phenomenon known as vignetting. This is the attenuation of the light coming through the lens near the periphery of the image with respect to the centre. Therefore, for a uniform excitation across a field of view, the lens will cause the pixels near the edges to be subjected to less light than those in the centre, even though in reality they are experiencing the same excitation. In order to account for the vignetting effect, the lens vignetting characteristics must be quantified, then the signal for each pixel in an ml field of view must be corrected with respect to this phenomenon. With proper lens characteristics known, techniques from Fourier optics can be used to solve a closed form lens correction equation.31 However, the
needed lens parameters were not immediately available for such an analysis, thus the authors used another technique, described below. To quantify the vignetting effect of the RPiCM system lens, a uniform luminance source was used to excite the RPiCM across its field of view. Knowing that the luminance source is of equal magnitude, any difference in the RAW pixel values returned by the RPiCM can be attributed to the vignetting effect of the lens. Each BPA would need to be exposed to this uniform luminance source at the minimum in-focus distance for the RPiCM, i.e. 1.0 m. With a horizontal and vertical field of view of 53.5° and 41.2°, respectively, this corresponds to a rectangular area of 1.0 × 0.75 m at a distance of 1.0 m from the lens. The uniform luminance source available to the authors was circular in shape with a 40 mm diameter. Assuming the sources can be aligned with minimal overlap, that is, the edges of the largest square inscribed in the circle (a square with a side of 28.3 mm in this case) are collinear, 4032 circular sources would be needed to cover the 1.0 × 0.75 m field of view. These exposures could be done individually and the luminance measurements stacked together similarly to the HDRI technique; however, 4032 measurements are far too many. As an alternative, bisymmetry of the lens system was assumed across both the vertical and horizontal axes, and a second-order polynomial in both independent variables was assumed to capture the vignetting effect. Knowing the task is to estimate the coefficients of an equation, rather than exposing the entire field of view, the number of luminance excitations was limited to nine exposures in the upper left quadrant of the RPiCM system; exploiting bisymmetry resulted in 36 luminance excitations covering the full field of view of the RPiCM. From these 36 excitations and their respective RAW pixel values, a vignetting correction function was fitted using Matlab's 'fit' function with model type 'poly22', indicating a second-order polynomial in both independent variables. The resulting polynomial is a function of the BPA location within the luminance measurement as follows:

F : X × Y → C,   (11)

with X = [0, 2592] ⊂ ℕ and Y = [0, 1944] ⊂ ℕ being the horizontal and vertical pixel positions, respectively, within the luminance measurement, and C = [0, 1] ⊂ ℝ being the vignetting correction:

F(x, y) = 0.4845 + 2.933 × 10^-4 x + 5.756 × 10^-4 y - 1.509 × 10^-7 x^2 + 7.324 × 10^-23 xy - 2.221 × 10^-7 y^2   (12)
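The correction can then be evaluated over the whole BPA grid and applied to a luminance measurement, as sketched below with NumPy. The coefficients are those of equation (12); evaluating F at BPA centres is an assumed convention, and dividing the measured values by F is the assumed sense of the correction (the text states only that each pixel's signal is corrected by F).

```python
import numpy as np

# Coefficients of equation (12), the fitted 'poly22' vignetting surface.
C00, C10, C01 = 0.4845, 2.933e-4, 5.756e-4
C20, C11, C02 = -1.509e-7, 7.324e-23, -2.221e-7   # xy term is numerically zero (bisymmetry)

def vignetting_map(n_cols=1296, n_rows=972):
    """Evaluate F(x, y) at the centre of every BPA, with x in X = [0, 2592] and
    y in Y = [0, 1944] (base-pixel coordinates; BPA-centre convention assumed)."""
    x = 2.0 * np.arange(n_cols) + 0.5
    y = 2.0 * np.arange(n_rows) + 0.5
    xx, yy = np.meshgrid(x, y)
    return C00 + C10 * xx + C01 * yy + C20 * xx**2 + C11 * xx * yy + C02 * yy**2

def correct_vignetting(luminance_ml):
    """Divide a 972 x 1296 luminance map by F so that edge BPAs, which receive
    less light through the lens, are scaled back up."""
    return np.asarray(luminance_ml, dtype=float) / vignetting_map()
```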
Figure 7. (a) Bi-symmetry employed for vignetting correction data, (b) function fitted for all CMOS chip pixels
The uniform luminance excitation locations and a colour contour plot of the fitted correction function can be found in Figure 7. As expected, the polynomial forms a paraboloid-like shape centred on the lens axis, which is very close to the centre of the luminance measurement collection of BPAs, indicating good lens/CMOS chip alignment within the RPiCM camera system. It should be noted that point spreading is ignored herein. This spreading is where light from particularly bright pixels 'spills' over into adjacent pixels actually associated with a different luminance location; it affects all digital camera systems in various magnitudes, as quantified by Inanici.12 This simplification is made because the spreading is expected to have negligible effects, as observed by Inanici.12 Finally, it should be noted that this calibration process can be used for
many different types of digital camera systems beyond the RPiCM system used in this paper.

4.4. Validation of RPiCM luminance calibration

To validate the luminance calibration of the RPiCM system, a traditional luminance meter was taken as a reference instrument and used to measure two points within a scene. Moreover, the RPiCM system was used to capture an HDRI measurement of luminance (HDRI ml) of the same scene, containing the positions measured by the luminance meter. Thus, after processing, the luminance meter readings and the HDRI ml can be compared as a validation of the HDRI technique and the calibration procedure described above. In Figure 8, the four ml with shutter speeds of 10^4, 10^3, 10^2 and 10^1 µs are displayed, which are used to construct the final HDRI
ml. As observed, many pixel values in the 10^4 and 10^3 µs shutter-speed mls have been saturated at 1023, meaning that the luminance from those areas of the field of view was larger than 180 and 1700 cd/m2, respectively, the maxima for these modes of operation of the RPiCM. Thus, for these pixel areas of the field of view, the shutter speed 10^2 µs measurement was used as the luminance measurement, as the sensor is acting within its dynamic range in that mode of operation. Combining the four mls, converting from RGB_device to CIE-XYZ, scaling to absolute SI units, and correcting for vignetting, the final HDRI ml is given in Figure 9. The field of view image is clearly visible (a scene from a roof garden). The locations corresponding to the luminance meter measurement locations are marked by circles labelled G and F.
Figure 8. Luminance measured with different shutter speeds: 10^4 (top left), 10^3 (top right), 10^2 (bottom right) and 10^1 (bottom left) microseconds
Figure 9. The final HDRI ml resulting from combining the four luminance measurements in Figure 8
Table 2. Luminance measurements (cd/m2) of the scene in Figure 9 using a traditional luminance meter and the RPiCM

Method             Location G    Location F
Luminance meter    315           4500
RPiCM              315           4600
% Error            0             2.2
The luminance meter averages a small area, about a 1° planar angle revolved around the line of sight of the luminance meter; thus, to compare with the HDRI ml, an average of several BPAs is used. The results are listed in Table 2. As observed, the results are very close, well within 5.0% (the usual accuracy of the hand-held luminance meter). These differences can come from several sources, including: (1) not having selected exactly the BPAs from the HDRI ml that correspond to the field of view of the luminance meter, and (2) possible changes in the luminous conditions of the scene between gathering the HDRI and the hand-held luminance meter measurements. Possible changes in the luminous environment, caused by dynamic sky conditions, are the primary reason the validation was limited to only two locations (F
and G). These two locations, however, are useful because: (1) they are geometrically separated from each other (two luminance meter measurements are needed, yet only one HDRI ml gathers the same information), and (2) the magnitudes of the two luminance measurements were captured by different shutter-speed mls within the HDRI ml, thus illustrating the advantage of using HDRI for increasing the dynamic range of the RPiCM system. Given the very consistent linear behaviour of the luminance calibration discussed above, the well-characterised spectral behaviour of the CMOS chip, the detailed correction for vignetting, the careful handling of the data in the HDRI ml processing, and finally the uncertainty that exists in the luminance meter itself, the authors believe the luminance calibration of the RPiCM system was successful. The results lie within the industry-standard confidence levels for luminance meters and, in the authors' opinion, the RPiCM system actually gives more trustworthy results than the luminance meter itself in measuring the luminances of the present case study.
5. Applications and limitations

While calibration of digital camera chips for luminance measurement is not new,12,32 it is typically performed with bulky, expensive digital SLR cameras, complicated lenses, and data flows that involve downloading and analysing the luminance data on a general-purpose computer.15 This process, while effective, involves costly equipment and manual manipulation of the data between system components. The proposed method in this paper uses a Raspberry Pi 2 Model B (RPi)16 as the processing element and the companion Raspberry Pi Camera Module (RPiCM). It is an economical, compact and total-system alternative that has both advantages and limitations.

5.1. Low-cost luminance sensor

The RPi and RPiCM are both extremely affordable, costing approximately $40 for an RPi and $30 for an RPiCM at the time of writing. This cost should be compared to a couple of thousand dollars for a bare-minimum DSLR camera and processing computer, up to several thousand dollars for a typical DSLR and lens system. Given concerted embedded-systems development, using camera systems, processors and connecting components purchased in bulk, the price of the RPiCM system could certainly be brought down even further.

5.2. Compact size, low weight

The RPi measures 85.6 × 56 × 21 mm³ with a weight of 45 g, and the CM measures 25 × 20 × 9 mm³ with a weight of 3 g. This is far smaller than a DSLR camera typically used in spatial luminance measurement, yet the RPi also provides the computational ability needed for post-processing images and creating a luminance map. For a DSLR architecture, a standalone computer would be needed for this task in addition to the DSLR camera itself.
5.3. Permanent luminance sensor

The total system is thus smaller and lighter than the average fire alarm used in commercial buildings throughout the world. Coupled with the low cost, an RPiCM system could thus be deployed on drop ceilings or affixed to permanent partition walls, or even cubicle sides, within the built environment, for new buildings and retrofit projects, in a permanent capacity to gather data for both site investigation and daylighting/electric lighting control applications. While not aligned perfectly with the occupants' field of view inside a building, the proposed application of the RPiCM system offers a measure of the glare conditions experienced by the occupants with some degree of noise. Ceiling-mounted illuminance meters, acting as a proxy for desktop illuminance, are already available on the market.33 It is believed that a similar type of proxy measurement for glare experienced by the user could be employed using the system at various locations throughout the occupied space.

5.4. Expensive calibration process

While the hardware is very affordable, the calibration processes are not. The equipment involved, both the Bentham/IVT PVE300 and an integrating sphere with a uniform luminance port, represents a large initial investment, requires annual or semi-annual recalibration, and needs a qualified technician for operation. The authors' answer to these limitations comes in two forms: (1) with increased volumes of production and an established procedure common in manufacturing, the cost of calibration will come down, and (2) if glare is the primary use of the RPiCM system, absolute luminance measurement is not necessary, thus relative luminance values could be used, forgoing the need for any absolute luminance calibration altogether.
5.5. Pulsating light source issues

It has been observed by the authors that digital cameras, and other light-sensing instruments, do not act as expected if they are exposed to pulsating light sources. Human eyes cannot see this pulsation (assuming a pulsation rate above the critical flicker frequency (CFF)), but light sensors, such as the RPiCM, often do. A detailed explanation of the CFF phenomenon and its impact on lighting measurement is beyond the scope of this paper; briefly, it relates to the integration time of the human visual system versus the integration time of the CMOS sensor for the same magnitude of luminance, given limitations in dynamic range. More information is given in The Lighting Handbook, Section 4.17.11 Pulsating light sources present issues for absolute luminance measurement, and the authors currently submit that they are insurmountable given the setup used in this work.
5.6. Portable luminance sensor

Given that retrofitting buildings to improve energy use is a common exercise and typically involves luminance measurement, the RPiCM can be used as an affordable method for spatial luminance investigation of a site. It is envisioned that the RPiCM system could be brought to a site and used to gather data for low-budget lighting designs, or by students to gain an understanding of luminance magnitudes in a space. Studying luminance magnitude distributions across a field of view, e.g. Figure 9, helps build intuition of how the space is experienced by occupants from a lighting perspective.

6. Conclusions

In this paper the authors conclude:

- There exists a need in the built environment for spatial luminance-based sensing, to calculate glare metrics, in both operation and site investigations.
- Embedded camera absolute luminance calibration is complicated; however, if taken step by step, such calibration can be implemented for embedded cameras.
- Spatial luminance sensing can be accomplished using small, affordable embedded cameras and processors, as shown here with the Raspberry Pi and Raspberry Pi Camera Module.

Acknowledgements

The authors would like to thank Johnson Wong and Jian Wei Ho of the Solar Energy Research Institute of Singapore (SERIS) and Yuanjie Liu of the National Metrology Centre of Singapore for their helpful suggestions and assistance in the calibration of the RPiCM. Also, Clement Barthes is recognised for his early stage suggestions.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

This research was funded by the Republic of Singapore's National Research Foundation through a grant to the Berkeley Education Alliance for Research in Singapore (BEARS) for the Singapore-Berkeley Building Efficiency and Sustainability in the Tropics (SinBerBEST) program. BEARS has been established by the University of California, Berkeley, as a centre for intellectual excellence in research and education in Singapore.
References

1 Aries MBC, Aarts MPJ, Van Hoof J. Daylight and health: A review of the evidence and consequences for the built environment. Lighting Research and Technology 2015; 47: 6–27.
2 Edwards L, Torcellini PA. A Literature Review of the Effects of Natural Light on Building Occupants. Golden, CO: National Renewable Energy Laboratory, 2002.
3 Lechner N. Heating, Cooling, Lighting. 4th Edition. Hoboken, NJ: John Wiley and Sons, 2009.
4 LI-COR Incorporated. LI-210R Photometric Sensor. Retrieved 10 September 2015, from http://www.licor.com/env/products/light/photometric.html
5 EKO Instruments. ML-020S Lux Sensor. Retrieved 11 September 2015, from http://ekoeu.com/products/solar-radiation-and-photonic-sensors/small-sensors/ml-020s-lux-sensor
6 Gossen. MAVO-SPOT 2 USB. Retrieved 8 August 2015, from http://www.gossen-photo.de/pdf/GOSSEN_Lichtmesstechnik_english.pdf
7 Popat PP. Closed-loop, daylight-sensing, automatic window-covering system insensitive to radiant spectrum produced by gaseous discharge lamps. US Patent 6084231, 4 July 2000.
8 Palmer JM, Grant BG. The Art of Radiometry. Bellingham, WA: SPIE Press, 2010.
9 Commission Internationale de l'Eclairage. Term List. Retrieved 14 September 2015, from http://eilv.cie.co.at/term/718
10 Konica Minolta. Luminance Meter LS 110 Instruction Manual. Retrieved 8 August 2015, from http://www.konicaminolta.com/instruments/download/instruction_manual/light/pdf/ls-100-110_instruction_eng.pdf
11 DiLaura DL, Houser KW, Mistrick RG, Steffy G (eds). The Lighting Handbook. 10th Edition. New York: Illuminating Engineering Society of North America, 2011.
12 Inanici MN. Evaluation of high dynamic range photography as a luminance data acquisition system. Lighting Research and Technology 2006; 38: 123–134.
13 Madden BC. Extended Intensity Range Imaging. Philadelphia, PA: University of Pennsylvania, 1993.
14 Mitsunaga T, Nayar SK. Radiometric self-calibration: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, Colorado, USA, 23–25 June 1999: pp. 197–208.
15 Ward G. Photosphere Quick Start. Retrieved 8 August 2015, from http://www.anyhere.com/
16 Raspberry Pi Foundation. Raspberry Pi 2 Model B. Retrieved 10 May 2015, from https://www.raspberrypi.org/products/raspberry-pi-2model-b/
17 Raspberry Pi Foundation. Camera Module. Retrieved 10 May 2015, from https://www.raspberrypi.org/documentation/usage/camera/
18 Wikipedia. Bayer Filter. Retrieved 9 April 2015, from https://en.wikipedia.org/wiki/Bayer_filter
19 Jones D. Picamera 1.10. Retrieved 29 June 2015, from https://pypi.python.org/pypi/picamera/1.10
20 Guth SK. A method for the evaluation of discomfort glare. Illuminating Engineering 1963; 58: 351–364.
21 Commission Internationale de l'Eclairage. CIE 117-1995 Discomfort Glare in Interior Lighting. Vienna: CIE, 1995.
22 Boyce PR, Crisp VH, Simons RH, Rowlands E. Discomfort glare sensation and prediction: Proceedings of the CIE 19th Session, Kyoto, Japan. Paris: CIE, 1979.
23 Hirning MB, Isoardi GL, Cowling I. Discomfort glare in open plan green buildings. Energy and Buildings 2014; 70: 427–440.
24 Jakubiec JA, Reinhart CF, Van Den Wymelenberg K. Towards an integrated framework for predicting visual comfort conditions from luminance-based metrics in perimeter daylit spaces: Proceedings of the IBPSA Conference, 7–9 December 2015, Hyderabad, India: pp. 1189–1196.
25 Kent MG, Altomonte S, Tregenza PR, Wilson R. Discomfort glare and time of day. Lighting Research and Technology 2015; 47: 641–657.
26 Crawley DB, Lawrie LK, Winkelmann FC, Buhl WF, Huang YJ, Pedersen CO, Strand RK, Liesen RJ, Fisher DE, Witte MJ, Glazer J. EnergyPlus: Creating a new-generation building energy simulation program. Energy and Buildings 2001; 33: 319–331.
27 Vrhel MJ, Trussell HJ. Color device calibration: a mathematical formulation. IEEE Transactions on Image Processing 1999; 8: 1796–1806.
28 Martínez-Verdú F, Pujol J, Vilaseca M, Capilla P. Characterization of a digital camera as an absolute tristimulus colorimeter: Proceedings of the International Society for Optics and Photonics Conference on Electronic Imaging, 20 January 2003, Santa Clara, CA, USA: pp. 197–208.
29 Bentham Instruments Limited. PVE300 Photovoltaic Spectral Response. Retrieved 30 August 2015, from http://www.bentham.co.uk/pdf/PVE300.pdf
30 International Standards Organization. ISO 17321-1/2012 Graphic Technology and Photography – Colour Characterisation of Digital Still Cameras (DSCs) – Part 1: Stimuli, Metrology and Test Procedures. Geneva: ISO, 2012.
31 Goodman JW. Introduction to Fourier Optics. Greenwood Village, CO: Roberts and Company Publishers, 2005.
32 Bellia L, Cesarano A, Minichiello F, Sibilio S, Spada G. Calibration procedures of a CCD camera for photometric measurements: Proceedings of the IEEE Instrumentation and Measurement Technology Conference, 20 May 2003: pp. 89–93.
33 Lutron. Radio Power Saver. Retrieved 10 January 2016, from http://www.lutron.com/en-US/Products/Pages/Sensors/RadioPowrSavrDaylightSensor/Overview.aspx