Software Defined Multi-Spectral Imaging for Arctic Sensor Networks
SPIE – Technology for Multispectral Imagery

April 19, 2016

Sam Siewert

The Current SDMSI Research Team (here today in the audience)
– Sam Siewert – PI at ERAU, Adjunct CU-Boulder, SDMSI Lead
– Kenrick Mock – PI at U. of Alaska, ADAC Sensor Networks
– Ryan Claus – ERAU, DE1-SoC FPGA Power Analysis Drivers
– Matthew Demi Vis – ERAU, NVIDIA Jetson Power Analysis Drivers
– Ramnarayan Krishnamurthy – CU Boulder, CUDA Benchmarks for NVIDIA Jetson
– Surjith B. Singh – CU Boulder, OpenCL Benchmarks for DE1-SoC

Sponsored in part by the Arctic Domain Awareness Center, U. of Alaska
– https://adac.hsuniversityprograms.org/centers-of-excellence/adac/
– Acknowledgement: "This material is based upon work supported by the U.S. Department of Homeland Security under Grant Award Number DHS-14-ST-061-COE-001A-02."
– Disclaimer: "The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security."

ERAU Internal Grant 13450

University Collaborators
– ERAU Prescott
– U. of Alaska Anchorage
– CU Boulder

Past Industry Sponsors: Intel, Altera, Mentor Graphics, NVIDIA

Arctic, Alaska – Global Perspective
– Russia–US Border Between Big and Little Diomede Islands
– Kamchatka Peninsula to the South; North Korea, Japan, Scandinavia
– Bering Sea, Chukchi Sea, Beaufort Sea, Arctic Ocean
[Map: https://nordpil.com/portfolio/mapsgraphics/arctic-topography/ – labels: Diomede, Bering Strait; Anchorage, AK; Petropavlovsk Kamcatskij; inset: https://www.google.com/maps/place/Anchorage,+AK]

Smart Camera Deployment – Marine
– Land Towers (Light Stations, Ports, Weather Stations)
– Self-Powered Ocean Buoys
– Mast Mounted on Vessels
[Photos: mast mount (http://www.uscg.mil/d17/cgcspar/); buoy mount (http://www.oceanpowertechnologies.com/); pole mount (http://www.esrl.noaa.gov/gmd/obop/brw/)]

Smart Camera Deployment – Aerial
– UAV Systems – ERAU ICARUS Group, 600 gram payload
– UAV and Experimental Aircraft (ERAU, U. of Alaska)
– Kite Aerial Photography, Balloon Missions (ERAU, CU)
[Photo: Sam Siewert – ERAU ICARUS Group]

Self-ID Fusion, Opportunistic Uplink
– Integration and System-of-Systems Between ADS-B and S-AIS for Vessel / Aircraft / UAV Awareness
– Smart Cameras Can Monitor and Plan Uplink Opportunity
[System fusion & uplink examples: https://www.flightradar24.com/59.37,-156.71/6; http://www.marinetraffic.com/en/ais/home/centerx:-151/centery:61/zoom:8]

Research Goals and Objectives
Low-Cost Multi-Channel Imager
– Primary: Visible + IR for Multi-Spectral Imaging
– Secondary: Two-Channel Visible for Passive 3D Imaging

Operate for 1 Year Unattended in the Arctic
– 6 months of DARKNESS, 6 months of SUN
– Nominal Operating Temperature Range of -40F to 80F

Low Power (10 to 20 Watts)
– Power Efficient (Fuel Cell Operation) – No Batteries
– Continuous Fusion and 3D Transforms

Smarter "Go-Pro Like" Instrument for Safety, Security, SAR Ops
– Integrate Off-the-Shelf LWIR, Visible, NIR for Real-Time Image Fusion
– Drop-in-Place on UAV, Marine Vessel Mast Mount, Buoy, Port Pole Mount
– Smarter (Segmentation, Fusion, Saliency), Multi-Channel
– LIDAR Verification of 3D Passive Mapping
– Intelligent Uplink to Vessels, Aircraft, UAVs

Multi-Spectral Fusion and Passive 3D Mapping
– With GP-GPU or FPGA Co-Processing
– CPU Used for Saliency, Interface, and Sensor Network Uplink

Multi-Spectral: Visible, NIR, LWIR
– Visible – 350 to 740 nm (0.35 to 0.74 micron)
– NIR – 0.7 to 1 micron (Vegetation – NDVI)
– LWIR – 8 to 14 micron (Thermal Imaging, Water/Ice)
[Example images: melt-water drainage; visible SLR; DRS Tamarisk 640 LWIR]
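The NDVI noted for the NIR band is the normalized difference of the NIR and red channels, NDVI = (NIR − Red) / (NIR + Red). A minimal sketch in C++ with OpenCV [43], assuming pre-registered single-channel NIR and red frames (the file names are placeholders, not project artifacts):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Registered single-channel NIR and red frames (placeholder names).
    cv::Mat nir = cv::imread("nir.png", cv::IMREAD_GRAYSCALE);
    cv::Mat red = cv::imread("red.png", cv::IMREAD_GRAYSCALE);

    cv::Mat nirF, redF, ndvi;
    nir.convertTo(nirF, CV_32F);
    red.convertTo(redF, CV_32F);

    // NDVI = (NIR - Red) / (NIR + Red); epsilon avoids divide-by-zero
    // on dark pixels. Result is in [-1, 1]; vegetation trends positive.
    cv::Mat num = nirF - redF;
    cv::Mat den = nirF + redF + 1e-6f;
    cv::divide(num, den, ndvi);

    return 0;
}
```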

Feasibility for SAR Ops & Port Security
– Add Camera Systems to USCG Cutter (Mast Mount), Ports (Pole Mount)
– Detect Bodies in the Water, Port Trespassing; Complements Aircraft FLIR
– Complement with UAV Mapping and Monitoring
[Left: surfers in the water – hand-held, mast mounted, buoys; complements existing helicopter and C-130 FLIR (field test – June 2015, Malibu). Right: trespassers at night shown on jetty – hand-held, port drop-in-place, buoys; complements existing security; off-grid installations possible (field test – June 2015, San Pedro)]

Scene Understanding – Saliency
– Behavior Modeling of Targets and Threats
– Skeletal Transformation, Posture, Threat Assessment

Concept #1 – FPGA Acceleration
[Block diagram] Many multispectral focal planes – USB 3.0 SD (panchromatic, NIR, RGB) plus SD analog (LWIR) – feed a DE1-SoC FPGA CVPU (Computer Vision Processing Unit) with a flash SD card (local database); outputs drive thermal fusion assessment, saliency & behavioral assessment, 2D/3D spatial assessment, and cloud analytics and machine learning.

Concept #2 – GP-GPU Acceleration
[Block diagram] Many multispectral focal planes – USB 3.0 HD (panchromatic, NIR, RGB) plus SD analog (LWIR) – feed a Jetson Tegra X1 with GP-GPU co-processing and a flash SD card (local database); outputs drive thermal fusion assessment, saliency & behavioral assessment, 2D/3D spatial assessment, and cloud analytics and machine learning.

Test Config. #1 – DE1-SoC FPGA
5 Watts at Idle, Plus 1.5 W per Camera (Three Cameras) = 9.5 W

Test Config. #2 – Jetson TK1 GP-GPU
2 Watts at Idle, Plus 1.5 W per Camera (Three Cameras) = 6.5 W

FPGA Results – Sobel
ALUTs: 10,187; Registers: 13,561; Logic Utilization: 7,427 / 32,070 (23%)

Table 2. Sobel Continuous Transform Power Consumption by Cyclone V FPGA

Resolution | Transform Power (Watts) | (Pixels/sec) per Watt | Saturation FPS | Bus Transfer Rate (MB/sec)
-----------|-------------------------|-----------------------|----------------|---------------------------
320x240    | 5.655                   | 2,050,716             | 151            | 11.06
640x480    | 5.700                   | 2,107,284             | 39.1           | 11.46
1280x960   | 5.704                   | 2,143,506             | 9.95           | 11.66
2560x1920  | 5.696                   | 2,157,303             | 2.50           | 11.72
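For reference, the Sobel transform benchmarked here is the standard 3x3 horizontal/vertical gradient pair combined into an edge magnitude. A minimal CPU sketch in C++ with OpenCV [43]; the benchmark kernels themselves are the OpenCL/CUDA implementations in the GitHub repository, so this is only a functional stand-in:

```cpp
#include <opencv2/opencv.hpp>

// Sobel edge magnitude: 3x3 Gx and Gy convolutions, then sqrt(Gx^2 + Gy^2).
cv::Mat sobelMagnitude(const cv::Mat& gray) {
    cv::Mat gx, gy, mag;
    cv::Sobel(gray, gx, CV_32F, 1, 0, 3); // horizontal gradient, 3x3 kernel
    cv::Sobel(gray, gy, CV_32F, 0, 1, 3); // vertical gradient, 3x3 kernel
    cv::magnitude(gx, gy, mag);           // per-pixel gradient magnitude
    return mag;
}
```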

FPGA Results – Pyramidal
ALUTs: 24,456; Registers: 34,062; Logic Utilization: 17,721 / 32,070 (55%)

Table 3. Pyramidal Laplacian Resolution Up-Conversion Continuous Transform Power

Resolution | Transform Power (Watts) | (Pixels/sec) per Watt | Saturation FPS | Bus Transfer Rate (MB/sec)
-----------|-------------------------|-----------------------|----------------|---------------------------
320x240    | 6.009                   | 889,546               | 69.6           | 5.10
640x480    | 6.013                   | 904,281               | 17.7           | 5.19
1280x960   | 6.038                   | 905,624               | 4.45           | 5.21
2560x1920  | 6.192                   | 889,054               | 1.12           | 5.25

Table 4. Pyramidal Gaussian Resolution Down-Conversion Continuous Transform Power

Resolution | Continuous Transform Power (Watts) | (Pixels/sec) per Watt | Saturation FPS | Bus Transfer Rate (MB/sec)
-----------|------------------------------------|-----------------------|----------------|---------------------------
320x240    | 5.968                              | 2,445,040             | 190            | 13.92
640x480    | 6.018                              | 2,399,202             | 47.0           | 13.77
1280x960   | 6.023                              | 2,427,813             | 11.9           | 13.95
2560x1920  | 6.109                              | 2,309,154             | 2.87           | 13.45
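The pyramidal transforms in Tables 3 and 4 are the usual Gaussian/Laplacian pyramid steps: down-conversion is a Gaussian blur plus 2x decimation, and up-conversion interpolates back to full resolution so a Laplacian (detail) band can be formed by differencing. A minimal sketch with OpenCV's pyramid helpers [43], again only a CPU stand-in for the FPGA OpenCL kernels (assumes even image dimensions):

```cpp
#include <opencv2/opencv.hpp>

// One pyramid level: Gaussian down-conversion plus Laplacian up-conversion.
void pyramidLevel(const cv::Mat& img, cv::Mat& down, cv::Mat& laplacian) {
    cv::Mat up, imgF, upF;
    cv::pyrDown(img, down);           // Gaussian blur + 2x decimation
    cv::pyrUp(down, up, img.size());  // interpolate back to full resolution
    img.convertTo(imgF, CV_32F);
    up.convertTo(upF, CV_32F);
    laplacian = imgF - upF;           // detail band for Laplacian fusion
}
```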

GP-GPU Results – Sobel

Table 5. Sobel Continuous Transform Power

Resolution | Continuous Power at 1 Hz (Watts) | Continuous Power at 30 Hz (Watts) | (Pixels/sec) per Watt @ 1 Hz | (Pixels/sec) per Watt @ 30 Hz | Saturation FPS
-----------|----------------------------------|-----------------------------------|------------------------------|-------------------------------|---------------
320x240    | 4.241                            | 4.932                             | 18,109                       | 467,153                       | 1624
640x480    | 4.256                            | 4.984                             | 72,180                       | 1,849,117                     | 840
1280x960   | 4.266                            | 5.142                             | 288,045                      | 7,169,195                     | 237
2560x1920  | 4.325                            | 7.326                             | 1,136,462                    | 20,127,764                    | 55
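All of the tables report the same efficiency metric: pixels processed per second, divided by the measured continuous transform power. A one-line check of that arithmetic (the example values are taken from Table 5):

```cpp
// Efficiency metric used in Tables 2-6: (pixels/sec) per Watt.
double pixelsPerSecPerWatt(int width, int height, double fps, double watts) {
    return (static_cast<double>(width) * height * fps) / watts;
}

// Table 5, 320x240 at 30 Hz drawing 4.932 W:
// (320 * 240 * 30) / 4.932 = 467,153 pixels/sec per Watt, as tabulated.
```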

GP-GPU Results – Pyramidal

Table 6. Pyramidal Up- and Down-Conversion Continuous Transform Power

Resolution | Continuous Power at 1 Hz (Watts) | Continuous Power at 20 Hz (Watts) | (Pixels/sec) per Watt @ 1 Hz | (Pixels/sec) per Watt @ 20 Hz | Saturation FPS
-----------|----------------------------------|-----------------------------------|------------------------------|-------------------------------|---------------
320x240    | 4.104                            | 4.824                             | 18,713                       | 477,612                       | 1120
640x480    | 4.116                            | 5.460                             | 74,636                       | 1,687,912                     | 325
1280x960   | 4.152                            | 6.864                             | 295,954                      | 5,370,629                     | 82
2560x1920  | 4.224                            | 13.44                             | 1,163,636                    | 10,971,429                    | 20

Future Work
– We Have Completed a Hough Lines Continuous Transform Test, Available on GitHub (see the sketch after this list)
– Hough Power Curves Not Yet Produced – In Progress
– Goal: Identify All Continuous Transform Primitives Used in Infrared + Visible Fusion and 3D Mapping
– Pixel-Level Emphasis, But Also Plan to Review Feature Level:
  – Camera Extrinsic and Intrinsic Transformations
  – Registration
  – Resolution and AR Matching
  – Methods of Pixel-Level Fusion in Review [10], [11], [12], [14]
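For the Hough lines test mentioned above, the transform accumulates votes for (rho, theta) line parameters over an edge map. A minimal OpenCV sketch of the same primitive (the actual benchmark kernel is in the GitHub repository; the thresholds here are illustrative, not the project's):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Standard Hough line transform over a Canny edge map.
std::vector<cv::Vec2f> detectLines(const cv::Mat& gray) {
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);       // edge pixels feed the accumulator
    std::vector<cv::Vec2f> lines;          // each entry is (rho, theta)
    cv::HoughLines(edges, lines, 1.0, CV_PI / 180.0, 120); // 1 px, 1 deg bins
    return lines;
}
```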


Conclusion
– Please Download Our Benchmarks
  – https://github.com/siewertserau/fusion_coproc_benchmarks
  – MIT License
– Test on NVIDIA GP-GPU or FPGA SoCs (Altera, Xilinx) and Share Results Back
– Please Help Us Add Benchmarks Critical to Continuous 3D Mapping and Infrared + Visible Fusion (Suite of Primitives)
– Open-Source Hardware, Firmware, and Software for Multispectral Smart Camera Applications

Research Goals
Near Term (2016)
– Hardware Acceleration – GP-GPU vs. FPGA Embedding and Efficiency (Watts / Transform / sec)
– Fusion Algorithms for LWIR + Visible From the U. of Alaska College of Engineering Roof
– Basic Target Tracking and Threat Detection [Moose, Bear, People, Vehicles] – Standard Algorithms, Improved Performance
– One Year Operation in the Sub-Arctic

Longer Term (2017)
– Fuel Cell Power from Wind and Solar Re-charge, Super-Capacitor Storage
– Opportunistic Uplink/Downlink
– Test Deployment in the Arctic (Port, Vessel, Buoy, UAV)

Fundamental
– Passive 3D and Multi-Spectral Scene Parsing
– Salient Feature Capture [Threats, Targets, Surprise]
– Multiple Detectors Acting as a Single Multispectral Imager
– No Batteries

SMART CAM ARCTIC POWER SUBSYSTEM
Arctic Domain Awareness Center, U. of Alaska (ERAU Capstone Arctic Power Project)

ADAC Sensor Network Goals
New Low-Cost Wireless Sensors for Arctic Monitoring: ADAC is developing low-cost, wireless sensors that do not require batteries for remote Arctic monitoring. These low-power sensors can form ad-hoc sensor networks for remote vessel tracking, surveillance, and monitoring of climate change (e.g., ice flow, depth). They can collect and store data for long periods of time without external power, then transmit the data to unmanned aerial sensors or vessels of opportunity.

Smart Cam Node – Power Requirements [estimate 20 Watts]
– LWIR Cameras – ≈1.5 W in Continuous Operation x 2 = 3 W [DRS Tamarisk, FLIR Vue]
– Processing [Jetson TK1, DE1-SoC] – ≈6 W in Continuous Operation
– Networking (Unknown)
– Storage (Unknown)
– Efficiency and Margin (Unknown)

Operate for 1 Year Unattended [6 Months of DARKNESS, 6 Months of SUN], Nominal Operating Temperature Range of -40F to 80F
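The known loads above total 9 W, leaving roughly half the 20 W estimate for the unknowns. A back-of-the-envelope sketch using only the slide's figures (the 20 W total is the slide's estimate, not a measured value):

```cpp
// Smart cam node power budget from the slide's figures. Networking,
// storage, and efficiency/margin are unknowns, so they share the
// remainder of the 20 W estimate.
double smartCamBudgetRemainderWatts() {
    const double lwirCameras = 2 * 1.5;           // 3 W continuous (two LWIR)
    const double processing  = 6.0;               // Jetson TK1 or DE1-SoC
    const double estimate    = 20.0;              // node-level estimate
    return estimate - (lwirCameras + processing); // ~11 W for the unknowns
}
```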

How To Generate 20 Watts Without Batteries?
PEM (Proton Exchange Membrane) Fuel Cells
– Powered by Hydrogen [Gas Canister]
– Expensive, but Off-the-Shelf
– E.g., Horizon 20 W H2 Fuel Cell

Ultra-Capacitors [Quick Store and Discharge, -40 to 149F Operation]
– http://www.maxwell.com/products/ultracapacitors/
– http://batteryuniversity.com/learn/article/whats_the_role_of_the_supercapacitor

Alternatives:
– H2 Fuel – Industrial or Innovative H2 Economy [HyCan]
– Solar Cells [Summer Only]
– Wind Power Generation [Extreme Wind Variation]
– Tidal or Hydroelectric [Coastal and USCG Use]
– Diesel Generators and Wind-Diesel [State of Practice]
– Other?
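A rough sizing check for the hydrogen option (a back-of-the-envelope estimate, assuming the textbook ≈33 kWh/kg lower heating value of hydrogen and a typical 40-50% PEM conversion efficiency; neither figure comes from the project):

\[
E = P \cdot t = 20\,\mathrm{W} \times 8760\,\mathrm{h} \approx 175\,\mathrm{kWh/yr},
\qquad
m_{\mathrm{H_2}} \approx \frac{175\,\mathrm{kWh}}{(0.4\text{--}0.5)\times 33\,\mathrm{kWh/kg}} \approx 11\text{--}13\,\mathrm{kg\ H_2/yr}.
\]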

Fuel Cell Design
– Feasibility of Year-Long Unattended Power [20 W] in Arctic Conditions
– Integration of Power Generation, Storage, Management, and Distribution
– Power Electronics
– Power Monitoring, Health & Status, Safety
– Demonstration of Proof-of-Concept
– Field Test by U. of Alaska ADAC

Summary
– Open Reference Design for Research
– Configurable Research Platform for 3D Passive & Active Mapping and Multi-Spectral Imaging
– Low-Cost Arctic Research Platform, No Batteries, Drop-in-Place
– UAV – Battery Powered; Soil Erosion, Vegetation, and Animal Surveys; SAR Ops

REFERENCES

1. Dominguez, A., Kleissl, J., Luvall, J. C., & Rickman, D. L. (2011). High-resolution urban thermal sharpener (HUTS). Remote Sensing of Environment, 115(7), 1772-1780.
2. Hines, G. D., Rahman, Z. U., Jobson, D. J., & Woodell, G. A. (2003, August). Multi-image registration for an enhanced vision system. In AeroSense 2003 (pp. 231-241). International Society for Optics and Photonics.
3. Gyaourova, A., Bebis, G., & Pavlidis, I. (2004). Fusion of infrared and visible images for face recognition. In Computer Vision – ECCV 2004 (pp. 456-468). Springer Berlin Heidelberg.
4. Kriesel, J. M., & Gat, N. (2010, April). True-color night vision (TCNV) fusion system using a VNIR EMCCD and a LWIR microbolometer camera. In SPIE Defense, Security, and Sensing XIX, 7697. International Society for Optics and Photonics.
5. Cubero-Castan, M., Chanussot, J., Achard, V., Briottet, X., & Shimoni, M. (2015). A physics-based unmixing method to estimate subpixel temperatures on mixed pixels. IEEE Transactions on Geoscience and Remote Sensing, 53(4), 1894-1906.
6. Agam, N., Kustas, W. P., Anderson, M. C., Li, F., & Neale, C. M. (2007). A vegetation index based technique for spatial sharpening of thermal imagery. Remote Sensing of Environment, 107(4), 545-558.
7. Siewert, S. B., Shihadeh, J., Myers, R., Khandhar, J., & Ivanov, V. (2014, May). Low-cost, high-performance and efficiency computational photometer design. In SPIE Sensing Technology + Applications, 9121. International Society for Optics and Photonics.
8. Thompson, D. R., Allwood, A. C., Bekker, D. L., Cabrol, N. A., Fuchs, T., & Wagstaff, K. L. (2012, March). TextureCam: Autonomous image analysis for astrobiology survey. In Lunar and Planetary Science Conference (Vol. 43, p. 1659).
9. Liu, Z. (2010). Investigations on multi-sensor image system and its surveillance applications. Universal-Publishers.
10. Piella, G. (2003). A general framework for multiresolution image fusion: from pixels to regions. Information Fusion, 4(4), 259-280.
11. Blum, R. S., & Liu, Z. (Eds.). (2005). Multi-sensor image fusion and its applications. CRC Press.

REFERENCES CONTINUED

12. Liu, Z., Blasch, E., Xue, Z., Zhao, J., Laganiere, R., & Wu, W. (2012). Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(1), 94-109.
13. Simone, G., Farina, A., Morabito, F. C., Serpico, S. B., & Bruzzone, L. (2002). Image fusion techniques for remote sensing applications. Information Fusion, 3(1), 3-15.
14. Mitchell, H. B. (2010). Image fusion: theories, techniques and applications. Springer Science & Business Media.
15. Alparone, L., Aiazzi, B., Baronti, S., & Garzelli, A. (2015). Remote Sensing Image Fusion. Signal and Image Processing of Earth Observations, CRC Press.
16. Szustakowski, M., Ciurapinski, W. M., Zyczkowski, M., Palka, N., Kastek, M., Dulski, R., & Sosnowski, T. (2009, September). Multispectral system for perimeter protection of stationary and moving objects. In SPIE Europe Security + Defence, 7481. International Society for Optics and Photonics.
17. Apollo Mapping Inc. (https://apollomapping.com/), McCarty, B. A., & Nelson, K. (2016). "Image Hunter," https://imagehunter.apollomapping.com/, Boulder, Colorado, USA.
18. National Aeronautics and Space Administration, Moderate Resolution Imaging Spectroradiometer (http://modis.gsfc.nasa.gov/), Maccherone, B., & Frazier, S. (2016). "Data," http://modis.gsfc.nasa.gov/data/, NASA Earth Science Division and NASA Goddard Space Flight Center, Greenbelt, Maryland, USA.
19. United States Geological Survey Landsat Missions (http://landsat.usgs.gov/), (2016). "Earth Explorer," http://earthexplorer.usgs.gov/, United States Department of the Interior, USA.
20. Miller, D. W. (July 2015). 2015 NASA Technology Roadmaps, TA4: Robotics and Autonomous Systems. National Aeronautics and Space Administration (http://www.nasa.gov), Office of the Chief Technologist (http://www.nasa.gov/offices/oct/home/roadmaps/index.html).
21. Sharma, G., Jurie, F., & Schmid, C. (2012, June). Discriminative spatial saliency for image classification. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3506-3513). IEEE.
22. Toet, A. (2011). Computational versus psychophysical bottom-up image saliency: A comparative evaluation study. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(11), 2131-2146.
23. Valenti, R., Sebe, N., & Gevers, T. (2009, September). Image saliency by isocentric curvedness and color. In 2009 IEEE 12th International Conference on Computer Vision (pp. 2185-2192). IEEE.

REFERENCES CONTINUED

24. Wang, M., Konrad, J., Ishwar, P., Jing, K., & Rowley, H. (2011, June). Image saliency: From intrinsic to extrinsic context. In 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 417-424). IEEE.
25. Liu, F., & Gleicher, M. (2006, July). Region enhanced scale-invariant saliency detection. In 2006 IEEE International Conference on Multimedia and Expo (pp. 1477-1480). IEEE.
26. Cheng, M. M., Mitra, N. J., Huang, X., & Hu, S. M. (2014). SalientShape: Group saliency in image collections. The Visual Computer, 30(4), 443-453.
27. Maini, R., & Aggarwal, H. (2009). Study and comparison of various image edge detection techniques. International Journal of Image Processing (IJIP), 3(1), 1-11.
28. Duda, R. O., & Hart, P. E. (1972). Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15(1), 11-15.
29. Ranchin, T., & Wald, L. (2000). Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation. Photogrammetric Engineering and Remote Sensing, 66(1), 49-61.
30. Boyer, K. L., & Kak, A. C. (1988). Structural stereopsis for 3-D vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(2), 144-166.
31. Szeliski, R. (2010). Computer vision: algorithms and applications. Springer Science & Business Media.
32. Tagliavini, G., Haugou, G., Marongiu, A., & Benini, L. (2015, June). A framework for optimizing OpenVX applications performance on embedded manycore accelerators. In Proceedings of the 18th International Workshop on Software and Compilers for Embedded Systems (pp. 125-128). ACM.
33. Stokke, K. R., Stensland, H. K., Griwodz, C., & Halvorsen, P. (2015, March). Energy efficient video encoding using the Tegra K1 mobile processor. In Proceedings of the 6th ACM Multimedia Systems Conference (pp. 81-84). ACM.
34. De La Piedra, A., Braeken, A., & Touhafi, A. (2012). Sensor systems based on FPGAs and their applications: A survey. Sensors, 12(9), 12235-12264.
35. Genovese, M., & Napoli, E. (2014). ASIC and FPGA implementation of the Gaussian mixture model algorithm for real-time segmentation of high definition video. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 22(3), 537-547.

REFERENCES CONTINUED

36. Eriksen, T., Høye, G., Narheim, B., & Meland, B. J. (2006). Maritime traffic monitoring using a space-based AIS receiver. Acta Astronautica, 58(10), 537-549.
37. Krapels, K., Driggers, R. G., & Garcia, J. F. (2007). Performance of infrared systems in swimmer detection for maritime security. Optics Express, 15(19), 12296-12305.
38. Hover, G., Mazour, T., Osmer, S., & Nash, L. (1982, September). Evaluation of forward looking infrared (FLIR) as a Coast Guard SAR sensor. In OCEANS 82 (pp. 491-495). IEEE.
39. Allen, J., & Walsh, B. (2008, May). Enhanced oil spill surveillance, detection and monitoring through the applied technology of unmanned air systems. In International Oil Spill Conference (Vol. 2008, No. 1, pp. 113-120). American Petroleum Institute.
40. Altera Inc. (November 2015). cv_5v4 Cyclone V Hard Processor System Technical Reference Manual. Altera Cyclone V SoCs (https://www.altera.com/products/soc/portfolio/cyclone-v-soc/overview.html), Quartus 15.1.
41. NVIDIA Inc. (October 2014). Technical Reference Manual – NVIDIA Tegra K1 Mobile Processor. DP-06905001_v03p.
42. Altera Inc. (November 2015). UG-OCL003, Altera SDK for OpenCL Best Practices.
43. Bradski, G., & Kaehler, A. (2008). Learning OpenCV: Computer vision with the OpenCV library. O'Reilly Media, Inc.