

photonics

Article

Considerations and Framework for Foveated Imaging Systems †

Ram M. Narayanan 1,*, Timothy J. Kane 1, Terence F. Rice 2 and Michael J. Tauber 2

1 Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802, USA; [email protected]
2 US Army Armament Research, Development and Engineering Center, Picatinny Arsenal, NJ 07086, USA; [email protected] (T.F.R.); [email protected] (M.J.T.)
* Correspondence: [email protected]; Tel.: +1-814-863-2602
† This work was presented partially at: Long, A.D.; Narayanan, R.M.; Kane, T.J.; Rice, T.F.; Tauber, M.J. Analysis and implementation of the foveated vision of the raptor eye. In Proceedings of the SPIE Conference on Image Sensing Technologies: Materials, Devices, Systems, and Applications III, Baltimore, MD, USA, 20–21 April 2016; pp. 98540T-1–98540T-9; and Long, A.D.; Narayanan, R.M.; Kane, T.J.; Rice, T.F.; Tauber, M.J. Foveal scale space generation with the log-polar transform. In Proceedings of the SPIE Conference on Image Sensing Technologies: Materials, Devices, Systems, and Applications IV, Anaheim, CA, USA, 12–13 April 2017; pp. 1020910-1–1020910-8.

Received: 7 March 2018; Accepted: 12 July 2018; Published: 19 July 2018

 

Abstract: Foveated sight as observed in some raptor eyes is a motivation for artificial imaging systems requiring both wide fields of view and specific embedded regions of higher resolution. These foveated optical imaging systems are applicable to many acquisition and tracking tasks and as such are often required to be relatively portable and operate in real time. Two approaches to achieve foveation have been explored in the past: optical system design and back-end data processing. In this paper, these previous works are compiled and used to build a framework for analyzing and designing practical foveated imaging systems. While each approach (physical control of optical distortion within the lens design process, and post-processing image re-sampling) has its own pros and cons, it is concluded that a combination of both techniques will further spur the development of more versatile, flexible, and adaptable foveated imaging systems in the future.

Keywords: scale space; receptive fields; log polar; foveation; computer vision; raptor eye; wide angle; local magnification; distortion control

1. Introduction and Motivation

1.1. Features of Raptors' Vision

Birds of prey, or raptors (e.g., falcons, hawks, and eagles), have two regions of the retina in each eye that are specialized for acute vision: the deep fovea and the shallow fovea. The line of sight of the deep fovea points forward and approximately 45° to the right or left of the head axis, while that of the shallow fovea also points forward but approximately 15° to the right or left of the head axis. The anatomy of the foveae suggests that the deep fovea has the higher acuity. Raptor eyes combine a retina that acts like a telephoto lens with stereo vision and UV light detection. Raptors thus employ modified lens and foveal arrangements and widened spacing of their photoreceptors in order to achieve the image resolution needed to locate their prey. Their special eye structure allows for simultaneous perception of distant and near objects and scenes. Using ray plotting through foveal outline traces, it was noted that sharp images were formed beneath the center and shoulders, with an enlarged center image accounting for the high acuity of raptors [1].

Photonics 2018, 5, 18; doi:10.3390/photonics5030018

www.mdpi.com/journal/photonics


Deep foveas were also seen to exaggerate the eccentricity of off-center images of a point source. Real raptors also appear to have higher visual acuity along the line of sight (LOS) of the deep fovea than along the LOS of the shallow fovea, because the receptor density in the deep fovea is higher than that in the shallow fovea. It was suggested that the deep fovea further enhances acuity by acting as a telephoto lens [2]. The brown falcon is exceptional in having equal receptor densities in the deep and shallow foveae [3]. Behavioral studies also suggest that real raptors have their highest visual acuity along the LOS of the deep foveae. In addition, birds have sharper vision than humans and can see in certain spectral bands, like the ultraviolet (UV), which humans cannot [4]. This is due to their significantly larger number and variety of retinal photoreceptive cones located at the back of the eye for enhanced color perception, and their significantly larger number of retinal photoreceptive rods for enhanced visual acuity. The speed of accommodation, i.e., the ability to change focus, was found to be very rapid in various species of owls [5].

1.2. Prior Work on Foveated Imaging

If an image sensor with similar properties is constructed, it would be useful for machine vision applications where high acuity in (select portions of) the far field of view and a wide view angle in the near field of view are of equal importance. A two-camera composite sensor combining a front end comprising a set of video cameras imitating a falconiform eye and a back end employing a log-polar model of retino-cortical projection in primates was demonstrated [6]. The sensor output was a composite log-polar image incorporating both near and far visual fields into a single homogeneous image space. A wide-angle lens design intended for foveated imaging applications was presented, and typical tradeoffs were discussed [7].
It was shown that, with appropriate lens design and spatial light modulator (SLM) resolution, diffraction limited performance could be achieved across a wide field of view (FOV), and that diffraction effects could be reduced by choosing the minimum SLM resolution needed to correct the wavefront error. A dynamic dual-foveated imaging system in the infrared band was developed for two fields of interest (FOIs), wherein a transmissive SLM was used as an active optical element placed near the image plane instead of the pupil plane to separate the rays of two different selected fields [8]. Compared to conventional imaging systems, a dual-foveated imaging approach simplifies the wide-FOV optical system by using an SLM, which also decreases the data needed for processing and transmission. A foveal infrared focal plane array (FPA) was developed having a higher spatial frequency of pixel channels near the center of attention (COA) and a radially symmetric spatial frequency diminishing radially out from the COA [9]. This architecture allowed for coverage of the entire field of view at high frame rates by dynamically forming larger superpixels on the FPA in regions of relative unimportance, thus reducing the total number of pixel values required to be multiplexed off the FPA [10]. A design study of various tradeoffs related to axial and off-axis foveation approaches addressed solutions to overcome problems such as aberrations and distortions [11]. The study suggested that a reimaging relay lens was the best solution to add the required correction and maintain the required focal length. An alternate approach was presented for an optical surveillance system featuring a panomorph lens operating over a 360-degree area with 100% coverage, yielding enhanced performance with a better pixel/cost ratio [12].
In such a system, the panoramic image was able to follow events (such as moving objects) in real time, thereby allowing operators to focus their activities on a narrow-field pan/tilt camera without losing any information in the field. An outline for target identification capabilities for a given number of pixels per target was also presented. A conceptual stereo-foveation model was presented to bring the benefits of binocular foveation to photogrammetry, along with an implementation of it for anaglyph imaging. Foveating stereoscopic images provides several advantages in bandwidth and hardware resources related to real-time processing and data transmission, as well as in visualization of images with large datasets [13].
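The superpixel strategy described above for the foveal FPA, i.e., full resolution near the center of attention and coarser sampling further out, can also be emulated in post-processing. The sketch below is an illustrative approximation in NumPy, not the cited FPA architecture; the function name, ring radii, and tile sizes are all assumptions chosen for demonstration:

```python
import numpy as np

def foveate_superpixels(img, cx, cy, rings=((40, 1), (100, 2), (np.inf, 4))):
    """Block-average a grayscale image with a tile size that grows with
    distance from the center of attention (cx, cy). `rings` is a sequence
    of (outer_radius, tile_size) pairs, innermost ring first."""
    out = img.astype(float).copy()
    h, w = img.shape
    largest = rings[-1][1]

    def tile_size(r):
        for outer, size in rings:
            if r < outer:
                return size
        return largest

    for y0 in range(0, h, largest):          # walk cells of the largest tile
        for x0 in range(0, w, largest):
            r = np.hypot(x0 + largest / 2 - cx, y0 + largest / 2 - cy)
            t = tile_size(r)
            # subdivide the cell into t x t super-pixels and average each
            for y1 in range(y0, min(y0 + largest, h), t):
                for x1 in range(x0, min(x0 + largest, w), t):
                    tile = (slice(y1, min(y1 + t, h)),
                            slice(x1, min(x1 + t, w)))
                    out[tile] = img[tile].mean()
    return out
```

Pixels inside the innermost ring pass through unchanged (tile size 1), while outer regions are replaced by 2 × 2 and 4 × 4 block means, reducing the effective data rate radially, in the spirit of the FPA superpixel concept.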


In log-polar sampled images, lines are distorted in a non-linear way, and their distances are not preserved. To overcome this difficulty, the geometric properties of lines in log-polar coordinates have been studied, and efficient computational methods have been derived [14]. Converting from a log-polar plot to a Cartesian grid can be performed using computer software with a set of equations so that the location of a pixel in the log-polar space will match up to its location in the Cartesian grid. This method allows for a single image that will show both the far field and the near field, with some errors. It has also been shown that log-polar foveation and multiscale local spectral analysis can be regarded as similar processes, taking place in conjugated domains, and being related through a Fourier transform [15].

A wavelet-based scalable foveated video coding system was developed to assist foveation on the visually important components in a video sequence. By using an adaptive frame prediction algorithm, better frame prediction was achieved while controlling error propagation [16]. A biologically motivated algorithm was developed to select visually-salient regions of interest in video streams for multiply foveated video compression, wherein regions were selected based on low-level visual cues, followed by a dynamic foveation filter that increasingly blurred every frame based on its distance from salient locations [17]. An approach approximating the spatially variant properties of the human visual system was developed to maximize the sampling efficiency, information throughput, and bandwidth savings of the foveated system [18]. A three-channel imaging system which simultaneously captures multiple images having different magnifications and FOVs on an image sensor was developed, permitting different image processing algorithms to be implemented on different segments of an image sensor [19].
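The log-polar to Cartesian conversion described above reduces to a pair of coordinate maps: the radial axis is exponential going from log-polar to Cartesian and logarithmic in the other direction. A minimal sketch follows; the function names and the r_min cutoff are our own choices, and a real implementation would also resample pixel intensities onto the target grid:

```python
import numpy as np

def logpolar_to_cartesian(rho, theta, r_min=1.0):
    """Map a log-polar coordinate (rho, theta) to Cartesian (x, y),
    where rho is the log of the radius relative to the cutoff r_min."""
    r = r_min * np.exp(rho)
    return r * np.cos(theta), r * np.sin(theta)

def cartesian_to_logpolar(x, y, r_min=1.0):
    """Inverse map; undefined inside the central blind spot r < r_min."""
    r = np.hypot(x, y)
    return np.log(r / r_min), np.arctan2(y, x)
```

Because equal steps in rho correspond to multiplicative steps in radius, a uniform grid in (rho, theta) samples the Cartesian plane densely near the fixation point and sparsely in the periphery, which is exactly the foveation property exploited above.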
It was shown that an array of dissimilar lenslets, as opposed to identical lenslets, is capable of emulating a single conventional lens and providing high-resolution super-resolved imaging with a significant reduction in track length [20]. A similar result was obtained using a planar array of identical cameras combined with a prism array to achieve super-resolution reconstruction of a mosaicked image. Such an approach offers enhanced capability of achieving large foveal ratios from compact, low-cost imaging systems [21]. Recently, a novel approach to foveated imaging was developed based on dual-aperture optics that superimposes two images on a single sensor, thus attaining pronounced foveation with reduced optical complexity. With each image capturing the scene at a different magnification, the system can simultaneously process a wide field of view with high acuity in the central region and achieve enhanced object tracking and recognition over a wider field of view [22].

1.3. Motivation for Present Work

The motivation for the work presented herein is to develop optical systems for mounting on weapon and UAV platforms to detect and track humans and vehicles of interest for applications such as homeland, border, and coastal security. The goal is to develop real-time processing of high-resolution video streams by utilizing foveation approaches to assess scene information through multiple channels and track targets over the full range of possible scales and viewpoints, without significant loss of ability to classify the object being fixated upon. The implemented tracking and classification algorithms must be robust against scale, rotation, and viewpoint changes, and be valid for both static and moving platforms. This paper integrates preliminary versions of our work presented and published in conference proceedings [23,24]. Here, we provide further details and additional results on our approach to foveated imaging. This paper is structured as follows.
Section 2 presents a brief overview of the characteristics of the raptor eye which motivates the development of an adaptable foveated imaging system. Sections 3 and 4 address various aspects related to optical and electronic approaches to foveated imaging, respectively, and present a framework for further development and analysis. Finally, Section 5 presents conclusions and directions for future work in this area, with an emphasis on combining hardware and software approaches.


2. Raptor Eye Characteristics

Foveation in biological systems is the result of small localized depressions on the surface of the retina, as shown in Figure 1. These depressions naturally only affect a small range of field angles within the FOV of an eye because the lens of the eye acts to localize the ray bundles in the region close to the retina. This depression in the fovea of the raptors acts like a small, powerful negative element placed close to the imaging plane. This may be thought of as a telephoto lens where the negative element allows the focal length of the lens to be longer than the total track length of the lens, also shown in Figure 1. The longer focal length corresponds to a magnification of the object on the image plane in this region [25].

The cross-section of retinal tissue from a bird's eye, shown in Figure 1a, presents the layered organization of the cells wherein the photosensitive rods and cones are located near the back of the retina [26]. The optic nerve and several layers of interneurons lie in front of the receptor cells, and light passes through the overlying tissue to reach the light-sensitive layer. The deep cleft in the retina is a fovea, a region of high visual acuity. At the surface of the fovea, light is refracted in a pattern that tends to magnify the image projected on the photoreceptor cells.

In the raptor eye, there exists a concave portion of the foveal pit which functions as a negative lens which, together with the positive power of the cornea and lens, acts like the telephoto optical system of Figure 1b. The magnification m of the system is given by [2]

m = 1 + (s/R) × ((n_r − n_c)/n_c)    (1)

where n_c and n_r are the refractive indices of the medium to the left and right of the spherical cavity, respectively, R is the radius of curvature of the surface, and s is the distance from the apex of the spherical surface to the image plane.
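As a quick numeric illustration of Equation (1), the sketch below evaluates m for representative inputs; the specific numbers (image distance, radius of curvature, refractive indices) are illustrative assumptions, not measurements from [2]:

```python
def foveal_magnification(s, R, n_c, n_r):
    """Local magnification of the foveal surface per Eq. (1):
    m = 1 + (s / R) * ((n_r - n_c) / n_c)."""
    return 1.0 + (s / R) * ((n_r - n_c) / n_c)

# Illustrative values only (consistent length units assumed): a small
# index step across a strongly curved surface close to the image plane.
m = foveal_magnification(s=0.4, R=0.15, n_c=1.336, n_r=1.351)  # m slightly above 1
```

Even a small index difference yields m > 1 when s/R is large, which is why the deep foveal pit behaves like a weak telephoto element.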

Figure 1. (a) Microscope image of a fovea wherein the deep "v"-shape acts as a negative element in the lens system [19]; (b) Model of a telephoto lens system and a modification to illustrate its similarity to the fovea wherein the negative lens is replaced by a spherical (concave) surface [2].

Raptors have two foveae located in the side and forward regions of their FOV as shown in Figure 2. These foveae have different depths, which in turn correspond to different visual acuity in each of these regions. The forward looking fovea is the shallower of the two and is primarily used for focusing on near objects. Far away objects are viewed with the deep fovea located in the side region of the raptor eye FOV. The logic behind the placement of the shallow and deep foveae is related to the spiral flight pattern and hunting methods of raptors [27].


Figure 2. (a) Schematic of a generic raptor head showing the locations of the two foveae in the eye. The line of sight for the deep fovea is shown. The line of sight for the shallow fovea is pointed forward toward the beak of the raptor [27]; (b) Visual acuity as measured for a Chilean Eagle. The labeled regions of relatively higher acuity are the result of the foveae in the raptor eye. Human eyes have only one central peak [27].

Dual foveas in each eye allow the bird to see clearly along more than one visual axis. The axes for the frontally directed foveas lie within a narrow band of binocular vision. The additional foveas provide good monocular acuity at angles where the human eye offers only vague peripheral vision. The biological motivation for multiple foveated regions is clear. At one time or another, a predator such as a raptor must be able to view both near and far objects keenly. In a designed system, implementing such a feature in an application-specific manner could also see uses. For instance, a forward-looking wide-FOV camera on a vehicle may need to observe distant objects such as other vehicles as well as be able to acutely measure and interpret objects and patterns on the ground. The dual foveation exhibited by many raptor eyes motivates us, more generally, to consider a movable or adaptable deep foveation region, which will enable target tracking, etc.

3. Optical Approach to Foveated Imaging

3. OpticalThe Approach to Foveated Imaging optical approach to foveated imaging is covered in the following sub-sections, which cover an introduction to optical foveation, and sequential modeling in Zemax® .

The optical approach to foveated imaging is covered in the following sub-sections, which cover an introduction to optical foveation, and sequential modeling in Zemax®. 3.1. Optical Considerations Foveated imaging, broadly, involves spatially variant imaging around one or more fixation points [28]. The information content locally within an image in a foveated system may be primarily characterized by the broadly, effective magnification at thatvariant location imaging normalized to some maximum Foveated imaging, involves spatially around one or morevalue, fixation at the fixation point. Effective magnification may beinalternatively by distortion pointsusually [28]. The information content locally within an image a foveated described system may be primarily curves of an optical system. Distortion is defined as the deviation of the real image height on the image characterized by the effective magnification at that location normalized to some maximum value, plane from the theoretically ideal image height, both of which are determined by effective focal length. usually at the fixation point. Effective magnification may be alternatively described by distortion Figure 3 shows a plot of common projections relating image height to field angle.

3.1. Optical Considerations

curves of an optical system. Distortion is defined as the deviation of the real image height on the image plane from the theoretically ideal image height, both of which are determined by effective focal length. Figure 3 shows a plot of common projections relating image height to field angle.


Figure 3. Plot of common projections used in optical systems displayed as image height vs. field angle.

Ideal image height follows a linear relationship between the field angle in object space and the image height on the detector. In an afocal system, there is a linear mapping between angles in object and image space, respectively. Figure 4 shows a plot of one such target projection against an "ideal" F-theta projection. This was derived using plots of raptor acuity as a function of field angle. All optical systems deviate from the ideal case to some degree, and distortion is an aberration that is to be minimized or placed within some acceptable bounds [29].
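The common projections plotted in Figure 3 have compact closed forms. A sketch of the usual candidates follows (our selection; the paper's figure may include others), with focal length f in the same units as image height and field angle theta in radians:

```python
import math

def rectilinear(f, theta):    # "ideal" pinhole projection
    return f * math.tan(theta)

def f_theta(f, theta):        # equidistant (F-theta) projection
    return f * theta

def stereographic(f, theta):
    return 2.0 * f * math.tan(theta / 2.0)

def equisolid(f, theta):
    return 2.0 * f * math.sin(theta / 2.0)
```

All four agree at small angles and diverge in the wide field, which is why the choice of projection, and any deliberate deviation from it (i.e., distortion), controls where image-plane real estate is spent.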

Figure 4. Modeled raptor eye projection compared to an F-theta model, displayed as image height vs. field angle.


Distortion, D, in the range of a few percent is a typical value for imaging systems. Although distortion can have a few different definitions, for the purpose of this work we define distortion as follows, where y is the real image height, f_0 is the focal length of the system on axis, θ is the field angle, and f(θ) is the real focal length of the system for field angle θ:

D = ((y − f_0 θ)/(f_0 θ)) × 100% = ((f(θ) − f_0)/f_0) × 100%    (2)
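Equation (2) is straightforward to evaluate numerically. A sketch with a hypothetical field-dependent focal length follows; the cosine-squared falloff is an illustrative model only, not a fitted lens:

```python
import math

def distortion_percent(f_theta_val, f0):
    """Distortion per Eq. (2): D = (f(theta) - f0) / f0 * 100%."""
    return (f_theta_val - f0) / f0 * 100.0

# Hypothetical wide-angle lens: 10 mm on-axis focal length falling off
# with field angle, giving the large negative distortion typical of
# wide-angle systems.
f0 = 10.0
f_at = lambda theta: f0 * math.cos(theta) ** 2   # illustrative model only
D = distortion_percent(f_at(math.radians(45.0)), f0)  # -50% at 45 degrees
```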

From this, we see that positive distortion corresponds to a focal length longer than the on-axis focal length, and negative distortion corresponds to a focal length shorter than the on-axis focal length. For wide-angle optical systems, the distortion is necessarily negative and large in magnitude to accommodate all the field angles on the imaging detector. This requires a "shrinking" of the image in those fields.

3.2. Lens Design in Sequential Mode

As a test bed for the raptor eye projection, an afocal optical system was proposed for use in the field in either rifle-mounted or spotting applications. The term "afocal" refers to the fact that these systems condition the collected light rays for imaging by the human eye and therefore do not produce an image as their output. These systems do often have one or more intermediate image planes contained somewhere between the first and last optics. Afocal systems typically comprise a long focal length objective lens and a relatively short focal length eyepiece, both of which may consist of one or more individual lenses. The ratio between the focal length of the objective and the focal length of the eyepiece is the magnification of the system. The exit pupil for these systems is placed outside the lens system and is designed to be placed in the same location as the pupil of the user. Since the pupil of the user acts as a limiting aperture, the exit pupil is typically constrained to be around 7 mm in diameter to avoid wasting light. Furthermore, the distance between the rear surface of the optical system and the exit pupil is referred to as eye-relief, as it is the distance between the user's eye and the optical system when the system is in use. The apparent field of view (AFOV) is the field of view as perceived by the user when their eye is properly aligned with the exit pupil of the optical system.
This has a maximum value that is easily determined geometrically using

AFOV = 2 tan⁻¹(A/(2 d_e))    (3)

where d_e is the eye-relief and A is the aperture of the rearmost optic in the eyepiece. It was decided that the final product should have a wide apparent field of view, as in the raptor eye. Therefore, for the sake of simplifying the problem, a spotting scope design was favored over the riflescope, because riflescopes require large eye-relief to account for the recoil of the gun after shooting. This constraint on the eye-relief coupled with a large apparent field of view necessitates large optics in the eyepiece, as per the equation above. On the other hand, a spotting scope may have minimal eye-relief and is therefore easier and less costly to build with a large apparent field of view. A projection function for ultra-wide-angle lenses has been developed where the central portion of the field of view of a spotting scope has a different magnification from that of the periphery in order to maintain a wide-angle view while maximizing visibility in the central region [30]. Further simplifying the design process, we sought to incorporate an eyepiece which was already available off the shelf. A short focal length coupled with a large aperture increases the number of aberrations in an optical system. An eyepiece which is well corrected for spherical, comatic, and chromatic aberrations and field curvature can incorporate as many as eight or more lenses. A wide variety of well-corrected eyepieces for telescopes are available from vendors. An Erfle eyepiece, shown in Figure 5, is a well-known solution with a wide apparent field of view [29]. This reduces the complexity of the optical design problem and moves the focus toward the development of an objective lens which incorporates our raptor eye projection.
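Equation (3), together with the objective-to-eyepiece focal length ratio that defines the magnification of an afocal system, can be sketched as follows; the aperture and eye-relief values are illustrative, not taken from a specific eyepiece:

```python
import math

def apparent_fov_deg(A, d_e):
    """Maximum apparent field of view per Eq. (3), in degrees, for
    eyepiece aperture A and eye-relief d_e (same length units)."""
    return math.degrees(2.0 * math.atan(A / (2.0 * d_e)))

def afocal_magnification(f_objective, f_eyepiece):
    """Magnification of an afocal system: ratio of focal lengths."""
    return f_objective / f_eyepiece

# Illustrative: a 30 mm rear eyepiece aperture with 20 mm of eye-relief.
afov = apparent_fov_deg(A=30.0, d_e=20.0)   # about 73.7 degrees
```

Increasing the eye-relief at a fixed aperture shrinks the apparent field, which is exactly the riflescope-versus-spotting-scope tradeoff discussed above.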


Figure 5. Construction of an Erfle eyepiece [29].

The field of view of the system was narrowed to more easily accommodate a large magnification The field of view of the system was narrowed to more easily accommodate a large magnification with the custom distortion curve. For a system with minimum distortion, the magnification across with the custom distortion curve. For a system with minimum distortion, the magnification across the field of view is roughly equal to the ratio of the apparent field of view from the eyepiece and the the field of view is roughly equal to the ratio of the apparent field of view from the eyepiece and the actual field of view of the objective lens. In such systems, the field of view of the objective must be actual field of view of the objective lens. In such systems, the field of view of the objective must be less less than that of the eyepiece or demagnification occurs. Of course, in the scenario of a custom than that of the eyepiece or demagnification occurs. Of course, in the scenario of a custom distortion distortion curve, local regions of high magnification could be obtained. However, such regions would curve, local regions of high magnification could be obtained. However, such regions would require require large variations in the distortion pattern as it is a “give-take” scenario. That is, each region of large variations in the distortion pattern as it is a “give-take” scenario. That is, each region of high high magnification must be compensated with demagnification in some other region of the field of magnification must be compensated with demagnification in some other region of the field of view to view to maintain an average magnification corresponding to the ratio of the apparent field of view maintain an average magnification corresponding to the ratio of the apparent field of view and the and the actual field of view of the system. After establishing a method for the design of an optical actual field of view of the system. 
After establishing a method for the design of an optical system with the desired characteristics for a narrow field of view, the same approach can be extended to a wider field of view system with some modifications if necessary.

A similar optical design for a small spotting scope with an axially located foveal region was presented and discussed [31,32]. Their objective lens design is shown in Figure 6, along with an accompanying spot diagram. Note that the scale of the spot diagram is 4 mm and the performance of this system is not particularly good, although it is difficult to tell without the ray fan plots. Note also the large exit angles of the extreme rays on the second and eighth surfaces. These large angles contribute to the spherical aberration contribution from each surface. In this design, as with others, two aspheric surfaces are employed before and after the aperture stop of the system.
In this manner, the focal length is locally lengthened on the left side of the aperture stop and the field curvature is compensated for by the aspheric surface on the right side of the stop. The controlled variation of the distortion curve of such a design can be attributed in part to the small aperture stop of the system. This causes the entrance pupil to translate across the surface of the front lens, ensuring that a given field experiences only the local lens curvature. This means that for a highly aspheric surface, such as the one in the figure, two fields of view may "see" vastly different powers of the same front optic.

After exploring research in similar optical systems, it was decided that a four-lens system with the aperture stop in the middle would be a suitable starting point for the design of an objective. Having run some tests with a four-lens objective, it was decided that more degrees of freedom were necessary to reduce aberration and converge upon the desired solution. Hence, a six-element objective was run through a Zemax macro. These macros have access to many of the low-level features offered in Zemax and act as a powerful tool in lens development.
While the macro developed for the raptor eye system plays the key role in acquiring the custom distortion pattern sought after, a good first-order model is required as a starting point for the system. The macro does not design the system by itself, and without a good starting point it can easily converge on unrealistic or otherwise poor solutions. The starting point for this optimization is shown in Figure 7.

Figure 6. (a) Wide-angle foveated objective presented by Shimizu et al. [32]; (b) Corresponding spot diagram for a number of field positions. Full scale on each subplot is 4 mm.

Figure 7. Starting point for spotting scope objective optimization.

The macro developed for use with the raptor eye projection was iterative on several levels. The main portion of the macro runs five times, and the nested portion inside of that runs 20 times. For each iteration of the nested portion of the code, a piecewise polynomial is used to determine the target distortion for each of 18 field coordinates at three wavelengths each. The target distortion is weighted by the remaining number of iterations and the difference between the current distortion at that field coordinate and wavelength and the desired coordinate. Once the distortion and field curvature targets are updated, a damped least squares optimization is run, followed by a hammer optimization. This is repeated 20 times to converge upon the target distortion curve. Subsequently, the main for-loop is invoked and the distortion curve is scaled up by a set amount while the same shape is maintained.

It was found that this method converged more readily on a solution with the intermediate "low spot" in the distortion curve. Immediately setting the targets at the full-scale distortion would result in a flat curve in that region. This macro simultaneously defines edge and center thickness parameters, as well as first-order parameters such as effective focal length. Figure 8 shows a result of the optimization procedure with the corresponding distortion plot.


Figure 8. (a) Result of the iterative optimization process; (b) Resultant field curvature and distortion curves.
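The nested iteration scheme described above can be sketched in Python. This is only a schematic analogue of the Zemax macro: the damped least squares and hammer optimizations are replaced by a stand-in update, the three wavelengths are omitted, and the target distortion curve is an arbitrary illustrative shape:

```python
import numpy as np

def desired_distortion(field):
    """Illustrative target distortion curve with a central foveal bump;
    the actual piecewise-polynomial curve is design-specific."""
    return 20.0 * np.exp(-(field ** 2) / 0.05) - 5.0 * field ** 2

def run_macro(n_outer=5, n_inner=20, n_fields=18):
    fields = np.linspace(0.0, 1.0, n_fields)
    current = np.zeros(n_fields)              # distortion of the starting design
    full_target = desired_distortion(fields)  # full-scale target curve
    for outer in range(1, n_outer + 1):
        # Outer loop: scale the target up in steps while keeping its shape,
        # so the optimizer is not asked for the full distortion immediately.
        scaled_target = full_target * (outer / n_outer)
        for inner in range(1, n_inner + 1):
            remaining = n_inner - inner + 1
            # Weight each field's update by the remaining iterations and by
            # how far the current distortion is from the desired value.
            step = (scaled_target - current) / remaining
            # Stand-in for the damped-least-squares + hammer optimization:
            current = current + step
    return current, full_target

current, target = run_macro()
print(np.max(np.abs(current - target)))  # converges to (near) zero after 5x20 passes
```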

3.3. Issues Related to Optical Approach to Foveated Imaging

Large nonlinearities in the distortion plots represent large changes in the information content in regions of an image. Whereas distortion is an acceptable quality in foveating lens systems, other aberrations are undesirable and affect image quality. It is difficult to induce large nonlinearities in the distortion profile without introducing other undesirable aberrations such as coma or astigmatism.

In general, the foveation induces spatially constant blur throughout the area outside the foveal zone. The blurring of an image is the result of the convolution of the point spread function of the optical system with the object-space input. The transition region between the central region of the simulated image and the area outside the foveal zone has nonuniform blur. Correcting the spatially variant blur induced by the foveating optic is critical. Two types of blur occur: one due to aberration (such as lack of focus), and the second due to motion and time variation.
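To make the notion of spatially variant blur concrete, the following toy model (an illustrative sketch, not the simulation used in this work) applies a Gaussian point spread function whose width grows with distance from the optical axis, mimicking a sharp fovea with increasing blur toward the periphery:

```python
import numpy as np

def spatially_variant_blur(img, sigma_center=0.3, sigma_edge=2.0):
    """Blur an image with a Gaussian PSF whose width grows with distance
    from the optical axis: nearly sharp at the fovea, strongly blurred at
    the edge of the field. Brute-force per-pixel gather, for small images."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cy, cx)
    y, x = np.mgrid[0:h, 0:w]
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            r = np.hypot(i - cy, j - cx) / r_max          # normalized eccentricity
            sigma = sigma_center + (sigma_edge - sigma_center) * r
            # Local Gaussian PSF, normalized so flux is preserved.
            psf = np.exp(-((y - i) ** 2 + (x - j) ** 2) / (2 * sigma ** 2))
            out[i, j] = np.sum(psf * img) / np.sum(psf)
    return out

img = np.zeros((33, 33))
img[::4, ::4] = 1.0                    # grid of point sources
blurred = spatially_variant_blur(img)  # central points stay sharp, corners smear
```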
Deblurring approaches often incorporate iterative methods with a variety of different techniques to more quickly converge on an acceptable solution. It must be said, however, that the final solution of these algorithms can never be as clear as the original uncorrupted image [33]. Balancing the amount of acceptable image blur within particular regions of the FOV and the magnification of the foveated region is a design tradeoff.

Solutions have been suggested for solving image blur due to object or camera motion using an underexposed image and an image with the desired exposure time. The underexposed image has excessive amounts of noise but is not corrupted by the blur from motion. The data contained within this image are then used to help solve the deblurring problem in the image with the correct exposure time [34].

4. Electronic Approach to Foveated Imaging

The purely optical approach to foveating has several drawbacks, including static location of fixation points in the field of view of the imaging system, inability to dynamically adjust foveation parameters, and complexity of the optical system.

Electronic foveation involves downsampling an input image using software on the back end of a traditional image capture system, or even as a means to further augment the optical systems outlined in Section 3. This eliminates many of the drawbacks of the optical approach, with the downside of increased processing complexity in the front end of the processing system. This could be mitigated with dedicated hardware for the foveating preprocessing system. Furthermore, electronic foveation allows for arbitrary arrangement of receptive fields in the downsampling step [35,36], which is somewhat similar to the processing in the raptor's brain.

Electronic foveation is discussed in the following sub-sections covering the problem of scale, receptive field layout, and our approach to downsampling. Figure 9 illustrates the effect of a foveating transform on an input image from the open source Caltech 256 database [37].



Figure 9. Example of foveated image. (a) Input (full-resolution) image; (b) Foveated image using log-polar receptive field layout.

4.1. Scale Issues in Foveated Image Processing

Image processing applications seeking to identify targets of interest within a field of view have an intrinsic problem of scale [38–40]. Scale in this context refers to the size of the object as recorded by the image acquisition system. With no a priori information regarding the expected scale of the objects of interest, all scales must be searched independently [39]. Recent approaches to this problem involve the generation of a scale space, whereby an input image is decomposed into multiple levels of scale, each of which is searched for constant-sized objects of interest. For efficiency, this is typically done using a cascade approach with Gaussian or Haar filters.

Foveated approaches differ from space-invariant approaches in that the spatial support of each scale in the derived scale space increases with larger scale. Figure 10 illustrates this varying support [41].
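As an illustration of the cascade approach, a conventional (space-invariant) Gaussian scale space can be generated by repeatedly blurring and subsampling. The sketch below is a minimal NumPy version; a real detection pipeline would typically add intermediate scales within each octave:

```python
import numpy as np

def gaussian_pyramid(img, levels=4):
    """Cascade scale-space generation: each level is the previous one
    blurred with a small binomial (Gaussian-like) kernel and subsampled
    2x, so each level can be searched with a constant-sized window."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # binomial approx. to a Gaussian
    scales = [img.astype(float)]
    for _ in range(levels - 1):
        cur = scales[-1]
        # Separable blur: filter rows, then columns.
        cur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, cur)
        cur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, cur)
        scales.append(cur[::2, ::2])                # subsample by 2
    return scales

pyr = gaussian_pyramid(np.random.rand(64, 64))
print([s.shape for s in pyr])  # [(64, 64), (32, 32), (16, 16), (8, 8)]
```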
The log-polar transform represents the outer shell of a foveal scale space that may then be derived from the log-polar image. The log-polar transform has been used in correlation-based approaches for object recognition with some success [42]. In receptive field foveation, scale refers to the size of the receptive fields. In traditional imaging applications with space-invariant receptive fields (pixels), the native scale correlates to the size of the pixel and relates to real-world object height through the focal length of the imaging system.
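A minimal nearest-neighbour implementation of the sampling underlying the log-polar transform might look as follows (the grid sizes and the r_min parameter are illustrative choices, not values from the cited work):

```python
import numpy as np

def log_polar(img, n_rings=32, n_wedges=64, r_min=1.0):
    """Map a Cartesian image onto a log-polar grid: ring spacing grows
    exponentially with eccentricity, giving fine sampling at the fixation
    point (fovea) and coarse sampling in the periphery."""
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    # Ring radii spaced logarithmically between r_min and r_max.
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    thetas = 2 * np.pi * np.arange(n_wedges) / n_wedges
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]  # nearest-neighbour sampling; shape (n_rings, n_wedges)

lp = log_polar(np.random.rand(128, 128))
print(lp.shape)  # (32, 64)
```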

Figure 10. Illustration of varying spatial support across scales in foveal scale space [41].

Generating a foveal scale space implicitly prioritizes "near" scales over "far" scales [39,41]. That is, the scale space where one would expect near objects to appear would have larger receptive fields and correspondingly larger support. In this framework, the information lost through the foveal compression is essentially "missing pixels" from the different levels of scale. By generating a foveal scale space, the high-priority scale content of a scene is extracted from the raw pixel input. With no a priori information about the scene, the priority of the content is simply inversely proportional to some distance measure from the current fixation point.
In a case where a target has already been identified or is currently being tracked, there is a priori information about the scene. In this case, the fixation point can follow the object of interest at any particular scale while nearer scales are still observed and searched with full support. Figure 11 illustrates some scale levels from a foveal scale space derived using the log-polar transform [24].

Figure 11. Illustration of a foveal scale space generated using the log-polar transform. (a) Image in "native" Cartesian coordinates, wherein the shrinking of the image from left to right illustrates how it moves towards the center of the image through scale space; (b) Foveal region corresponding to images in (a), showing how the support of the foveal region grows with increasing scale.


4.2. Receptive Field Layout

The receptive field model for foveation follows biologically motivated models where cells in the retina have overlapping support with Gaussian profiles. This equates to placing Gaussian mask functions on an image in a desired pattern in order to downsample the image to produce an output with lower dimensionality [43]. Biologically motivated models from human vision follow the log-polar layout, where the receptive field density decreases radially and the spatial support of each receptive field increases linearly with eccentricity, or distance from the foveal center. This differs from the bi-foveated vision of raptors in that it does not account for multiple foveae in the eye of the raptor. By introducing multiple channels of foveated downsampling, each following a desired high-resolution receptive field, the content from disjointed regions in the image may be maintained. The log-polar transform using Gaussian receptive fields is just one pattern of receptive field layout and is wholly determined by three parameters: number of wedges, minimum receptive field size, and overlap of the receptive fields. Figure 12 illustrates this layout of receptive fields. Alternatively, a Cartesian layout may be used to simplify post-transform processing.

Figure 12. Log-polar layout of receptive fields with the periphery colored red and fovea colored blue.
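Given those three parameters, receptive field centers and widths can be generated with a few lines of NumPy. The sketch below is illustrative: the particular relation chosen between overlap and ring spacing is a modeling assumption, not the formulation used in the cited work:

```python
import numpy as np

def log_polar_rf_layout(n_wedges=48, n_rings=20, rf_min=1.0, overlap=0.5):
    """Centres and widths of Gaussian receptive fields in a log-polar
    layout: RF width grows linearly with eccentricity, and rings are
    spaced so that neighbouring RFs overlap by roughly `overlap`."""
    a = 2 * np.pi / n_wedges            # angular pitch sets the radial growth rate
    r0 = rf_min / a                     # innermost eccentricity (width rf_min there)
    growth = 1.0 + a * (1.0 - overlap)  # radial step between successive rings
    radii = r0 * growth ** np.arange(n_rings)
    thetas = 2 * np.pi * np.arange(n_wedges) / n_wedges
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    centers = np.stack([rr * np.cos(tt), rr * np.sin(tt)], axis=-1)
    sigmas = a * rr                     # RF width grows linearly with eccentricity
    return centers.reshape(-1, 2), sigmas.ravel()

centers, sigmas = log_polar_rf_layout()
print(centers.shape, sigmas.shape)  # (960, 2) (960,)
```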

In either choice of receptive field layout, it is desirable to generate a scale space where each scale has a constant relation between pixels at that scale. This could be done either by first maximally downsampling the input image with the minimum number of Gaussian receptive fields required to generate the desired scale space and then deriving the scale space from that reduced image, or by generating each level of the scale space individually from the original input image. Owing to the irregularity of the placement of the receptive fields, the latter option is less suitable for image processing applications due to artifacts in the derived scale space. This is because the centers of the desired Gaussian receptive fields are not aligned throughout the image.
Generating the foveal scale space directly from the input image requires a greater number of operations to be performed on the relatively large number of input pixels, but if done in a parallel fashion in hardware, could still serve to generate a low-dimensional scale space suitable for fast sequential operations. Given the input resolution requirements of modern state-of-the-art detectors, the benefits of generating a foveal scale space only become clear when dealing with very high-resolution input images. This is because the sliding window approach to detection requires a 'scale slice' to be at least as large as, and preferably larger than, the resolution of the detection window.

The direct Cartesian approach to foveated downsampling is appealing due to the output being readily compatible with modern detectors—that is, it is already a Cartesian image. With the log-polar method, it is necessary to interpolate the pixel space in the log-polar scale space domain back onto a Cartesian grid or develop more complex methods for handling irregularly packed pixels.
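Mapping a log-polar image back onto a Cartesian grid can be done by inverting the exponential ring spacing with nearest-neighbour lookup. This is a sketch under the assumption of exponentially spaced rings; production code would interpolate rather than snap to the nearest sample:

```python
import numpy as np

def inverse_log_polar(lp, out_h, out_w, r_min=1.0):
    """Resample a log-polar image (rings x wedges) back onto a Cartesian
    grid with nearest-neighbour lookup, so the result can be fed to a
    standard Cartesian detector."""
    n_rings, n_wedges = lp.shape
    cy, cx = out_h / 2.0, out_w / 2.0
    r_max = min(cy, cx)
    y, x = np.mgrid[0:out_h, 0:out_w]
    r = np.hypot(y - cy, x - cx)
    theta = np.mod(np.arctan2(y - cy, x - cx), 2 * np.pi)
    # Invert the exponential ring spacing: ring index from log-radius.
    ring = np.log(np.maximum(r, r_min) / r_min) / np.log(r_max / r_min)
    ring = np.clip((ring * (n_rings - 1)).astype(int), 0, n_rings - 1)
    wedge = np.mod((theta / (2 * np.pi) * n_wedges).astype(int), n_wedges)
    return lp[ring, wedge]

cart = inverse_log_polar(np.random.rand(32, 64), 128, 128)
print(cart.shape)  # (128, 128)
```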


4.3. Foveated Downsampling

Computational approaches to foveated downsampling must be fast enough to warrant application of foveation as opposed to more traditional convolution approaches to scale space generation. Real-time operation may be implemented in hardware using field programmable gate array (FPGA) technology. The downsampling approach can be considered to be a form of foveated image compression.

Since the foveation operation is linear, a sparse matrix transform may be used where each row of the transformation matrix consists of coefficients pertaining to each individual Gaussian receptive field. This transformation matrix is very sparse but of very large dimensionality. The number of columns in the transformation matrix corresponds to the number of pixels in the input image (typically in the millions), and the number of rows corresponds to the number of receptive fields in the output (typically in the thousands).
In the direct Cartesian approach to foveation, the number of receptive fields is simply the number of desired scales times the number of receptive fields in each scale.

The transform is implemented as a matrix multiplication in which the number of columns of the transformation matrix K equals the number of pixels in the input image, and the number of rows equals the total number of receptive fields (RFs), each corresponding to a pixel in the peripheral and foveal regions of the transformed image. This is shown in Equation (4), where I_C is the input (Cartesian) image vector and I_LP is the output image vector:

I_LP = K I_C    (4)

Figure 13 shows the structure of the transform matrix as it relates to the two regions in the output vector, peripheral and foveal, respectively. Each Gaussian RF has support across the entire image, but most of these values are very small.
increase transformation matrix and greatly image, but most of these values areTo very small.the To sparsity increase of thethe sparsity of the transformation matrix reduce the required storage size and computation time of the transform, this matrix is implemented and greatly reduce the required storage size and computation time of the transform, this matrix is using a sparse using matrixa sparse construction thresholding the values of the each Gaussian RFGaussian and setting implemented matrixby construction by thresholding values of each RF the and coefficients below the desired threshold zero. setting the coefficients below the desiredtothreshold to zero.

Figure 13. Illustration of the organization of a log-polar foveating transformation matrix.

Feature detection requires searching for desired features at each level of scale [34,36]. This could be done natively in the transformed domain consisting of peripheral and foveal regions using different sized feature detection masks, or it could be done by transforming the image into a series of Cartesian-like images, each corresponding to a different level of scale representation, as in the standard implementation of scale space feature detection. This paper takes the latter approach, and a method was developed to perform diffusion for this layout of RFs (for the current effort, the diffusion will be isotropic, but this approach will enable future exploration of anisotropic diffusion, possibly better suited to specific targets). While a closed form solution for diffusion in the peripheral and foveal regions could be implemented, this implementation of the transform matrix does not incorporate a closed form solution for the locations of the RFs in the foveal region. Due to the lower dimensionality of the transformed domain, it is desirable to perform the diffusion operation natively in that domain.

Equation (5) shows the desired operation, in which a foveal image is derived from a base foveal image by iterative multiplication by some diffusion matrix, where I_LP(N) is the derived foveal image, I_LP(0) is the base foveal image, A is the diffusion matrix, and N is the number of multiplications:

I_LP(N) = A^N I_LP(0)    (5)

To derive one level of scale from another in the foveal domain, a diffusion matrix is defined by computing the inner product between two transform matrices, each at a different level of scale, or minimum RF size. This approximates inverting the first operation with the transpose of the transformation matrix and then applying the second transformation matrix. The inner product computes the overlap of each of the second transformation matrix's RFs with each RF of the first transformation matrix. The resultant diffusion matrix is square and maps a log polar domain image into another log polar domain image of equal size. Equation (6) shows the computation of this diffusion matrix, where K_σ1 and K_σ2 are the first and second transform matrices, respectively. The difference between σ1 and σ2, which are the minimum RF sizes for each transformation matrix, defines the size of the step through scale space that A will realize. The operation rownorm{·} forces the rows of the step matrix A to have elements which sum to 1:

A = rownorm{ K_σ2 [K_σ1]^T }    (6)
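Equations (5) and (6) can be exercised numerically under simplifying assumptions. Here `gauss_rows` is a hypothetical one-dimensional stand-in for the log-polar transform matrices, in which only the minimum RF size differs between the two transforms, as in the construction described above:

```python
import numpy as np

def gauss_rows(n_rf, n_pix, sigma):
    # hypothetical 1-D stand-in for a transform matrix: n_rf Gaussian RFs
    # of width sigma spaced along an n_pix "image", with rows summing to 1
    x = np.arange(n_pix)
    centers = np.linspace(0.0, n_pix - 1.0, n_rf)
    K = np.exp(-(x[None, :] - centers[:, None]) ** 2 / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def diffusion_matrix(K1, K2):
    """Equation (6): A = rownorm{ K2 K1^T } -- inner products of each RF of
    the coarser transform K2 with each RF of the finer transform K1."""
    A = K2 @ K1.T
    return A / A.sum(axis=1, keepdims=True)   # rownorm{.}

K_s1 = gauss_rows(16, 256, sigma=2.0)   # finer minimum RF size (sigma_1)
K_s2 = gauss_rows(16, 256, sigma=3.0)   # coarser minimum RF size (sigma_2)
A = diffusion_matrix(K_s1, K_s2)        # square step matrix, 16 x 16

I_lp0 = K_s1 @ np.random.rand(256)              # base foveal image, I_LP(0)
I_lpN = np.linalg.matrix_power(A, 3) @ I_lp0    # Equation (5) with N = 3
print(A.shape, I_lpN.shape)                     # (16, 16) (16,)
```

As the text states, A is square (it maps one log polar domain image to another of equal size), and the row normalization guarantees each row sums to 1, so repeated application diffuses without changing the overall signal level.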

Figure 14 shows the sparsity of the diffusion matrix. Blue values in this figure correspond to nonzero entries in the diffusion matrix, and white values are areas containing 0 valued elements. This matrix is square and has a large number of elements, but the sparse nature of the matrix allows for rapid implementation of the matrix multiplication. Extraction of the foveal region of each log polar scale space image is done by applying the inverse of only the foveal section of the log polar transformation matrix. The foveal region of the transformation matrix is roughly orthogonal, and the inverse is approximated as the transpose of the foveal section of the transformation matrix. This is done after each diffusion operation and results in a number of slices of the scale space representation of the original Cartesian image, where the support of each slice is limited by the support of the foveal region corresponding to the slice in the log polar domain. The result is a number of slices, each with an equal number of pixels and each with increasing spatial support, which is related to resolution.
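The transpose-as-approximate-inverse step can be checked numerically. The construction below is a hypothetical one-dimensional stand-in (`K_fov`, sizes, and RF spacing are illustrative, not the paper's layout) in which the foveal rows are made nearly orthonormal, so applying the transpose nearly inverts the foveal section:

```python
import numpy as np

n_fov, n_pix = 64, 1024
x = np.arange(n_pix)
centers = np.linspace(8.0, n_pix - 8.0, n_fov)   # well-separated RF centers
K_fov = np.exp(-(x[None, :] - centers[:, None]) ** 2 / (2.0 * 1.5 ** 2))
K_fov /= np.linalg.norm(K_fov, axis=1, keepdims=True)   # unit-norm rows

I_fov = K_fov @ np.random.rand(n_pix)   # foveal coefficients of one slice
slice_cart = K_fov.T @ I_fov            # transpose used as approximate inverse

# the round trip stays close because K_fov K_fov^T is nearly the identity
err = np.linalg.norm(K_fov @ slice_cart - I_fov) / np.linalg.norm(I_fov)
print(err < 1e-6)   # True
```

When the foveal RFs overlap more strongly, the rows are no longer orthogonal and the transpose becomes only a rough inverse, which is why the text describes the foveal section as "roughly" orthogonal.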

Figure 14. Illustration of the sparse structure of the transform matrix. Blue areas correspond to non-zero coefficients.


The fact that the second transformation matrix maps the image into a lower scale representation relative to the first transformation matrix means that the diameter of the foveal region in the lower scale log polar image is larger than the diameter of the foveal region in the first log polar image. The nature of the computation of the centers of each RF in the foveal region means that each region contains the same number of RFs. Accordingly, there is an implicit shift within the log polar domain, and zeros are introduced along the outer border of the periphery, shrinking the support of the log polar representation with each subsequent diffusion operation. Figure 15 illustrates the overlap between two transformations with different minimum RF sizes. Figure 15a shows the layout of RFs with some minimum RF size and number of wedges. These values were chosen to make the concept easier to visualize; in practice, the number of wedges in the transform would be much larger. Figure 15b shows two overlapping transforms. The circles corresponding to each RF are colored differently for each transform to help contrast the two separate RF layouts. The only difference between these two transformations is that the minimum RF size for the blue transform is slightly larger than the minimum RF size for the red transform. This is necessary to achieve the additional benefit of scaling or shrinking the image with each application of the diffusion matrix, which in turn is necessary to increase the support of the foveal region pixels with each application.


Figure 15. (a) Illustration of the layout of receptive fields (RFs) for a transformation with a small number of wedges; (b) the same transformation shown in (a) (in red) with an additional transformation (in blue) whose only difference is an increase in the minimum RF size relative to that of the red transform. This illustrates the increase in the spatial support of the foveal region from one transform to the other.

Computation of the transformation matrices takes time but must only be done once in a case where the foveation parameters are fixed (another reason this approach was considered). The matrix can then be applied to the image in a time which is typically on the order of a few milliseconds. Steering of the fixation point can then be done by translating the input image prior to application of the transformation matrix. The vector output by the transformation operation must then be reshaped into a three-dimensional scale space corresponding to the parameters of the transformation. The result is a number of equal size scale space slices that may then be operated on to search for the desired objects of interest.
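The steering and reshaping steps described above can be sketched as follows. The matrix `K` here is a random stand-in for the precomputed transform, and the dimensions (`n_scales`, `n_rf`, the image size, and the helper `foveate`) are illustrative assumptions:

```python
import numpy as np

n_scales, n_rf, h, w = 4, 100, 32, 32
K = np.random.rand(n_scales * n_rf, h * w)   # stand-in for the precomputed transform

def foveate(image, K, fixation, n_scales, n_rf):
    # translate the image so the fixation point moves to the image center,
    # then apply the fixed transformation and reshape into scale slices
    dy = image.shape[0] // 2 - fixation[0]
    dx = image.shape[1] // 2 - fixation[1]
    shifted = np.roll(image, (dy, dx), axis=(0, 1))
    v = K @ shifted.ravel()
    return v.reshape(n_scales, n_rf)

slices = foveate(np.random.rand(h, w), K, fixation=(10, 20),
                 n_scales=n_scales, n_rf=n_rf)
print(slices.shape)   # (4, 100)
```

Because only the input image is translated, the (expensive) matrix construction is never repeated as the fixation point moves, which is the advantage the text describes.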


4.4. Issues Related to Electronic Approach to Foveated Imaging

Electronic foveated imaging has potential for use in practical applications, particularly where a large depth of scale is required and scenes have fixed geometry. For example, in automotive pedestrian detection it is necessary to identify "large" scale pedestrians near the vehicle as well as "small" scale pedestrians at a distance but close to the road. If using a common imaging sensor with a regular grid of pixels and setting the pixel pitch to match the necessary sampling for detection of the most distant pedestrian of interest, there will be many pixels residing in regions of the image where such high-resolution information is unnecessary. This is an example application where electronic foveation could be extremely useful as a preprocessing method to reduce the bit rate of information streaming from a video capture system that would otherwise be too large to process sequentially in real time. Hurdles in applying electronic foveation to video streams include developing hardware capable of performing the transform on high-resolution video streams in real time. Once that is achieved, the foveation parameters may be tuned to ensure that the resultant bit rate is compatible with the back-end sequential processing capabilities of the embedded system.

5. Conclusions

Foveation in image processing is a promising field of research. Loosening the restrictions of a space-invariant system means that resolution parameters can be optimized based on processing capabilities and goals without total loss of vision capabilities. Foveating only in the optical front end is challenging due to the cost and complexity of hardware as well as design issues related to image quality. Electronic foveation seems to be a promising complementary method capable of reducing bit rate while maintaining a usable depth of scale. Foveation parameters should be optimized based on the application of interest.
Using such an adaptable foveation approach (as opposed to a fixed dual foveation system like a raptor eye), a system with optimal parameters would dynamically change foveation parameters while tracking an object of interest to minimize excess information in the post-foveation bit stream. A relevant application of the raptor eye characteristics is the concept of foveated vision to allow efficient multi-scale processing of a large scene, and the ability to perform target detection and tracking within a polar coordinate frame. Such a concept could be integrated into and enhance the accuracy of real-time stereo-vision pedestrian detection algorithms [44]. Recently, a hybrid bionic image sensor (HBIS) was developed in which features of planar and curved artificial compound eyes as well as foveated vision are integrated to achieve FOV extension, super-resolution of the ROI, and a large foveal ratio [45]. In addition, Risley prisms providing accurate and fast beam steering were employed to imitate the movement of the fovea and generate sub-pixel shifts of sub-images for super-resolution reconstruction.

This paper has addressed important aspects related to foveated imaging from the perspectives of optics and electronics. While each approach has its own advantages and disadvantages, it is believed that a combination of both techniques will further spur the development of more versatile and flexible foveated imaging systems in the future.

Author Contributions: R.M.N. and T.J.K. conceived the concept, developed the approach, and performed the simulations; T.F.R. and M.J.T. provided logistical support and assisted with data analysis; all authors participated in data interpretation; R.M.N. wrote the first draft of the paper, and the other authors contributed to its final form.

Funding: This work was supported by the US Army Armament Research, Development and Engineering Center (ARDEC) Joint Service Small Arms Program (JSSAP) through Contract Number N00024-12-D-6404 (Delivery Order 0275).
DISTRIBUTION STATEMENT A applies: Approved for Public Release. Distribution is Unlimited.

Acknowledgments: We appreciate the assistance of Aaron Long in writing the computer programs and assisting with the data analysis.

Conflicts of Interest: The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.


References

1. Locket, N.A. Problems of deep foveas. Aust. N. Z. J. Ophthalmol. 1992, 20, 281–295. [CrossRef] [PubMed]
2. Snyder, A.W.; Miller, W.H. Telephoto lens system of falconiform eyes. Nature 1978, 275, 127–129. [CrossRef] [PubMed]
3. Reymond, L. Spatial visual-acuity of the eagle Aquila audax: A behavioural, optical and anatomical investigation. Vis. Res. 1985, 25, 1477–1491. [CrossRef]
4. Klemas, V.V. Remote sensing and navigation in the animal world: An overview. Sens. Rev. 2013, 33, 3–13. [CrossRef]
5. Murphy, C.J.; Howland, H.C. Owl eyes: Accommodation, corneal curvature and refractive state. J. Comp. Physiol. 1983, 151, 277–284. [CrossRef]
6. Melnyk, P.B.; Messner, R.A. Biologically motivated composite image sensor for deep-field target tracking. In Proceedings of the SPIE Conference on Vision Geometry XV, San Jose, CA, USA, 28 January–1 February 2007; pp. 649905-1–649905-8.
7. Curatu, G.; Harvey, J.E. Lens design and system optimization for foveated imaging. In Proceedings of the SPIE Conference on Current Developments in Lens Design and Optical Engineering IX, San Diego, CA, USA, 10–14 August 2008; pp. 70600P-1–70600P-9.
8. Du, X.; Chang, J.; Zhang, Y.; Wang, X.; Zhang, B.; Gao, L.; Xiao, L. Design of a dynamic dual-foveated imaging system. Opt. Express 2015, 23, 26032–26040. [CrossRef] [PubMed]
9. McCarley, P.L.; Massie, M.A.; Curzan, J.P. Large format variable spatial acuity superpixel imaging: Visible and infrared systems applications. In Proceedings of the SPIE Conference on Infrared Technology and Applications XXX, Orlando, FL, USA, 12–16 April 2004; pp. 361–369.
10. McCarley, P.L.; Massie, M.A.; Curzan, J.P. Foveating infrared image sensors. In Proceedings of the SPIE Conference on Infrared Systems and Photoelectronic Technology II, San Diego, CA, USA, 26–30 August 2007; pp. 666002-1–666002-14.
11. Bryant, K.R. Foveated optics. In Proceedings of the SPIE Conference on Advanced Optics for Defense Applications: UV through LWIR, Baltimore, MD, USA, 17–19 April 2016; pp. 982216-1–982216-11.
12. Thibault, S. Enhanced surveillance system based on panomorph panoramic lenses. In Proceedings of the SPIE Conference on Optics and Photonics in Global Homeland Security III, Orlando, FL, USA, 9–13 April 2007; pp. 65400E-1–65400E-8.
13. Çöltekin, A.; Haggrén, H. Stereo foveation. Photogramm. J. Finl. 2006, 20, 45–53.
14. Schindler, K. Geometry and construction of straight lines in log-polar images. Comput. Vis. Image Underst. 2006, 103, 196–207. [CrossRef]
15. Tabernero, A.; Portilla, J.; Navarro, R. Duality of log-polar image representations in the space and spatial-frequency domains. IEEE Trans. Signal Process. 1999, 47, 2469–2479. [CrossRef]
16. Wang, Z.; Lu, L.; Bovik, A.C. Foveation scalable video coding with automatic fixation selection. IEEE Trans. Image Process. 2003, 12, 243–254. [CrossRef] [PubMed]
17. Itti, L. Automatic foveation for video compression using a neurobiological model of visual attention. IEEE Trans. Image Process. 2004, 13, 1304–1318. [CrossRef] [PubMed]
18. Hua, H.; Liu, S. Dual-sensor foveated imaging system. Appl. Opt. 2008, 47, 317–327. [CrossRef] [PubMed]
19. Belay, G.Y.; Ottevaere, H.; Meuret, Y.; Vervaeke, M.; Van Erps, J.; Thienpont, H. Demonstration of a multichannel, multiresolution imaging system. Appl. Opt. 2013, 52, 6081–6089. [CrossRef] [PubMed]
20. Carles, G.; Muyo, G.; Bustin, N.; Wood, A.; Harvey, A.R. Compact multi-aperture imaging with high angular resolution. J. Opt. Soc. Am. A 2015, 32, 411–419. [CrossRef] [PubMed]
21. Carles, G.; Chen, S.; Bustin, N.; Downing, J.; McCall, D.; Wood, A.; Harvey, A.R. Multi-aperture foveated imaging. Opt. Lett. 2016, 41, 1869–1872. [CrossRef] [PubMed]
22. Carles, G.; Babington, J.; Wood, A.; Ralph, J.F.; Harvey, A.R. Superimposed multi-resolution imaging. Opt. Express 2017, 25, 33043–33055. [CrossRef]
23. Long, A.D.; Narayanan, R.M.; Kane, T.J.; Rice, T.F.; Tauber, M.J. Analysis and implementation of the foveated vision of the raptor eye. In Proceedings of the SPIE Conference on Image Sensing Technologies: Materials, Devices, Systems, and Applications III, Baltimore, MD, USA, 20–21 April 2016; pp. 98540T-1–98540T-9.

24. Long, A.D.; Narayanan, R.M.; Kane, T.J.; Rice, T.F.; Tauber, M.J. Foveal scale space generation with the log-polar transform. In Proceedings of the SPIE Conference on Image Sensing Technologies: Materials, Devices, Systems, and Applications IV, Anaheim, CA, USA, 12–13 April 2017; pp. 1020910-1–1020910-8.
25. Walls, G.L. Significance of the foveal depression. Arch. Ophthalmol. 1937, 18, 912–919. [CrossRef]
26. Waldvogel, J.A. The bird's eye view. Am. Sci. 1990, 78, 342–353.
27. Tucker, V. The deep fovea, sideways vision and spiral flight paths in raptors. J. Exp. Biol. 2000, 203, 3745–3754. [PubMed]
28. Traver, V.J.; Bernardino, A. A review of log-polar imaging for visual perception in robotics. Rob. Auton. Syst. 2010, 58, 378–398. [CrossRef]
29. Kingslake, R.; Johnson, R.B. Lens Design Fundamentals; Academic Press: Burlington, MA, USA, 2010; pp. 501–512.
30. Samy, A.M.; Gao, Z. Fovea-stereographic: A projection function for ultra-wide-angle cameras. Opt. Eng. 2015, 54, 045104-1–045104-8. [CrossRef]
31. Shimizu, S.; Hashizume, T. Development of micro wide angle fovea lens: Lens design and production of prototype. IEEJ J. Ind. Appl. 2013, 2, 55–60.
32. Shimizu, S.; Tanzawa, Y.; Hashizume, T. Development of wide angle fovea telescope. IEEJ J. Ind. Appl. 2014, 3, 368–373.
33. Nagy, J.G.; O'Leary, D.P. Restoring images degraded by spatially variant blur. SIAM J. Sci. Comput. 1998, 19, 1063–1082. [CrossRef]
34. Šorel, M.; Šroubek, F. Space-variant deblurring using one blurred and one underexposed image. In Proceedings of the 16th IEEE International Conference on Image Processing (ICIP'09), Cairo, Egypt, 7–10 November 2009; pp. 157–160.
35. Araujo, H.; Dias, J.M. An introduction to the log-polar mapping. In Proceedings of the 2nd IEEE Workshop on Cybernetic Vision, Sao Carlos, Brazil, 9–11 December 1996; pp. 139–144.
36. Bolduc, M.; Levine, M.D. A review of biologically motivated space-variant data reduction models for robotic vision. Comput. Vis. Image Underst. 1998, 69, 170–184. [CrossRef]
37. Griffin, G.; Holub, A.; Perona, P. Caltech-256 Object Category Dataset; Technical Report 7694; California Institute of Technology: Pasadena, CA, USA, 2007.
38. Koenderink, J.J. The structure of images. Biol. Cybern. 1984, 50, 363–370. [CrossRef] [PubMed]
39. Lindeberg, T. Scale-space theory: A basic tool for analyzing structures at different scales. J. Appl. Stat. 1994, 21, 225–270. [CrossRef]
40. Witkin, A. Scale-space filtering: A new approach to multi-scale description. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'84), San Diego, CA, USA, 19–21 March 1984; Volume 9, pp. 150–153.
41. Lindeberg, T.; Florack, L. Foveal Scale-Space and the Linear Increase of Receptive Field Size as a Function of Eccentricity; Technical Report ISRN KTH NA/P-94/27-SE; KTH Royal Institute of Technology: Stockholm, Sweden, 1994.
42. Matungka, R.; Zheng, Y.F.; Ewing, R.L. 2D invariant object recognition using log-polar transform. In Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA 2008), Chongqing, China, 25–27 June 2008; pp. 223–228.
43. Pamplona, D.; Bernardino, A. Smooth foveal vision with Gaussian receptive fields. In Proceedings of the 9th IEEE-RAS International Conference on Humanoid Robots, Paris, France, 7–10 December 2009; pp. 223–229.
44. Chambers, D.R.; Flannigan, C.; Wheeler, B. High-accuracy real-time pedestrian detection system using 2D and 3D features. In Proceedings of the SPIE Conference on Three-Dimensional Imaging, Visualization, and Display 2012, Baltimore, MD, USA, 24–25 April 2012; pp. 83840G-1–83840G-11.
45. Hao, Q.; Wang, Z.; Cao, J.; Zhang, F. A hybrid bionic image sensor achieving FOV extension and foveated imaging. Sensors 2018, 18, 1042. [CrossRef] [PubMed]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
