
Virtual Deformable Image Sensors: Towards to a General Framework for Image Sensors with Flexible Grids and Forms

Wei Wen and Siamak Khatibi *

Blekinge Institute of Technology, 37179 Karlskrona, Sweden; [email protected]
* Correspondence: [email protected]; Tel.: +46-734-223636
Received: 17 April 2018; Accepted: 25 May 2018; Published: 6 June 2018
Sensors 2018, 18, 1856; doi:10.3390/s18061856


Abstract: Our vision system has a combination of different sensor arrangements, from hexagonal to elliptical ones. Inspired by this variation in the type of arrangements, we propose a general framework by which it becomes feasible to create virtual deformable sensor arrangements. In the framework, for a certain sensor arrangement a configuration of three optional variables is used, which includes the structure of the arrangement, the pixel form and the gap factor. We show that the histogram of gradient orientations of a certain sensor arrangement has a specific distribution (called ANCHOR) which is obtained by using at least two generated images of the configuration. The results showed that ANCHORs change their patterns with the change of arrangement structure. In this relation, pixel size changes have a 10-fold larger impact on ANCHORs than gap factor changes. A set of 23 images, randomly chosen from a database of 1805 images, is used in the evaluation, where each image generates twenty-five different images based on the sensor configuration. The robustness of the ANCHOR properties is verified by computing ANCHORs for in total 575 images with different sensor configurations. We believe that by using the framework and ANCHOR it becomes feasible to plan a sensor arrangement in relation to a specific application and its requirements, where the sensor arrangement can be planned even as a combination of different ANCHORs.

Keywords: framework; sensor grid; pixel form; hexagonal; Penrose; deformable sensor; HoG

1. Introduction

The most dominant grid structure of the image sensor in a digital camera is the two-dimensional square grid, where each pixel has a square as its basic form. The ease of its implementation in the Cartesian coordinate system is the main reason for its popularity since the invention of the first digital image camera. In recent years, the performance of digital cameras has improved drastically due to the increase of resolution of the image sensors, achieved by reducing the pixel size. In some special image sensors, such as the OV5675 from OmniVision [1], the pixel size is as small as 1.12 µm × 1.12 µm. However, a smaller pixel size results in lower dynamic range (DR), lower signal-to-noise ratio (SNR) and lower fill factor (FF) [2], indicating that by reducing the pixel size the image quality is reduced. Moreover, the optical diffraction limit, which is a constraint set by the aperture of the optical elements, makes it impossible to physically reduce the pixel size below 1.22 λf/D according to the Rayleigh criterion, where λ is the wavelength of light, f is the focal length of the lens, and D is the aperture diameter. The wavelengths of visible light range between 390 nm and 780 nm for a typical human eye.
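As a quick check of this limit, the following sketch plugs in illustrative values; the green wavelength and the f/2.8 lens are our assumptions for the example, not values from the paper:

```python
# Worked example of the Rayleigh limit 1.22 * lambda * f / D from above.
# Wavelength and f-number are illustrative assumptions.
wavelength_um = 0.55                 # 550 nm green light, mid-visible
f_number = 2.8                       # f/D of a typical lens
min_pitch_um = 1.22 * wavelength_um * f_number
print(f"diffraction-limited pixel pitch ~ {min_pitch_um:.2f} um")  # ~1.88 um
```

For these assumed optics, a pixel pitch near 1 µm, as in the OV5675 above, already sits below the diffraction limit for much of the visible range.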




The Quanta Image Sensor (QIS) has been proposed to overcome the sub-diffraction limit [3–5], where each pixel is partitioned into thousands of single-photon-sensitive sub-pixels (e.g., 200–1000 nm pitch) referred to as "jots". The sensor measures light intensity using oversampled binary observations which are sensitive to single photons. Its architecture allows high spatial (>10⁹/sensor) and temporal (>10²–10³ Hz) resolution of photon strikes on image planes. However, image reconstruction of the sensor remains a challenging issue. Anatomical and physiological studies indicate that our visual quality-related issues, such as high contrast sensitivity, high SNR, and optimal sampling, are related directly to the form and arrangement of the sensors in the visual system [6], which have a significant role in optimizing the visual acuity [7]. Within the retina of our eye, the photoreceptor (the rods and cones) mosaic determines the amount of information which is retained or lost by the sampling process, including resolution acuity and detection acuity [8]. The photoreceptor layer specialized for maximum visual acuity is in the center of the retina, i.e., the fovea. Figure 1 shows the distribution of photoreceptors in the areas from the center to the periphery of the retina. The shape of the cones in the fovea is similar to a hexagonal structure, densely packed, with virtually no gap between neighboring cones. However, in the periphery, the cones and rods are not packed closely, particularly the cones are far apart from each other, and the form of each photoreceptor is closer to elliptical.

Figure 1. Distribution of photoreceptors in retina of the eye [9].

Inspired by the visual system, the hexagonal grid structure has been proposed and implemented in hardware for many years as an alternative grid structure for image representation instead of the conventional grid, due to its advantages compared to the square grid [10–12]. Two examples of such hardware implementations are the super CCD from Fujifilm, which features an octagonal-formed pixel in a hexagonal sensor grid [13], and color filters in hexagonal form for image sensors to improve the quality of the information acquired by the sensor [14]. The cost of transferring the popular square grid and pixel form to hexagonal ones in camera and display technologies has been one of the reasons for today's unpopularity of hardware hexagonal implementations. The other issue is the difficulty of image processing on hexagonal images: unlike the square grid, the points in a hexagonal grid do not easily lend themselves to being addressed by integer Cartesian coordinates, because the points are not aligned in two orthogonal directions. Besides hardware implementations of the hexagonal grid, several attempts at building artificial retinas with electronic hardware have also been made, showing a development from implementations with discrete components [15] over first integrated versions [16] to high-density arrays [17] with resolutions of up to 48,000 pixels [18]. However, when it is about flexibility, once an image sensor is manufactured, it is almost impossible to change the sensor grid and pixel form physically on the hardware. To overcome the hardware


limitation and deform the current image sensor closer to the retina, a software-based framework is necessary to achieve an optimal spatial sampling process. Such a framework can offer flexibility in the design of the pixel form and grid structure of the image sensor. Numerous software solutions using image processing algorithms have been developed to generate a virtual image sensor. Based on the grid pattern, there are two types of virtual sensor grids: the periodic tiling grid (i.e., the square or hexagonal grids) [7] and the aperiodic tiling grid (i.e., Penrose or log-spiral grids) [19,20]. Currently, generation of the hexagonal or Penrose sensor grid is generally achieved by having larger pixels with linear and non-linear interpolation of intensity values of the square pixel form in the Cartesian system, such as the nearest neighbor, bilinear, bicubic and the spline based on least-squares [21–23]. In different tiling grids, there are various pixel forms which can be fixed and regular, such as square, hexagonal and rhombus, or dynamic and irregular, such as Voronoi [24,25]. To reach a higher fill factor, all the pixel forms are used for removing the gap between pixels. However, the different pixel forms in the different grids are still obtained by interpolation.

In this paper, we propose a general framework towards a virtual deformable image sensor in which the grid, the pixel and the gap (i.e., the empty space among pixels) can change their form and size. This facilitates application-based configuration of grid, pixel, and gap on the image sensor. In the core of the framework, we use the idea of modelling the incident photons onto the sensor surface, which is elaborated in our previous works [26,27]. Accordingly, each pixel of an image captured by a traditional image sensor (i.e., having square grid and pixel form) is projected onto a grid of L × L square subpixels in which the grid is arranged by the known fill factor or its estimated value as in [28]. Inspired by Monte Carlo simulation, the intensity values of the subpixels are estimated by a statistical resampling process, using a local learning model, a Bayesian inference method, and a maximum likelihood estimator (of Gaussian distribution). Then the subpixels are projected onto the deformed pixel and grid of the image sensor, based on the grid and pixel configuration. Certain configurations were studied in our previous works:

(a) The grid and pixel are square and there is no or a fixed gap [26,27]. By virtual increase of the fill factor to 100%, the gap between actual pixels in a CCD camera sensor is removed. The results show that the dynamic range is widened and the tonal level is extended.
(b) The grid and pixel are hexagonal and square, respectively, and there is no or a fixed gap [29], where the hexagonal grid is generated by a half-pixel shifting. The results show that the generated hexagonal images are superior to the square images in detection of curvature edges.
(c) The grid and pixel are hexagonal and there is no gap [30]. In this work, the impact of the three sensor properties, the grid structure, pixel form and fill factor, is examined by curviness quantification using gradient computation. The results show that the grid structure and pixel form are the first and second most important properties and that the hexagonal image is the best image type for distinguishing the contours in the images.

In this study we pay attention to two new configurations:

(d) The grid and pixel are hexagonal and there is a fixed gap;
(e) The grid and pixel are Penrose and there is no or a fixed gap.

In this paper, the feature descriptor, histogram of oriented gradient (HoG), is used for examining the impact of the above two configurations to obtain the characteristics of the sensor structure.
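To make the configuration idea concrete, the sketch below models the three variables of the framework as a plain data object; the class and field names are our own illustrative choices, not an API from the paper:

```python
from dataclasses import dataclass

@dataclass
class SensorConfig:
    """One sensor configuration: arrangement structure, pixel form, gap."""
    grid: str        # "square", "half_shift", "hexagonal" or "penrose"
    pixel_form: str  # e.g., "square", "hexagon", "rhombus"
    gap: int         # gap G in subpixels; 0 corresponds to a 100% fill factor

# The two new configurations (d) and (e) studied in this paper:
configs = [
    SensorConfig(grid="hexagonal", pixel_form="hexagon", gap=2),  # (d) fixed gap
    SensorConfig(grid="penrose", pixel_form="rhombus", gap=0),    # (e) no gap
]
```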

This paper is organized as follows: in Sections 2 and 3, periodic and aperiodic tiling on an image sensor and the methodology are discussed in more detail. In Section 4 the implementation of HoG in relation to configurations (d) and (e) is elaborated. Section 5 presents the experiment setup. Then the results are shown and analyzed in Section 6. Finally, we summarize and discuss our work in Section 7.


2. Virtual Deformable Image Sensor

The enhanced and zoomed images of four segments of a human foveal photoreceptor mosaic from the original image printed in [8] are shown in Figure 2. From left to right, the segment is chosen from the center of the fovea, the slope of the fovea, and the peripheral areas that are 1.35 mm and 5 mm away from the fovea center, respectively. In the fovea center, the photoreceptors are only cones and are packed densely; the form of each cone is close to hexagonal in shape. When the cones are farther away from the fovea center, the cone size gets larger, and their form changes from hexagonal to circular, and then to elliptical. Figure 3 shows a simulated sensor structure according to the distribution of cones in the retina of the eye shown in Figure 2. The area in Figure 3 is separated into four areas a, b, c and d, each of which corresponds to one segment in Figure 2. The red contours represent the active pixels on the image sensor. From left to right, the gaps between the pixels get larger as the pixel layout goes from dense to sparse, and the form of the pixels changes from hexagon to round and to ellipse. Although densely packed sensors without gaps between pixels have higher visual resolution, the pixel size is smaller, which limits the visual detection, i.e., the presence of a spatial contrast (tonal levels) [31]. To simulate a sensor structure close to the cone distribution in the retina, a framework is necessary for generating image sensors with different configurations based on grid structure, pixel form and gap. In this paper, we mainly focus on the areas a and b shown in Figure 3, where the pixel forms and sensor grids are kept the same with no gap and a fixed gap.

Figure 2. The enhanced and zoomed images of four segments of a human foveal photoreceptor mosaic from the original image printed in [8]. From left to right, the segment is chosen from (a) the center of fovea, (b) the slope of fovea, and (c,d) the peripheral areas that are 1.35 mm and 5 mm away from the fovea center, respectively.


Figure 3. A simulated sensor structure according to the distribution of photoreceptors in retina of the eye. Each area of (a–d) corresponds to each segment of (a–d) in Figure 2.

The design of the pixel arrangement and pixel form on an image sensor depends on tiling, which is a way of covering a flat surface with smaller forms, or tiles, without gaps or overlaps. In an image sensor, each pixel is a tile. When the pixels repeat themselves at regular and periodic


intervals, it is called periodic tiling. On the contrary, when the pattern of pixels is not repeatable, it is called aperiodic or non-periodic tiling. In conventional image sensor systems, the most familiar tiling is periodic, e.g., by square pixels, which form the basic building unit of a digital image. Due to the difficulty of physical deformation on image sensors, various virtual sensor grids are generated based on the two types of tiling techniques. Apart from the general square pixel tiling sensor, hexagonal tiling and Penrose tiling are the two most popular representatives for periodic and aperiodic tiling, respectively.

2.1. Hexagonal Tiling

In hexagonal tiling, the hexagonal pixels are arranged in a hexagonal grid. A hexagonal image is generally generated from an original square image. The half-pixel shifting method is a popular way of generating such hexagonal images, which is derived from delaying the sampling by half a pixel in the horizontal direction, as shown in Figure 4 [32]. In Figure 4, the left and right patterns show the conventional square lattice and the new pseudo-hexagonal sampling structure, whose pixel form is still square; see the six pixels connected by the dashed lines. In such a sampling structure, the distances between two sampling points (pixels) are not all the same; they are either 1 or √5/2. Another well-known way of hexagonal image generation first upsamples the original square lattice data to a much denser grid of square sub-pixels by interpolation [33], and then clusters a group of sub-pixels of the intermediate data together into an equivalent hexagonal pixel. In this way a pseudo-hexagonal pixel, known as a hyperpel, is generated from a cluster of square pixels [34], which is widely used for displaying a hexagonal image on a normal monitor. In Figure 5b, each area surrounded by a red boundary is a cluster of square subpixels represented by a hyperpel. In this method, the distance between each two adjacent hexagonal pixels is almost the same, and the form of each pixel is close to a hexagon. In both methods, the new pixel intensity in the hexagonal grid is the average intensity value of a group of square pixels, as it is done in Figures 4 and 5.

Figure 4. The procedure from square pixels to hexagonal pixels by the half-pixel shifting method.


Figure 5. Illustration of the square to hexagonal lattice conversion by the hyperpel method: (a) the sub-pixels in each area surrounded by a red boundary are clustered together for the corresponding hexagonal pixel and (b) the value of each hexagonal pixel is the average intensity of the sub-pixels within each cluster [35].
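As an illustration of the hyperpel clustering in Figure 5, the following minimal sketch is our own, not the authors' code: it assumes a simple block size b and a half-block offset on alternate rows, whereas real hyperpels tessellate more irregular clusters:

```python
import numpy as np

def to_pseudo_hex(img, b=4):
    """Average b-by-b subpixel clusters into hyperpels; odd rows are shifted
    by half a block to approximate hexagonal packing (cf. Figure 5b)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    rows, cols = h // b, w // b - 1   # drop one column to make room for the shift
    hex_img = np.empty((rows, cols))
    for r in range(rows):
        x0 = (b // 2) if r % 2 else 0          # half-block offset on odd rows
        for c in range(cols):
            block = img[r*b:(r+1)*b, x0 + c*b : x0 + (c+1)*b]
            hex_img[r, c] = block.mean()       # hyperpel value = average intensity
    return hex_img
```

The averaging step mirrors the text above: the hyperpel intensity is the mean of its cluster, which is exactly the interpolation behavior the reconstruction framework of Section 3 is designed to replace.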


2.2. Penrose Pixel Arrangement

1+√5

, the which is of also the area. ratio of their the area.periodic Unlike the periodic tiling, Penrose has no translational also ratio their Unlike tiling, Penrose tiling has notiling translational symmetry; 2 it never repeats itself exactly. This means that it is theoretically possible to integrate and sample symmetry; it never repeats itself exactly. This means that it is theoretically possible to integrate and the infinite indefinitely without repeating the samethe pixel structure. In practice, allows to sample the plane infinite plane indefinitely without repeating same pixel structure. In this practice, this virtually a significantly larger number of different than isimages possible with regular, with square allows tosample virtually sample a significantly larger numberimages of different than isapossible a grid. In the method in [24], the Penrose tiling is Penrose achievedtiling by the ofby upsampling and regular, square grid.presented In the method presented in [24], the is process achieved the process of resampling. The done by placingisthe regular pixel intensities over a new grid of square upsampling and upsampling resampling. isThe upsampling done by placing the regular pixel intensities over a subpixel implementing the implementing nearest neighbor interpolation. Then the new grid ofthe subpixels is new grid and of square subpixel and the nearest neighbor interpolation. Then new grid projected ontoisthe grid of Penrose of Penrose pixel, theof thin or thickpixel, rhombus, is composed of subpixels projected onto thepixels. grid Each of Penrose pixels. Each Penrose the thin or thick by a cluster subpixels. Penrose pixel intensity is the average intensity of the intensity subpixel rhombus, is of composed by The a cluster of subpixels. The Penrose pixel intensity is value the average intensities belongintensities to its cluster. value of thethat subpixel that belong to its cluster. the virtual virtual sensor sensorcan canbebegenerated generatedwith with hexagonal Penrose structures; Although the hexagonal or or Penrose structures; i.e., i.e., by by interpolation means it was discussed above,however howeverthe theproblems problemsrelated relatedto to the the upsampling interpolation means as as it was discussed above, process are remained in these arrangements. We believe by implementation of reconstruction instead of interpolation process it is possible to solve this challenging problem. In a reconstruction process new data (i.e., for the non-available non-available subpixels intensities) is added to the current one (i.e., the available available model; e.g.,e.g., a model basedbased on incidental photons onto sensor subpixels intensities) intensities)using usinga aresampling resampling model; a model on incidental photons onto surface.surface. An interpolation process is using the current to predict non-available subpixels intensities; sensor An interpolation process is using thedata current data to predict non-available subpixels e.g., by using a local meanafilter. is reasonable the reconstruction in a framework intensities; e.g., by using localThus, meanitfilter. Thus, it to is use reasonable to use the process reconstruction process of a framework deformable of sensor grid which it is elaborated 3. in a deformable sensor grid whichin it Section is elaborated in Section 3.

Figure Figure 6. 6. An An example example of of the the rhombus rhombus Penrose Penrose tiling. tiling.

3. Image Generation on Deformable Grid

In this section, the process of generating an image using the virtual deformable sensor under the framework is explained. According to the model of the incident photons onto the sensor surface


presented in [26], the reconstruction process is divided into three steps: (a) projecting each original pixel intensity onto a new grid of subpixels based on the gap size and form in the configuration; (b) estimating the values of the subpixels based on a local learning model; and (c) estimating the new pixel intensity by decision-making based on the grid structure and pixel form in the configuration. The three steps are elaborated below:

(a) A grid of virtual image sensor pixels is constructed. Each original pixel, having a square pixel form and arranged in a square grid, is projected onto a grid of L × L square subpixels. According to the configured size of the gap G between the pixels, the size of the active pixel area is defined as S × S, where S = L − G. The intensity value of every pixel in the image sensor array is assigned to the virtual active pixel area in the new grid. The intensities of subpixels in the gap areas are assigned to be zero. An example of such sensor rearrangement on the sub-pixel level is presented in Figure 6, where there is a 3 by 3 pixels' grid, and the light and dark grey areas represent the active pixel areas and the gap areas. Assuming L = 30 and S = 18, the gap size becomes G = 12 according to the above equation.
(b) The second step is to estimate the values of the subpixels in both pixel areas and gap areas. Considering that the statistical fluctuation of incident photons and their conversion to electrons on the sensor is a random Gaussian process, a local Gaussian model is generated from a certain neighborhood area of each pixel by the maximum likelihood method. Then a local noise source is generated within each local model and introduced to its certain neighborhood. Inspired by Monte Carlo simulation, all subpixels in each certain neighborhood are estimated in an iteration process using the known pixel values (for subpixels in the active pixel area) or by linear polynomial reconstruction (for subpixels in the gap area). In each iteration step the number of subpixels in the pixel area is varied from zero to the total number of subpixels in the pixel area. After the iteration process, a vector of intensity values for each subpixel is generated and the final subpixel value is predicted using the Bayesian inference method and maximum likelihood of the Gaussian distribution.
(c) In the third step, the subpixels are projected onto the new deformable sensor grid with the sensor grid, pixel form and gap size of the respective configuration proposed in Section 2. In this paper, four sets of configurations are considered: (1) square grid and pixel form with or without gap; (2) pseudo-hexagonal grid by half-square-pixel shift and square pixel form with or without gap; (3) hexagonal grid and pixel form with or without gap; and (4) Penrose grid and rhombus pixel form with or without gap, where the deformability of each configuration is demonstrated. For the image generation in the different grids, the subpixels are projected back onto the new sensor grid. The intensity value of each pixel in the different sets of configurations is the intensity value which has the strongest contribution in the histogram of its belonging subpixels, as sketched below.
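The decision rule in step (c) differs from the averaging used in plain interpolation: the new pixel takes the most strongly represented intensity among its subpixels rather than their mean. A minimal sketch of ours, assuming 8-bit intensities and a fixed bin count (both our choices, not specified by the paper):

```python
import numpy as np

def pixel_from_subpixels(subpixel_values, n_bins=64):
    """Step (c) decision: pick the intensity with the strongest contribution
    in the histogram of the pixel's subpixel cluster (the mode bin)."""
    v = np.asarray(subpixel_values, dtype=float).ravel()
    hist, edges = np.histogram(v, bins=n_bins, range=(0.0, 255.0))
    k = int(np.argmax(hist))                       # bin with the strongest contribution
    in_bin = (v >= edges[k]) & (v <= edges[k + 1])
    return float(v[in_bin].mean())                 # representative intensity of that bin
```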

4. Implementing Histogram of Gradient in Different Configurations

In [30], the gradient is proved to be an effective parameter for examining the impact of different sensor grids and pixel forms on curviness. In this paper, the histogram of gradient (HoG) is used for evaluating the characteristics of sensors having different configurations. The general process of HoG on square images, i.e., from extracting features to object detection, is presented in Figure 7 and is divided into four steps. The first step is gradient computation. In [37], it is proved that the performance of feature detection is sensitive to the way in which gradients are computed, but the simplest scheme turns out to be the best. The most common method is to apply the one-dimensional centered, point discrete derivative mask in both horizontal and vertical directions with the filter kernels [−1 0 1] and [−1 0 1]ᵀ. The second step in the HoG process is spatial/orientation binning. In this step, the image is divided into a group of cells, each of which is a local spatial region in the image. For the pixels within each cell a histogram of gradient directions is compiled, where each pixel casts a weighted vote (typically the gradient magnitude itself) for an orientation-based histogram based on the computed gradient values, and then the votes are accumulated into orientation bins over the cell. Cells can be either rectangular or radial in shape. The orientation bins are evenly spread over 0 to


180 degrees (unsigned gradient) or 0 to 360 degrees (signed gradient). In the third step of the HoG process, due to the local variations in illumination and foreground-background contrast, the gradient strengths are normalized to reduce the variation. Effective local contrast normalization turns out to be essential for good performance [37]. The current normalization methods are mostly based on grouping cells into larger spatial blocks and contrast-normalizing each block separately. The main geometries of the blocks are rectangular R-HoG blocks [37], circular C-HoG blocks [38] and hexagonal H-HoG blocks [39]. According to the result in [39], the hexagonal structure block is more effective and efficient than the conventional structure block. The fourth and final step in the HoG process is the generation of the feature descriptor, which is a concatenated vector of all components of the normalized cell histograms from all the blocks. The first two steps are sketched below.
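For the square-grid case, the first two steps reduce to a few lines. The sketch below is our simplification, assuming unsigned gradients and nine bins as in common HoG practice:

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """HoG steps 1-2 on a square grid: [-1 0 1] derivative masks, then
    magnitude-weighted votes into unsigned orientation bins (0-180 deg)."""
    c = np.asarray(cell, dtype=float)
    gx = np.zeros_like(c)
    gy = np.zeros_like(c)
    gx[:, 1:-1] = c[:, 2:] - c[:, :-2]        # horizontal kernel [-1 0 1]
    gy[1:-1, :] = c[2:, :] - c[:-2, :]        # vertical kernel [-1 0 1]^T
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned gradient
    hist, _ = np.histogram(angle, bins=n_bins, range=(0.0, 180.0),
                           weights=magnitude)        # magnitude-weighted votes
    return hist
```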

[Figure 7 flowchart: Input greylevel image → Gradient computation → Spatial/Orientation Binning → Descriptor Blocks and Normalization → HoG feature Collection → Classifier]

Figure 7. An overview of the feature extraction and object detection chain.

Due to the fact that the sensor arrangements of hexagonal and Penrose images are hexagonal and aperiodic, respectively, the gradient computations in these arrangements are different from the square arrangement where the HoG is generally implemented. The hexagonal grid is a periodic tiling, and each of its pixels has six adjacent neighbor pixels at the same distance, unlike a pixel in the square grid, which has four neighbors; i.e., the gradient computation in the hexagonal grid is done in three directions with the same kernel of [−1 0 1].

The gradient computation in the Penrose grid is more complex due to the fact that the pixels are arranged by aperiodic tiling, for which the kernel of [−1 0 1] cannot be implemented directly. In a Penrose grid, each pixel has four adjacent neighbor pixels. Each pixel has the form of a thin or thick rhombus with two pairs of 36 and 144 or 72 and 108 degrees, respectively. The slope of the line which connects the two corners of such a rhombus with the smaller pair of angles is used as the pixel direction. The directions of each pixel and its adjacent pixels are used to obtain five virtual vectors with the respective directions from the same origin, where the magnitudes of the vectors are the intensity values of the respective pixels. The gradient between an actual pixel and each of its neighboring ones is computed by vector subtraction of the respective vectors, which allows us to find the orientation and amplitude of the gradient.

5. Experimental Setup

The dataset used in the experiment is 'INRIA', proposed in [37], which contains 1805 cropped images of humans taken from different sets of personal photos. This dataset was originally produced for the challenging task of pedestrian detection. The people are usually standing, but appear in any orientation and against a wide variety of background scenery, including crowds. Each of the cropped human images has a resolution of 64 × 128. For the experiment 23 images were randomly selected from this dataset. Each selected image is used to generate four sets of images by implementing the process described in Section 3. The four sets differ by the type of grid structure, which can be square, half pixel shift, hexagon, or Penrose type. We named the generated images in each set based on the type of grid structure, i.e., four sets of square enriched (SQ_E), half pixel shift enriched (HS_E), hexagonal enriched (Hex_E), and Penrose enriched (Pen_E) images. The "enriched (E)" in the name of the images refers to the tonal enrichment property of the images obtained by the processing, in comparison to the non-processed original type of images. The images within each of the four sets differ by the configuration parameters of pixel form and gap size. Figure 8 shows one of the original images (SQ) from the database and one image from each of the four sets. From left to right, the types of images are SQ, SQ_E, HS_E, Hex_E, and Pen_E. The whole images are shown in the first row, and the areas marked with a red square in each image of the first row are zoomed out and shown in the second row. For visualization purposes each pixel in the hexagonal and Penrose grids is composed of a group of square pixels. The generated images show better dynamic range and higher contrast in comparison to the original images. All the processing is programmed and done in Matlab2017b on a stationary computer with an Intel i7-6850k CPU (Intel Corporation, Santa Clara, CA, USA) and 32 GB RAM memory to keep the process stable and fast.


Figure 8. One of the original images and its set of generated images.

6. Results and Discussion

For each image of the four sets of images, HoG is computed as explained in Section 4. As the gradient indicates the directional change of intensities in an image, the gradient orientation is used to show the difference among the image types. Figure 9 shows, row-wise, the histograms of the gradient orientation of five images of the database and, column-wise, the image types SQ, SQ_E, HS_E, Hex_E and Pen_E. In the figure, considering the results of the SQ and SQ_E image types, the peaks of the histogram of the gradient orientation are close to 0, 90 and −90 degrees, indicating that the square sensor structure is more sensitive to vertical and horizontal changes. When the sensor structure is hexagonal, i.e., for the HS_E and Hex_E image types, the results of the two types are very close to each other, and for both types there are more sensitive angles in comparison to the square arrangement; the peaks are at 60, 120 and 180 degrees. When the Penrose structure images are considered, due to their having two types of rhombus as the pixel form, where the angle between each pixel is either around 72 or 144 degrees, the peaks are also close to 72 or 144 degrees. The results related to the five images of Figure 8 are verified for all images of the experimental dataset. Accordingly, the histogram of gradient orientation shows a specific distribution related to each sensor structure with a certain gap size. We call this specific distribution the ANgular CHaracteristic of a sensOR structure (ANCHOR). When the number of bins for the orientation of the gradient is 36, the envelope of a histogram becomes smooth as a curve.
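Constructing an ANCHOR from a set of such histograms is then a simple average. A sketch under our assumptions (36 bins, max-normalization; the paper does not spell out the normalization):

```python
import numpy as np

def anchor_curve(histograms):
    """Average several 36-bin gradient-orientation histograms (one per image
    of a fixed sensor configuration) into a normalized ANCHOR curve."""
    h = np.asarray(histograms, dtype=float)   # shape: (n_images, 36)
    curve = h.mean(axis=0)
    return curve / curve.max()                # normalize for comparison
```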
Accordingly, the histogram of of rhombus in the pixel form where the angle between each pixel is either around 72 or 144 degrees, having two types of rhombus in the pixel form where the angle between each pixel is either around gradient orientation shows a specific distribution related to each sensor structure with certain gap 72 orare 144also degrees, alsodegrees. close to 72The or 144 degrees. The results to the five the peaks closethetopeaks 72 orare 144 results related to therelated five images ofimages Figure 8 are size. We call this specific distribution as the ANgular CHaracteristic of a sensOR structure offor Figure 8 are verified for all images of the experimental dataset. Accordingly, thegradient histogram of verified all images of the experimental dataset. Accordingly, the histogram of (ANCHOR). When the number of bins for orientation of the gradient is 36, the envelopeorientation of a gradient orientation shows a specific distribution related to each sensor structure with certain gap showshistogram a specificbecomes distribution related to each sensor structure with certain gap size. We call this specific smooth as a curve. size. We call this specific distribution as the ANgular CHaracteristic of a sensOR structure distribution as the ANgular CHaracteristic of a sensOR structure (ANCHOR). When the number of (ANCHOR). When the number of bins for orientation of the gradient is 36, the envelope of a bins forhistogram orientation of the gradient is 36, the envelope of a histogram becomes smooth as a curve. becomes smooth as a curve. SQ

SQ SQ_E

SQ_E

HS_E

HS_E

Hex_E

Hex_E

Pen_E

Pen_E

Figure 9. The angular characteristic of sensor grid structure. The five columns of figures are the histograms of the gradient orientation from five images in the database. From the first to the fifth row, the five image types are SQ, SQ_E, HS_E, Hex_E and Pen_E.


In Figure 10, from top to bottom, the five ANCHOR curves show the normalized average values of 23 histograms of the gradient orientation. The result is consistent with what we concluded from Figure 9. The right part of Figure 10 shows the comparison between the ANCHORs of Pen_E (black) and Hex_E (green). These ANCHORs together, representing a combination of two types of sensor arrangements, compensate each other's weak sensitivity areas and become more sensitive for detecting directional changes of intensities. Part of the mosaic of photoreceptors in the human retina has such a combination of the two types of arrangement.

Figure 10. The histograms of the gradient orientation with 36 bins are shown on the (left). From top to bottom, the results relate to SQ, SQ_E, HS_E, Hex_E and Pen_E respectively. The ANCHORs show the average values of the 23 histograms of the gradient orientation. The (right) shows the comparison between the ANCHORs of Pen_E (black) and Hex_E (green). These ANCHORs together, representing a combination of two types of sensor arrangements, compensate for each other's weak sensitivity areas and become more sensitive in detecting directional changes of intensities.

The characteristic robustness of ANCHORs is examined by computing the average of n histograms of orientation, where n is varied from two to twenty-three; i.e., the computations yield n candidates for each ANCHOR. Then the Mean Square Error (MSE) between each two candidates of a certain ANCHOR is computed by:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=2}^{n}\left(Y_i - \overline{Y}\right)^2$$

where i and n represent the orientation degree index and the number of the averaged histograms, respectively; Y_i and \(\overline{Y}\) represent the average values from i histograms and from all 23 histograms. The results for different sensor structures are shown in Figure 11, where the five color lines from top to bottom represent the five image types, SQ, SQ_E, HS_E, Hex_E and Pen_E. The figure shows that each MSE result decreases close to zero after ten combinations, indicating that no further changes on the respective ANCHOR occur; thus, ten images are enough to obtain a robust ANCHOR for each type of arrangement. As the results in the figure show, the MSE values already reduce to small values after just two images, which indicates that the robustness of ANCHORs is strong and almost independent of the number of images. Among the ANCHORs, the one of Pen_E has the lowest MSE values, indicating the strongest robustness of the corresponding ANCHOR. The variance of the ANCHORs computed from the 23 images is shown in Figure 12. It shows that the arrangement type Pen_E has the lowest variance values over orientation angles between 0 and 360 degrees, whereas the arrangement types SQ and SQ_E have the highest variance values. The results are consistent with the result in Figure 10, in that the Penrose sensor structure has the most robust ANCHOR.
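A sketch of this robustness test follows, assuming `hists` is a (23, 36) array of per-image orientation histograms for one sensor configuration (e.g., from `compute_anchor` above applied to the 23 test images); reducing the vector-valued candidates to a scalar by a per-bin mean is our reading of the formula, which the text does not spell out:

```python
import numpy as np

def anchor_mse(hists: np.ndarray, n: int) -> float:
    """MSE = (1/n) * sum_{i=2..n} (Y_i - Y_bar)^2, reduced over the histogram bins."""
    y_bar = hists.mean(axis=0)  # Y_bar: the ANCHOR estimated from all 23 images
    total = 0.0
    for i in range(2, n + 1):
        y_i = hists[:i].mean(axis=0)          # Y_i: candidate averaged from i histograms
        total += np.mean((y_i - y_bar) ** 2)  # per-bin mean is our assumption
    return total / n

# A curve as in Figure 11: one MSE value per number of averaged histograms.
# mse_curve = [anchor_mse(hists, n) for n in range(2, 24)]
```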


Figure 11. MSE results between each two candidates of a certain ANCHOR. Each of the candidates is computed as the average values of orientation of the gradient for a certain number (n) of images.

Figure 12. The variance of ANCHORs is demonstrated. The histograms of the gradient orientation with 36 bins are used and, from top to bottom, the results relate to SQ, SQ_E, HS_E, Hex_E and Pen_E respectively. The ANCHOR of Pen_E (black) has the lowest variance and that of SQ (deep blue) has the highest, indicating that the Penrose arrangement has a more robust ANCHOR in comparison to the others.


The gap factor is defined in relation to the gap size and pixel size by:

$$\text{gap factor} = \frac{\text{gap size}}{\text{pixel size}} \times 100\%$$

Assuming the pixel size is 100 and the gap size between the active areas in two pixels is 20, the gap factor becomes 20%. In the following experiment, the gap size is set at 0, 20, 40 and 60, corresponding to gap factors of 0%, 20%, 40% and 60%. Figure 13 shows one example of the hexagonal sensor having a gap factor of 0%, 20% and 60% from left to right, where the pixel size is kept the same. Figure 14 shows the results of gradient orientation, i.e., the ANCHORs, of four types of images, SQ_E, HS_E, Hex_E and Pen_E, with different configurations of gap factor. In the figure, the ANCHOR of the SQ image, i.e., the original image, is the same for all gap factors because the effect of the gap factors cannot be implemented on it. Each column represents one type of image, and the rows represent the effect of different gap factors on the respective ANCHOR. The effect of the gap factor on an ANCHOR is investigated by MSE computation between a reference ANCHOR (having the same pixel size as the original image and a gap factor of 0%) and an ANCHOR with the same pixel size but a different gap factor. The means of the MSEs from the 23 images for the different types of arrangement are shown in Table 1. In the table, the values of the average MSEs increase with the increase of the gap factor; however, the rate of increase is lowest for the SQ_E arrangement type and highest for the HS_E and Hex_E ones.
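A minimal sketch of this bookkeeping, with the helper name being ours rather than the paper's; with pixel size 100 and gap size 20 it reproduces the 20% example above:

```python
def gap_factor(gap_size: float, pixel_size: float) -> float:
    """Gap factor in percent: the gap relative to the pixel size."""
    return gap_size / pixel_size * 100.0

# The settings used in the experiment: pixel size 100 with gap sizes 0-60.
factors = [gap_factor(g, 100) for g in (0, 20, 40, 60)]  # [0.0, 20.0, 40.0, 60.0]
```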

Figure 13. One example of the hexagonal sensor having the gap factor of 0%, 20% and 60% from left to right; the pixel size is kept the same.


Figure 14. The results of ANCHORs from different types of images with different configurations of gap factor.


Table 1. The average MSE of orientation of four image types with different configurations of gap factor, referred to the 0% gap factor.

Gap Factor    SQ_E      HS_E      Hex_E     Pen_E
60%           0.0052    0.0059    0.0078    0.0004
40%           0.0049    0.0029    0.0053    0.00035
20%           0.0050    0.0028    0.0034    0.0003

Figure 15 shows the ANCHORs of the five types of images when the pixel size is increased by 20% and the gap factor is 0%. All the results show that images of the same sensor structure follow the same specific pattern. Table 2 shows the MSEs between the reference ANCHOR and the ANCHORs of the different types of arrangements with a pixel size increased by 20% (in comparison to the original image) and a gap factor of 0%. The table shows that the pixel size increase results in a 10-fold impact on the ANCHOR of the SQ arrangement in comparison to the other types. Tables 1 and 2 show that the impact of pixel size changes is greater than that of gap factor changes on the ANCHORs of all involved arrangements.
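The comparison against the reference configuration could be sketched as follows, assuming `compute_anchor` from the earlier sketch and two lists of images rendered under the reference and the modified configuration; the helper name is ours:

```python
import numpy as np

def mean_anchor_mse(reference_imgs, modified_imgs) -> float:
    """Average per-image MSE between reference- and modified-configuration ANCHORs."""
    mses = [np.mean((compute_anchor(ref) - compute_anchor(mod)) ** 2)
            for ref, mod in zip(reference_imgs, modified_imgs)]
    return float(np.mean(mses))  # one Table 1 / Table 2 style entry
```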

Figure 15. The ANCHORs of different types of images when the pixel size is increased by 20% and the gap factor is 0%.

Table 2. The average MSE of orientation of four image types with a pixel size increase of 20% and 0% gap factor.

Gap Factor    SQ_E      HS_E      Hex_E     Pen_E
0%            0.0039    0.0308    0.0271    0.0036

The HoG is computed for the four sets of images in the experiment as explained in Section 4, where each HoG result is represented as a vector. In our experiment, the block size is set to 4, 8 and 16. The correlations between the results of HoG for SQ images and the other four types of images (having the same block size of either 4, 8, or 16) are computed for the similarity comparison and are shown in Table 3. The HoG of the SQ_E image has the highest correlation to the SQ image because they have the same sensor structure and pixel form. When the block size is increased from 4 to 16, the correlation values increase as well. In Table 3, the three block sizes are marked with the three colors blue, green and red for block sizes of 4, 8 and 16. In each row, there are four correlation values in relation to each block size, for which the lowest ones are marked blue, green, and red; for example, the lowest value in relation to block size 16 is marked red.
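A sketch of this comparison using scikit-image's `hog` follows; the paper does not state its HoG implementation or parameters, so mapping the "block size" to `pixels_per_cell` (and the remaining settings) are assumptions of ours, and both images must have the same dimensions for the feature vectors to align:

```python
import numpy as np
from skimage.feature import hog

def hog_correlation(img_sq: np.ndarray, img_other: np.ndarray, block: int) -> float:
    """Pearson correlation between the HoG vectors of two images at one block size."""
    kwargs = dict(orientations=9,
                  pixels_per_cell=(block, block),  # the paper's "block size", assumed
                  cells_per_block=(1, 1),
                  feature_vector=True)
    v1 = hog(img_sq, **kwargs)
    v2 = hog(img_other, **kwargs)
    return float(np.corrcoef(v1, v2)[0, 1])

# e.g., correlations of a Pen_E image to its SQ counterpart at the three sizes:
# [hog_correlation(sq, pen, b) for b in (4, 8, 16)]
```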



Table 3. The correlation between the results of HoG for SQ images and the other four types of images with respect to block size.

                       Correlation to SQ
      SQ_E                     HS_E                     Hex_E                    Pen_E
No.   Size 4  Size 8  Size 16  Size 4  Size 8  Size 16  Size 4  Size 8  Size 16  Size 4  Size 8  Size 16
1     0.593   0.799   0.9      0.123   0.097   0.071    0.096   0.072   0.04     0.064   0.072   0.163
2     0.679   0.818   0.907    0.258   0.289   0.341    0.27    0.309   0.369    0.064   0.074   0.106
3     0.627   0.819   0.911    0.155   0.164   0.239    0.149   0.18    0.267    0.001   0.035   0.081
4     0.672   0.809   0.886    0.066   0.131   0.167    0.053   0.12    0.132    0.081   0.114   0.173
5     0.648   0.817   0.929    0.101   0.156   0.097    0.108   0.145   0.107    0.029   0.062   0.072
6     0.676   0.862   0.947    0.074   0.124   0.15     0.066   0.128   0.15     0.042   0.14    0.232
7     0.623   0.822   0.903    0.022   0.111   0.229    0.056   0.126   0.204    0.012   0.015   0.067
8     0.634   0.824   0.912    0.082   0.138   0.107    0.087   0.133   0.113    0.05    0.089   0.121
9     0.65    0.834   0.918    0.01    0.019   0.075    0.018   0.007   0.084    0.018   0.076   0.171
10    0.652   0.858   0.951    0.008   0.006   0.14     0.004   0.002   0.158    0.164   0.243   0.368
11    0.633   0.802   0.907    0.026   0.042   0.045    0.032   0.059   0.058    0.012   0.054   0.126
12    0.604   0.828   0.912    0.039   0.086   0.078    0.053   0.096   0.045    0.026   0.027   0.105
13    0.634   0.815   0.893    0.146   0.216   0.143    0.145   0.217   0.171    0.037   0.08    0.047
14    0.613   0.789   0.88     0.122   0.19    0.154    0.129   0.189   0.165    0.006   0.007   0.01
15    0.629   0.821   0.877    0.053   0.051   0.001    0.065   0.038   0.006    0.013   0.038   0.111
16    0.644   0.81    0.896    0.098   0.111   0.002    0.094   0.093   0.012    0.039   0.063   0.179
17    0.587   0.79    0.889    0.118   0.171   0.072    0.141   0.176   0.122    0.01    0.045   0.017
18    0.612   0.766   0.882    0.063   0.083   0.108    0.036   0.064   0.06     0.096   0.166   0.256
19    0.647   0.809   0.902    0.063   0.133   0.112    0.07    0.124   0.117    0.036   0.026   0.027
20    0.602   0.796   0.885    0.186   0.26    0.275    0.187   0.268   0.284    0.009   0.013   0.001
21    0.636   0.789   0.912    0.027   0.037   0.002    0.029   0.037   0.039    0.09    0.175   0.234
22    0.596   0.795   0.885    0.024   0.055   0.079    0.034   0.045   0.042    0.008   0.018   0.066
23    0.642   0.819   0.894    0.074   0.113   0.17     0.072   0.115   0.132    0.058   0.038   0.015

As Table 3 shows, the lowest correlation values for each block size are related to the Pen_E and Hex_E arrangements. Such results indicate that one should expect the most different HoG results for the Pen_E arrangement in the first place and then for the Hex_E arrangement. The differences among the four types of arrangements are measured by using their correlation to the same reference SQ image type; the cause of such differences between each two arrangements is related to the differences between their ANCHORs.

7. Conclusions

In this paper, we have presented a framework by which it becomes feasible to create virtual deformable sensor arrangements. In the framework, the structure of the arrangement, the pixel form and the gap factor (related to the distance between pixels) are the three optional variables in the creation of a sensor arrangement. We showed that the histogram of gradient orientations is a useful tool in measuring an arrangement structure. The envelope of such a histogram is defined as an orientation distribution function. In our experiments we observed that for each image type, having a different arrangement structure, the distribution function is unique. Additionally, we showed that a change of pixel size or gap size, i.e., a different sensor configuration, generates a specific distribution related to the changes. We called these specific orientation distributions ANCHORs. We showed that by using at least two generated images of a certain configuration it is possible to obtain the ANCHOR of the sensor. The results showed that pixel size changes have more impact on ANCHORs than gap factor changes. By using 575 different configuration images we verified the robustness of ANCHORs and the feasibility of the framework, which encourages us to think about the possibility of tailored sensor arrangements in relation to a specific application and its requirements. On the right of Figure 10, the ANCHORs of two different sensor arrangements are shown, which can be combined into a new sensor arrangement in case an application requires both the properties of the hexagonal and the Penrose arrangements. We believe this idea may make it possible, in the future, to have a sensor arrangement very much like that of the biological vision sensory system.

Author Contributions: W.W. and S.K. equally contributed to the textual content of this article. W.W. and S.K. conceived and designed the methodology and experiments. W.W. performed the experiments. W.W. and S.K. analyzed the data. W.W. and S.K. wrote the paper together.

Conflicts of Interest: The authors declare no conflicts of interest.

