International Journal of Geo-Information
Article

A New Approach to Line Simplification Based on Image Processing: A Case Study of Water Area Boundaries

Yilang Shen, Tinghua Ai * and Yakun He

School of Resource and Environmental Sciences, Wuhan University, Wuhan 430079, China; [email protected] (Y.S.); [email protected] (Y.H.)
* Correspondence: [email protected]; Tel.: +86-139-0863-9199
Received: 29 October 2017; Accepted: 28 January 2018; Published: 30 January 2018

Abstract: Line simplification is an important component of map generalization. In recent years, algorithms for line simplification have been widely researched, and most of them are based on vector data. However, with the increasing development of computer vision, analysing and processing information from unstructured image data is both meaningful and challenging. Therefore, in this paper, we present a new line simplification approach based on image processing (BIP), which is specifically designed for raster data. First, the key corner points on a multi-scale image feature are detected and treated as candidate points. Then, to capture the essence of the shape within a given boundary using the fewest possible segments, the minimum-perimeter polygon (MPP) is calculated and the points of the MPP are defined as the approximate feature points. Finally, the points after simplification are selected from the candidate points by comparing the distances between the candidate points and the approximate feature points. An empirical example was used to test the applicability of the proposed method. The results showed that (1) when the key corner points are detected based on a multi-scale image feature, the local features of the line can be extracted and retained and the positional accuracy of the proposed method can be maintained well; and (2) by defining the visibility constraint of geographical features, this method is especially suitable for simplifying water areas as it is aligned with people’s visual habits.

Keywords: line element; simplification; cartographic generalization; water areas; image

1. Introduction

Line simplification is the process of reducing the complexity of line elements when they are expressed at a larger scale while maintaining their bending characteristics and essential shapes [1–3]. For decades, as the most important and abundant map elements, line elements have been an important area of research in cartography. Due to the complexity of the shape and diversity of line elements, it is difficult to design a perfect automated algorithm for line simplification. The line simplification algorithm is used to generate the graphical expression of a line at different scales, and three basic principles should be followed during simplification [4–6]: (1) maintaining the basic shape of curves, namely, the total similarity of graphs; (2) maintaining the accuracy of the characteristic points of bending; (3) maintaining the contrast of the curving degree in different locations.

Since the 1970s, classic algorithms used for line simplification have been proposed [7–11]. The Douglas–Peucker (DP) algorithm [8] uses a distance tolerance to subdivide the original line features in a recursive way. When the distance between the vertices and the line segment connecting the two original points is within a certain range, the vertices are eliminated. Li and Openshaw [9] presented a new approach to line simplification based on the natural process that people use to observe geographical objects.
The method simulates this natural principle: the ratio of the input and target resolutions is used to calculate the so-called “smallest visible object”. The method proved to be effective for detecting and retaining the overall shape of line features. Visvalingam and Whyatt [10] attempted to realize the progressive simplification of line features by using the concept of “effective area”. Based on the analysis and detection of line bends, Wang and Muller [11] proposed a typical algorithm for line simplification, which can generate reasonable results.

In recent years, research on line simplification has focused on shape maintenance [12–16], topological consistency [17,18], geo-characteristics [19–22], and data quality [23–29]. Jiang and Nakos [13] presented a new approach for line simplification based on shape detection of geographical objects in self-organizing maps. Dyken et al. [17] developed an approach for the simultaneous simplification of piecewise curves. The method uses the constrained Delaunay triangulation to preserve the topological relations between the linear features. Mustafa et al. [19] presented a method for line simplification based on the Voronoi diagram. First, the complex cartographic curve is divided into a series of safe sets. Then, the safe sets are simplified by using methods such as the DP algorithm. With this method, intersections in the input lines can be avoided. Nöllenburg et al. [20] proposed a method for morphing polylines at different levels of detail. The method can be used to perform continuous generalization for linear geographical elements such as rivers and roads. Park and Yu [21] developed an approach for hybrid line simplification. In comparison with single simplification algorithms, the method proved to be more effective in terms of vector displacement. Stanislawski et al. [23] studied various types of metric assessments, such as the Hausdorff distance, segment length, vector displacement, areal displacement and sinuosity, for the generalization of linear geographical elements. Their research can provide a reference for the appropriate parameter settings of line generalization at different scales. Nakos et al. [24] made full use of the operations of data exaggeration, typification, enhancement, smoothing, or any combination thereof while performing line simplification. The study made remarkable progress regarding the highly automated generalization of full-life-cycle data.

Other algorithms have been proposed for simplifying polygonal boundaries [30–34]. To reduce the area change, McMaster [30] presented a method for line simplification based on the integration of smoothing and simplification. Bose et al. [29] proposed an approach to simplifying polygonal boundaries in which, according to a given threshold of area change, the path that contains the smallest number of edges is used for simplification. Buchin et al. [32] presented an area-preserving method for the simplification of polygonal boundaries using an edge-move operation. During the process, the orientation of the polygons can be maintained by the edge-move operation, which moves edges of the polygons inward or outward.

Studies of line representation based on image theory have mainly focused on polygonal approximation [35–38]. Kolesnikov and Fränti [36] presented a method for determining the optimal approximation of a closed curve. In their study, the best possible starting point can be selected by searching in the extended space of the closed curve.
Ray and Ray [38] developed an approach for polygonal approximation based on a proposition of exponential averaging.

As indicated above, most studies have used vector data to simplify line elements or polygonal boundaries, but few studies have used raster data. In addition, the existing studies based on image theory cannot meet the requirements of cartographic generalization because they do not consider the three basic principles. However, with the continuous development of computer vision and artificial intelligence technology, extracting and processing information from a growing number of unstructured datasets (such as pictures, street view, 3D models, video and so on) is a worthwhile and challenging task. Thus, line simplification that operates directly on image data is necessary. Therefore, this paper presents a new line simplification method based on image processing techniques using unstructured image data.

Following this introduction, Section 2 describes a comparison between scale space and map generalization, which lays out a theoretical foundation for line simplification based on image data.
In Section 3, we describe the methods of line simplification, which include corner detection, minimum-perimeter polygons, and the matching of candidate points and approximate feature points. Section 4 presents the results of experiments that illustrate the proposed approaches, and a discussion and analysis of our approaches are provided. Section 5 presents our conclusions.

2. Scale Space and Map Generalization

Scale space describes a theory of signal representation at different scales [39]. In the process of the scaling transformation, the representation of images at different scales is parametrized according to the size of the smoothing kernel, and the fine-scale details can be successively suppressed [40]. As the primary type of scale space, the linear scale space is widely applicable and has the effective property of being able to be derived from a small set of scale space axioms. The Gaussian kernel and its derivatives are regarded as the only possible smoothing kernels [41].

For a given image f(x, y), the linear Gaussian scale-space representation can be described as a family of derived signals L(x, y; t), which is defined by the convolution of the image f(x, y) with the two-dimensional Gaussian kernel [41]:

g(x, y; t) = (1 / (2πt)) e^(−(x² + y²) / (2t)),   (1)

such that

L(·, ·; t) = g(·, ·; t) ∗ f(·, ·).   (2)

In Equation (1), g denotes the two-dimensional Gaussian filter, x denotes the distance from the origin along the horizontal axis, y denotes the distance from the origin along the vertical axis, and t denotes the scale parameter (the variance of the Gaussian distribution). In Equation (2), the semicolon in the argument of L indicates that the convolution is executed only over the variables x and y; the scale variate t after the semicolon indicates which scale level is being defined. Although a continuum of scales t ≥ 0 can be described by the definition of L, only a finite, discrete set of levels will be taken into account in the representation of scale space. The variance of the Gaussian filter is described by the scale parameter t = σ². When the scale parameter t = 0, the filter g is an impulse function such that L(x, y; 0) = f(x, y), which indicates that the scale-space representation at scale level t = 0 is the image f itself. With an increase of the scale parameter t, the filter g becomes increasingly larger. As L is the outcome of smoothing f with a filter, an increasing number of details from the image will be removed in this process. Since σ = √t describes the standard deviation of the filter, the details of an image that are significantly smaller than this value will be removed at scale level t. In Figure 1, the left panel shows a typical example of a scale-space representation at different scales.
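Equations (1) and (2) map directly onto discrete Gaussian filtering. The following is a minimal sketch in Python, assuming SciPy is available; the test image and the list of scale levels are purely illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, t_levels):
    """Build the scale-space family L(x, y; t) of Equations (1)-(2).

    image    : 2-D array interpreted as f(x, y).
    t_levels : iterable of scale parameters t = sigma**2; for t = 0 the
               filter is an impulse and L(x, y; 0) = f(x, y).
    Returns a dict mapping t -> smoothed image L(., .; t).
    """
    f = np.asarray(image, dtype=float)
    scale_space = {}
    for t in t_levels:
        if t == 0:
            scale_space[t] = f.copy()           # no smoothing at scale level 0
        else:
            sigma = np.sqrt(t)                  # standard deviation of the Gaussian kernel
            scale_space[t] = gaussian_filter(f, sigma=sigma)
    return scale_space

# Illustrative usage: details much smaller than sqrt(t) are progressively suppressed.
if __name__ == "__main__":
    test_image = np.random.rand(256, 256)
    for t, L in gaussian_scale_space(test_image, [0, 1, 4, 16]).items():
        print(f"t = {t:>2}: value range [{L.min():.3f}, {L.max():.3f}]")
```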

Figure 1. Scale space and map generalization.

Analogously, cartographic generalization, or map generalization, is a process that simplifies the representation of geographical data to generate a map at a smaller scale from a larger scale map [42]. For example, the right panel of Figure 1 represents residential areas at different scales.
In this process, the details are abandoned and the general information is retained. As the essence of both processes is the same, this paper seeks to use rasterized map data and image processing methods based on scale space theory to simplify polygonal boundaries on a map.

3. Methodologies for Line Simplification

We use image processing techniques, such as corner detection and the calculation of the minimum-perimeter polygon, to simplify the polygonal boundary. The methods for simplifying lines can be divided into three steps: (1) corner detection based on scale space, to extract the local features of the objects from the raster data and ensure the positioning accuracy of the feature points; (2) calculation of a minimum-perimeter polygon (MPP), to simplify the overall shape of the objects; and (3) matching of candidate points and approximate feature points from the MPP. The detailed steps are as follows.

3.1. Classification and Detection of Corners

Typically, a point with two different and dominant edge directions in the local neighbourhood of this point can be considered a corner. The corners in images represent critical information in local image regions and describe local features of objects. A corner can be a point with locally maximal curvature on a curve, a line end, or a separate point with a local intensity minimum or maximum. The detection of corners can be successfully applied to many applications, such as object recognition, image matching or motion tracking [43–45]. As a result, there have been a number of studies on different approaches for corner detection [46–50]. As shown in Figure 2, corners can be divided into three types: acute and right corners, round corners, and obtuse corners. Figure 2d shows the curvature plot of a round corner in Figure 2b.

Figure 2. Classification of corner points.

In 1992, Rattarangsi and Chin presented a multi-scale approach for corner detection of planar curves based on curvature scale space (CSS) [51]. The method is based on Gaussian scale space, which is made up of the maxima of the absolute curvature of the boundary function presented at multiple scales. Because the method based on CSS performs well in maintaining invariant geometric features of the curves at different scales, Mokhtarian and Suomela developed two algorithms of corner detection based on CSS [52,53] for greyscale images. These methods based on CSS are effective for corner detection and are robust to noise. In this study, we developed an improved method of corner detection for polygonal boundaries. The definition of curvature K, from [54], is as follows:

K(u, σ) = [Ẋ(u, σ) Ÿ(u, σ) − Ẍ(u, σ) Ẏ(u, σ)] / [Ẋ(u, σ)² + Ẏ(u, σ)²]^1.5,   (3)

where Ẋ(u, σ) = x(u) ⊗ ġ(u, σ), Ẍ(u, σ) = x(u) ⊗ g̈(u, σ), Ẏ(u, σ) = y(u) ⊗ ġ(u, σ), Ÿ(u, σ) = y(u) ⊗ g̈(u, σ), and ⊗ represents the convolution operator, while g(u, σ) represents a Gaussian distribution of width σ, ġ(u, σ) is the first derivative of g(u, σ) and g̈(u, σ) is the second derivative of g(u, σ).

In traditional single-scale methods, such as references [46,48], the corners are detected by considering their local characteristics. In these cases, fine features could be missed, and some noise may be incorrectly interpreted as corners. The benefit of the CSS algorithms is that they take full advantage of global and local curvature characteristics, and they balance the influence of both types of characteristics in the process of extracting corners. To better extract the local features of the polygonal boundary from the images and ensure the positioning accuracy of feature points, we add the classification stage of corner points on the basis of CSS, and an improved method of corner detection based on CSS is proposed as follows:

(1) To obtain the binary edges, the method of Canny edge detection [55] is applied.
(2) Edge contours are extracted from the edge map as described in the CSS method.
(3) After the edge contours are extracted, the curvature of each contour is computed at a small scale to obtain the true corners. If the absolute curvature of a point is a local maximum, it can be treated as a corner candidate.
(4) Classification of corner points. According to the mean curvature within a region of support, an adaptive threshold is calculated. The round corners are marked by comparing the adaptive threshold with the curvature of the corner candidates. According to a dynamically recalculated region of support, the angles of the corner candidates should also be calculated, and any false corners will be eliminated. As a result, the corners can be divided into three types: acute and right corners, round corners, and obtuse corners.
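A minimal sketch of steps (1)–(3) is given below, with the curvature of Equation (3) obtained by filtering the contour coordinate functions with Gaussian derivatives. OpenCV 4.x and SciPy are assumed; the Canny thresholds, the window of the local-maximum test and the fixed curvature threshold are illustrative simplifications (the paper uses an adaptive threshold), and the classification of step (4) is omitted.

```python
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter1d

def contour_curvature(x, y, sigma):
    """Curvature K(u, sigma) of a closed contour (Equation (3)): the
    coordinate functions x(u), y(u) are convolved with the first and
    second derivatives of a Gaussian of width sigma."""
    xu  = gaussian_filter1d(x, sigma, order=1, mode="wrap")
    xuu = gaussian_filter1d(x, sigma, order=2, mode="wrap")
    yu  = gaussian_filter1d(y, sigma, order=1, mode="wrap")
    yuu = gaussian_filter1d(y, sigma, order=2, mode="wrap")
    return (xu * yuu - xuu * yu) / (xu ** 2 + yu ** 2 + 1e-12) ** 1.5

def corner_candidates(grey, sigma=3.0, canny_lo=50, canny_hi=150, k_thresh=0.05):
    """Steps (1)-(3): Canny edges, contour extraction, and local maxima
    of |K| as corner candidates. Step (4), the classification into
    acute/right, round and obtuse corners, is not reproduced here."""
    edges = cv2.Canny(grey, canny_lo, canny_hi)                         # step (1)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)               # step (2)
    candidates = []
    for c in contours:
        pts = c.reshape(-1, 2).astype(float)
        if len(pts) < 20:                                               # ignore tiny contours
            continue
        k = np.abs(contour_curvature(pts[:, 0], pts[:, 1], sigma))      # step (3)
        for i in range(len(k)):
            window = k[max(0, i - 3):i + 4]
            if k[i] >= window.max() and k[i] > k_thresh:                # local maximum of |K|
                candidates.append((pts[i, 0], pts[i, 1]))
    return candidates
```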

Figure 3 shows the results of corner detection with different angles. As shown in Figure 3, increasing the angle generates more corner points.

Figure 3. An example of corner detection. (a) Boundary extraction; (b) A = 60; (c) A = 80; (d) A = 120; (e) A = 160; (f) A = 175.

There are three important parameters in the proposed method based on CSS: (1) Ratio (R): R is used to detect round corners, and it represents the minimum ratio of the major axis to the minor axis of an ellipse; (2) Angle (A): A is used to detect the acute, right and obtuse angle corners, and it represents the maximum angle used when screening for corners; (3) Filter (F): F represents the standard deviation of the Gaussian filter during the process of computing curvature. In Figure 3, the default values are F = 3 and R = 1. Detailed information about the adaptive threshold and the parameter R can be found in reference [55].

3.2. Calculation of a Minimum-Perimeter Polygon

For a given boundary, the aim of the approximation is to obtain the essence of the shape with as few segments as possible. This problem deserves attention in general, and polygonal approximation methods of modest complexity are well suited to image-processing tasks. Among a variety of technologies, the minimum-perimeter polygon (MPP) [56] is one of the most powerful methods to represent a boundary. The following discussion gives the basic definitions of an MPP [57].

An MPP of a boundary represents a rasterized polygon with the shortest perimeter among all the polygons whose images coincide with the given boundary. To intuitively generate an algorithm to calculate MPPs, an appealing approach is to enclose a boundary with a set of concatenated cells, as shown in Figure 4a. The boundary can be treated as a rubber band that is allowed to shrink. The rubber band will be limited by the inner and outer walls of the bounding region defined by the cells, as shown in Figure 4b,c. Finally, the shape of a polygon with the minimum perimeter is created by shrinking the boundary, as shown in Figure 4d. All the vertices of the polygon are produced with the corners of either the inner or the outer wall of the grey region. For a given application, the goal is to use the largest possible cell size to produce MPPs with the fewest vertices. The detailed process of finding these MPP vertices is as follows.

Figure 4. Process of finding the MPP. (a) Boundary enclosed by cells; (b) convex (white dots) and concave (black dots) vertices; (c) the mirrors of all the concave vertices; (d) result of the MPP.

The cellular method just described reduces the shape of the original polygon, which is enclosed by the original boundary, to the area circumscribed by the grey wall, as shown in Figure 4a. This shape in dark grey is presented in Figure 4b, and its boundary is made up of 4-connected straight-line segments. Assuming that the boundary is traversed in a counter-clockwise direction, when a turning point appears in the traversal, it will be either a convex or a concave vertex, and the angle of a turning point is an interior angle of the 4-connected boundary. Figure 4b shows the convex and concave vertices with white and black dots, respectively. We see that each black (concave) vertex in the dark grey region corresponds to a mirror vertex in the light grey region, which is located diagonally opposite the black vertex. The mirrors of all the concave vertices are presented in Figure 4c. The vertices of the MPP consist of the convex vertices in the inner wall (white dots) and the mirrors of the concave vertices (black dots) in the outer wall.

The accuracy of the polygonal approximation is determined by the size of the cells (cellsize). For example, if cellsize = 1 pixel, then the size of one cell is 1 pixel × 1 pixel; if cellsize = 2 pixels, then the size of one cell is 2 pixels × 2 pixels. In a limiting case, when the size of each cell equals a pixel in the boundary, the error between the MPP approximation and the original boundary in each cell would be at most √2·s, where s represents the minimum possible distance between pixels. If each cell in the polygonal approximation is forced to be centred on its corresponding pixel in the original boundary, the error can be reduced by half. The detailed algorithm to find the vertices of the MPP can be found in reference [57].
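The classification of wall turning points into convex and concave vertices, which drives the MPP construction described above, reduces to an orientation (cross-product) test on consecutive edges. A minimal sketch follows, assuming the wall boundary is supplied as an ordered list of (x, y) corners traversed counter-clockwise; it implements only this classification primitive, not the full MPP algorithm of [57].

```python
def classify_turns(vertices):
    """Label each turning point of a counter-clockwise wall boundary as
    convex (candidate MPP vertex) or concave (to be mirrored to the
    outer wall), using the sign of the 2-D cross product of the
    incoming and outgoing edge vectors."""
    labels = []
    n = len(vertices)
    for k in range(n):
        ax, ay = vertices[k - 1]
        bx, by = vertices[k]
        cx, cy = vertices[(k + 1) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross > 0:
            labels.append("convex")      # left turn on a CCW boundary
        elif cross < 0:
            labels.append("concave")     # right turn
        else:
            labels.append("collinear")   # no turn: not a vertex of interest
    return labels

# Illustrative usage on a small CCW rectangle with a notch cut into its top edge.
if __name__ == "__main__":
    wall = [(0, 0), (4, 0), (4, 2), (3, 2), (3, 1), (2, 1), (2, 2), (0, 2)]
    print(list(zip(wall, classify_turns(wall))))
```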
Figure 5 shows typical examples. Figure 5a shows the result of boundary extraction. Figure 5b shows the result of corner detection. Figure 5c shows the minimum-perimeter polygon of the lake when cellsize = 6. Figure 5d shows the overlapping result between the corner points (marked with green) and the MPP points (marked with red). As we can see from Figure 5d, the MPP points are mostly not on the boundary, as shown in the red box. Furthermore, when the boundary is relatively flat, the distribution of MPP points is sparse, as shown in the green box. Although the MPP algorithm has a certain ability to simplify the overall shape of a polygon, it obviously cannot meet the requirements of cartographic generalization because of the uneven distribution and location error of its points. Conversely, although the corner points have good localization accuracy and show local characteristics, they cannot meet the requirements of cartographic generalization because of the lack of overall shape simplification. Therefore, in the following sections, we will take full advantage of these two methods to simplify the polygonal boundary.


Figure 5. Typical examples. (a) Boundary extraction; (b) corner detection; (c) minimum-perimeter polygon; (d) overlap of corner and MPP points.

3.3. Simplification of a Polygonal Boundary

To simplify the overall shape of the lake and ensure the localization accuracy of local feature points, we combine the advantages of the two algorithms and use the method of matching neighbouring points. The main steps are shown below.

(1) Remove the self-intersecting lines of the MPP

When the parameter cell size changes, it can easily produce self-intersecting lines in narrow zones, as shown in Figure 6a. Normally, the cross section is not visible because the cell size is too big. As shown in Figure 6b, the line segment composed of points p1, p2, p3, and p4 is self-intersecting. The cross section o-p3-p2 is not visible and should be removed. Thus, points p3 and p2 will be removed, and the intersection point o will be added to constitute a new polygon. In some instances, points p2 and o overlap, as shown in the upper right of Figure 6b.

Figure 6. Self-intersecting lines of the MPP.
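A minimal sketch of step (1): when two non-adjacent MPP edges cross, the vertices between them are dropped and the intersection point o is spliced in, as in Figure 6b (p2 and p3 are replaced by o). The helper names are illustrative, and the closing edge of the polygon is ignored for brevity.

```python
def segment_intersection(p, q, r, s):
    """Return the intersection point of segments pq and rs, or None if
    they do not cross (parallel segments are treated as non-crossing)."""
    d1x, d1y = q[0] - p[0], q[1] - p[1]
    d2x, d2y = s[0] - r[0], s[1] - r[1]
    denom = d1x * d2y - d1y * d2x
    if denom == 0:
        return None
    t = ((r[0] - p[0]) * d2y - (r[1] - p[1]) * d2x) / denom
    u = ((r[0] - p[0]) * d1y - (r[1] - p[1]) * d1x) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (p[0] + t * d1x, p[1] + t * d1y)
    return None

def remove_self_intersections(points):
    """Whenever two non-adjacent edges cross (e.g. p1-p2 and p3-p4 in
    Figure 6b), drop the vertices between them (p2, p3) and insert the
    intersection point o instead; repeat until no crossing remains."""
    pts = list(points)
    changed = True
    while changed:
        changed = False
        n = len(pts)
        for i in range(n - 3):
            for j in range(i + 2, n - 1):
                o = segment_intersection(pts[i], pts[i + 1], pts[j], pts[j + 1])
                if o is not None:
                    pts = pts[:i + 1] + [o] + pts[j + 1:]
                    changed = True
                    break
            if changed:
                break
    return pts
```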

(2) Filter corner points

The simplified feature points will be selected from the corner points that are in the neighbourhood of the MPP points after the removal of self-intersections. If the distance between a corner point and the MPP point is the shortest, that corner point will be selected. As shown in Figure 7, the red points are the MPP points after the removal of self-intersections, the green points are the corner points, and the blue points are the selected corner points that make up the simplified point set. However, the neighbourhood of an MPP point may contain multiple corner points with similar distances to the MPP point. Given that the features of a sharp area are more obvious than those of a flat area, the corner with a small angle in the neighbourhood of an MPP point will be selected preferentially. For example, points c6 and c7 are both in the neighbourhood of point p4, and the distance between c6 and p4 is close to the distance between c7 and p4. In this case, the simplified point set will be selected according to the types of the corner points. The priority of corner points for selection is as follows: point of acute (right) angle > point of rounded angle > point of obtuse angle. In Figure 7, the angle of point c6 is smaller than the angle of point c7, so point c6 is selected.

Figure 7. Filtering of corner points.

(3) Densification of simplified points

When the boundary is relatively flat, the distribution of MPP points is sparse, as shown in Figure 8. In this case, because the distance between two adjacent points is too great, to maintain the consistency of the distribution density of the points, the corner point in the neighbourhood of the midpoint of the line connecting the two adjacent points should be selected as the densification point. The yellow point in Figure 8 is the densification point. If there are several candidate corner points in the neighbourhood of the midpoint, the point should be selected according to the priority outlined in the second step.

Figure 8. Densification of simplified points.
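A minimal sketch of steps (2) and (3): every MPP point keeps its nearest corner point, with comparable distances broken by the corner-type priority (acute/right > round > obtuse), and sparse stretches are densified with the corner nearest to the midpoint of two adjacent selected points. The corner representation, the neighbourhood radius and the rounding used to detect "similar" distances are illustrative assumptions, not the paper's exact rules.

```python
import math

PRIORITY = {"acute_right": 0, "round": 1, "obtuse": 2}   # selection priority of step (2)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_corner(anchor, corners, radius):
    """Return the best corner within `radius` of `anchor`: primarily the
    closest one, and among corners at a similar distance the one with the
    higher type priority (and the smaller angle)."""
    nearby = [c for c in corners if dist(c["xy"], anchor) <= radius]
    if not nearby:
        return None
    return min(nearby, key=lambda c: (round(dist(c["xy"], anchor), 1),
                                      PRIORITY[c["type"]], c["angle"]))

def simplify(mpp_points, corners, radius, max_gap):
    """Step (2): match every MPP point to a corner. Step (3): if two
    adjacent selected points are farther apart than `max_gap`, add the
    corner nearest to the midpoint of the connecting line."""
    selected = []
    for p in mpp_points:
        c = pick_corner(p, corners, radius)
        if c is not None and c["xy"] not in selected:
            selected.append(c["xy"])
    densified = []
    for a, b in zip(selected, selected[1:] + selected[:1]):   # closed boundary
        densified.append(a)
        if dist(a, b) > max_gap:
            mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
            extra = pick_corner(mid, corners, radius)
            if extra is not None and extra["xy"] not in (a, b):
                densified.append(extra["xy"])
    return densified
```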

Figure 9 shows an example of simplifying a polygonal boundary using the method proposed in this paper. Figure 9a shows the original boundary of a lake. The number of pixels comprising the original boundary is 959. Figure 9b–d show the simplification results of the lake when cellsize = 2, cellsize = 4 and cellsize = 8, respectively. Cellsize is the side length (in pixels) of the square cells. As shown in the figures, with increasing cellsize, more details of the boundary are abandoned. The overall shape of the boundary is simplified as well, and the simplified points have good localization accuracy. Figure 9e,f show the calculation results of the different types of feature points when cellsize = 2 and cellsize = 4. The red points are the MPP points after the removal of self-intersections, the green points are the corner points, the blue points are the selected corner points that make up the simplified point set, and the yellow points are the densification points. With increasing cellsize, the number of MPP points after the removal of self-intersections decreases.

Figure 9. An example of simplifying a polygonal boundary. (a) Original boundary; (b) cellsize = 2; (c) cellsize = 4; (d) cellsize = 8; (e) feature points (cellsize = 2); (f) feature points (cellsize = 4).

Table 1 shows detailed information on parameter changes before and after simplification. With increasing cellsize, the numbers of MPP points and simplified points decrease, and the percent changes in length increase. The percent changes in area are relatively small.

Table 1. Detailed information before and after simplification.

Cellsize (pixel) | Number of Corner Points | Number of MPP Points | Number of Simplified Points | Length Change (%) | Area Change (%)
2 | 74 | 78 | 43 | −0.036 | 0.006
4 | 74 | 55 | 35 | −0.046 | −0.001
8 | 74 | 33 | 30 | −0.082 | −0.005
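The length and area changes reported in Table 1 can be recomputed from the closed vertex lists before and after simplification. A minimal sketch using the shoelace formula is shown below; the relative-change measure and the dummy boundaries are assumptions for illustration, not the paper's exact computation.

```python
import math

def perimeter(poly):
    """Perimeter of a closed polygon given as a list of (x, y) vertices."""
    return sum(math.dist(poly[i], poly[(i + 1) % len(poly)]) for i in range(len(poly)))

def area(poly):
    """Absolute area of a closed polygon via the shoelace formula."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def relative_change(before, after):
    """Signed relative change of a quantity: (after - before) / before."""
    return (after - before) / before

# Illustrative usage with a dummy original and simplified boundary.
original   = [(0, 0), (4, 0), (4, 1), (5, 1), (5, 3), (0, 3)]
simplified = [(0, 0), (4, 0), (5, 1), (5, 3), (0, 3)]
print(relative_change(perimeter(original), perimeter(simplified)),
      relative_change(area(original), area(simplified)))
```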

4. Experiments and Evaluations

4.1. Study Area

In this study, the experimental data that were used to simplify water areas at a large scale were obtained from the Tiandi map for China. We used data at the 17th level from the Tiandi map. The corresponding scale at the 17th level is 1:5000. The study area is located in the northeast part of Changshou District of Chongqing in China, as shown in Figure 10. Several lakes and a few rivers are present within this area. The study area is located between 29.90 and 29.99 degrees north latitude and between 107.12 and 107.23 degrees east longitude. The data at the 17th level consist of 4480 tiles, and the size of one tile is 256 by 256 pixels (1 pixel = 1.194 m). The coordinate range of these tiles is 43,700 ≤ x ≤ 43,769, 209,082 ≤ y ≤ 209,145. The image that is formed from these tiles contains 17,920 by 16,384 pixels.

Figure 10. Study area.


4.2. Results and Analysis

There are many difficulties when extracting water areas from the tile map. As the data are obtained from the Tile Map Server (TMS) on the Internet, each tile has a uniform resource locator (URL) and can be downloaded for free. Compared with the other geographic elements on a map, the water areas are usually rendered in blue. Thus, we can use a method of threshold segmentation to separate the water areas from the other geographical elements on a map. However, because of overlays with other geographical elements, such as annotations and roads, the extracted water areas from a map may contain obvious “holes” or fractures. Considering these two factors, there are three steps for extracting water areas from a tile map: (1) extracting water areas by colour segmentation; (2) connecting fractures in water areas; and (3) filling holes in water areas. The detailed explanations and demonstrations can be found in a previous work [58].

Figure 11 shows the experimental data after extraction. We removed the narrow rivers and retained the lakes. There were 22 lakes in total. We chose four lakes, as marked with red boxes in Figure 11, to display in detail. Figure 12 shows the results of simplification of the four lakes at different scales (1:10,000, 1:25,000, 1:50,000). Figure 12(a1,b1,c1,d1) show the calculation results of the different types of feature points when cellsize = 10. The red points are the MPP points after the removal of self-intersections, the green points are the corner points, the blue points are the selected corner points that make up the simplified point set, and the yellow points are the densification points.

Figure 11. Experimental data after extraction.
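A minimal sketch of the three extraction steps (colour segmentation, connecting fractures, filling holes), assuming OpenCV is available; the blue HSV range and the structuring-element size are illustrative values, not those used in the paper or in [58].

```python
import cv2
import numpy as np

def extract_water(tile_bgr):
    """Extract water areas from a rendered map tile.
    (1) colour segmentation of the blue water rendering;
    (2) morphological closing to reconnect fractures caused by roads
        and annotations drawn on top of the water;
    (3) filling of remaining interior holes via contour filling."""
    hsv = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2HSV)

    # (1) keep pixels in an (assumed) blue hue range
    mask = cv2.inRange(hsv, np.array([90, 40, 40]), np.array([130, 255, 255]))

    # (2) close small gaps left by overlaid annotations and roads
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # (3) fill holes by redrawing the outer contours as solid regions
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    filled = np.zeros_like(mask)
    cv2.drawContours(filled, contours, -1, color=255, thickness=cv2.FILLED)
    return filled
```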

Figure 12. Results of simplification at different scales. (a) Feature points (cellsize = 10), cellsize = 10, cellsize = 30, cellsize = 50; (b) feature points (cellsize = 10), cellsize = 10, cellsize = 30, cellsize = 50; (c) feature points (cellsize = 10), cellsize = 10, cellsize = 30, cellsize = 50; (d) feature points (cellsize = 10), cellsize = 10, cellsize = 30, cellsize = 50.

As shown in Figure 12, with an increase in cellsize, the simplifying effect on the boundary becomes more obvious. In the process of increasing the cellsize, small bends are gradually abandoned. As for the overall characteristics, the shape of the whole lake is maintained after simplification. As for the local characteristics, the most prominent feature points in the local area are maintained, and the position precision of the feature points is guaranteed. The number of all points before and after simplification is shown in Table 2.

Table 2. Number of points before and after simplification.

Object | Scale | Number of Corner Points | Number of MPP Points | Number of Simplified Points
a | 1:10,000 | 487 | 189 | 162
a | 1:25,000 | 487 | 75 | 67
a | 1:50,000 | 487 | 43 | 38
b | 1:10,000 | 198 | 86 | 72
b | 1:25,000 | 198 | 29 | 28
b | 1:50,000 | 198 | 18 | 17
c | 1:10,000 | 668 | 218 | 194
c | 1:25,000 | 668 | 88 | 80
c | 1:50,000 | 668 | 62 | 56
d | 1:10,000 | 315 | 101 | 85
d | 1:25,000 | 315 | 51 | 49
d | 1:50,000 | 315 | 35 | 34

As shown in Table 2, with an increase in cellsize, the number of simplified points decreases. Figure 13 shows the results of simplification of all the lakes at three different scales (1:10,000, 1:25,000, 1:50,000). The values of the cellsize at the three scales are cellsize = 10, cellsize = 30 and cellsize = 50, respectively. To obtain a good result, the default values of R, A and F can be set as R = 1.5, A = 175, and F = 3, respectively. We also used the DP and Wang and Muller [11] (WM) algorithms to conduct contrastive experiments. During the experiments, the original water areas are raster data. We use ArcGIS software (www.arcgis.com) for the conversion of the data format and registration of the scale. Given that the three methods are controlled by different parameters, it is difficult to strictly determine whether their results reflect the same map scale. By combining the rules for setting values from reference [1] and manual experience in cartographic generalization, the parameters of the DP and WM algorithms used in this study were set as shown in Table 4, which is relatively reasonable in this experiment.



Figure 13. Results of simplification of all the lakes. (a) Cellsize = 10; (b) cellsize = 30; (c) cellsize = 50.

Muller [59] proposed that v = 0.4 mm (size on the drawing) is the minimum value at which a feature is visually identifiable. Thus, we define the minimum visibility (mv) of a feature as follows: 0.4 mm ≤ mv ≤ 1.5 mm.


To ensure the best visual effect, we set the parameter value mv to be approximately 1 mm during the experiment. The corresponding relationship of scale, cellsize, and mv is shown in Table 3.

Table 3. Corresponding relationship of scale, cellsize, and mv.

Scale      Cellsize    mv         Original Resolution
1:10,000   10 pixels   1.194 mm   1 pixel = 1.194 m
1:25,000   30 pixels   1.432 mm   1 pixel = 1.194 m
1:50,000   50 pixels   1.194 mm   1 pixel = 1.194 m
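The mv values in Table 3 follow directly from the cellsize, the raster resolution, and the scale denominator. The following Python sketch illustrates this relation; the function name and the explicit visibility check are illustrative additions, not part of the original implementation.

```python
# A minimal sketch of the relation behind Table 3: the on-map size (mv, in mm)
# of a cellsize given in pixels, for a known raster resolution and map scale.
def map_visibility_mm(cellsize_px: int, resolution_m_per_px: float, scale_denom: int) -> float:
    ground_size_m = cellsize_px * resolution_m_per_px      # size of the cell on the ground
    return ground_size_m / scale_denom * 1000.0            # size on the drawing, in mm

for cellsize, scale in [(10, 10_000), (30, 25_000), (50, 50_000)]:
    mv = map_visibility_mm(cellsize, 1.194, scale)
    # Reproduces Table 3 up to rounding: 1.194 mm, ~1.433 mm (listed as 1.432 mm),
    # 1.194 mm, all within the visibility constraint 0.4 mm <= mv <= 1.5 mm.
    print(f"1:{scale:,}  cellsize={cellsize}px  mv={mv:.3f} mm  visible={0.4 <= mv <= 1.5}")
```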

Table 4 presents the comparisons of the three groups of experiments. Figure 13 shows the simplification results for all the lakes at three different scales; the values of the parameter cellsize at the three scales are cellsize = 10, cellsize = 30, and cellsize = 50, respectively. Figure 14 shows typical simplification results of the DP, WM, and BIP methods. Figures 15–17 show the comparison of changes in length, area and number of points at the three scales for the three methods, respectively. Figure 18 and Table 5 show the comparison of areal displacement at the three scales for the three methods.

Table 4. Detailed information of changes after simplification.

Methods   Scales     Changes in Perimeter Length (%)   Changes in Area (%)   Changes in the Number of Points (%)   Parameter Settings   Execution Time (s)
BIP       1:10,000   −0.047                            0.044                 −0.442                                10 pixels            4.3
DP        1:10,000   −0.045                            −0.008                −0.727                                6 m                  0.2
WM        1:10,000   −0.047                            0.007                 −0.346                                20 m                 0.8
BIP       1:25,000   −0.176                            −0.055                −0.774                                30 pixels            3.7
DP        1:25,000   −0.094                            −0.014                −0.844                                15 m                 0.1
WM        1:25,000   −0.119                            0.048                 −0.549                                40 m                 0.6
BIP       1:50,000   −0.253                            −0.103                −0.869                                50 pixels            2.8
DP        1:50,000   −0.133                            −0.052                −0.876                                25 m                 0.1
WM        1:50,000   −0.164                            0.068                 −0.645                                50 m                 0.3

From Figures 13–18 and Tables 4 and 5, we observe the following:

(1) The proposed BIP method can be effectively used for polygonal boundary simplification of water areas at different scales. The BIP method for the simplification of water areas aligns with people's visual processes. With an increase in visual distance, i.e., as the scale becomes smaller, the small parts of an object appear invisible. The invisible parts should be abandoned in the process of simplification. As shown in Figure 14, the black polygons are the results of simplification using the DP and WM algorithms, and the pink polygons are the results of simplification using the proposed BIP method. The smaller part of the polygon marked with the black dotted box can be removed using our method because this part is too small to be seen when the scale is small. The same areas are conserved by the DP and WM algorithms, and this type of simplification is unreasonable because we can hardly see these areas at the small scale.

(2) The changes in length resulting from the proposed BIP method are slightly larger than those resulting from the DP and WM methods. With a decrease in scale, the changes in length increase for all three methods. As shown in Table 4, the average per cent changes in the perimeters at scales 1:25,000 and 1:50,000 are −0.176 and −0.253, respectively, for the BIP method. The corresponding per cent changes for the DP method are −0.094 and −0.133, respectively. However, there is an interesting phenomenon in Figure 15, which shows the detailed changes in the perimeter of each object at the three scales. As shown in Figure 15, five lakes have remarkable changes in their perimeters because of the visual-process-based characteristics of the BIP method described in Section 2. In other words, removing the invisible parts leads to significant changes in the perimeters. The changes in the area between the original and simplified boundaries follow the same rule, as shown in Figure 16.

(3) Figure 17 shows the comparison of changes in the number of points. The average per cent change in the number of points of the proposed method is −0.442 at the largest scale. With an increase in cellsize, the per cent change increases in magnitude; the average per cent change in the number of points is −0.869 at the smallest scale. Overall, the point reduction ratio of the proposed method at the three scales is lower than that of the DP method, but higher than that of the WM method. Additionally, as shown in Table 4, since the BIP method uses image data to perform the simplification, the execution time of the proposed method is longer than that of the other two methods.

(4) We also use the areal displacement [60] to measure the location accuracy (a minimal computation sketch follows this list). As shown in Figure 18 and Table 5, the average areal displacement of the BIP method at the three scales is smaller than that of the DP method. In addition, the average areal displacement of the BIP method at the scale of 1:10,000 (1494 m²) is smaller than that of the WM method (1974 m²). However, the average areal displacement of the BIP method at the scales of 1:25,000 and 1:50,000 (7918 m² and 13,468 m², respectively) is larger than that of the WM method (7628 m² and 13,362 m², respectively). Thus, the location accuracy of the proposed method at the three scales is better than that of the DP method, but close to that of the WM method.
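For clarity, the sketch below illustrates how these evaluation measures can be computed for a single polygon, assuming areal displacement is taken as the area of the symmetric difference between the original and simplified boundaries (the detailed formulation in [60] may differ); the per cent changes follow the columns of Table 4, and the coordinates are hypothetical.

```python
# A minimal sketch of the evaluation measures, assuming areal displacement is the
# area of the symmetric difference between the original and simplified polygons.
from shapely.geometry import Polygon

def evaluation_metrics(original: Polygon, simplified: Polygon) -> dict:
    n_orig = len(original.exterior.coords) - 1       # vertex counts (closing point dropped)
    n_simp = len(simplified.exterior.coords) - 1
    return {
        "perimeter_change": (simplified.length - original.length) / original.length,
        "area_change": (simplified.area - original.area) / original.area,
        "point_change": (n_simp - n_orig) / n_orig,
        "areal_displacement_m2": original.symmetric_difference(simplified).area,
    }

# Usage with hypothetical rings (coordinates in metres):
orig = Polygon([(0, 0), (50, 8), (100, 0), (102, 60), (55, 70), (3, 62)])
simp = Polygon([(0, 0), (100, 0), (102, 60), (3, 62)])
print(evaluation_metrics(orig, simp))
```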

Figure 14. Comparison of the DP, WM and BIP methods. (a) A simplification example at different scales using the DP, WM (black line) and BIP (pink line) methods; (b) a typical example; (c) the lake No. 13.

Figure 15. Comparison of changes in length. (a) BIP; (b) DP; (c) WM.

Figure 16. Comparison of changes in area. (a) BIP; (b) DP; (c) WM.


Figure 17. Comparison of changes in the number of points. (a) BIP; (b) DP; (c) WM.

Figure 18. Comparison of areal displacement. (a) BIP; (b) DP; (c) WM.

Table 5. Average areal displacement of the three methods.

Scale      BIP (m²)   DP (m²)    WM (m²)
1:10,000   1494       2946       1974
1:25,000   7918       9570       7628
1:50,000   13,468     13,842     13,362

4.3. A Case of Hierarchical Simplification Using the BIP Method

As the proposed BIP method aligns with people's visual processes, it is very applicable to the simplification of water areas with hierarchical branches. Figure 19 shows the case of hierarchical simplification using the BIP method. Figure 19a is the original lake with detailed branch information. The experimental data that were used to simplify water areas at a small scale were obtained from the Baidu map of China. We used data at the 14th level from the Baidu map. The corresponding scale at the 14th level is 1:50,000. The original resolution of the lake is 1 pixel = 16 m. Figure 19b shows the simplification results after removing the branches in the first layer using the BIP method. Some typical simplification areas are marked with grey dotted ovals. As cellsize increases, the thicker branches will be removed. Figure 19c shows the simplification results after removing the branches in the second layer using the BIP method. According to the definition of the visibility constraint, the values cellsize = 9, cellsize = 33 and cellsize = 60 are used to generate the results at the scales of 1:150,000, 1:500,000, and 1:1,000,000, respectively.
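These cellsize values can be read as an inversion of the visibility relation introduced above. The following sketch is an illustration of that inversion only (the paper itself reports just the resulting values); with mv ≈ 1 mm and the 16 m/pixel data it gives approximately 9.4, 31.3 and 62.5 pixels, close to the cellsize = 9, 33 and 60 actually used.

```python
# A minimal sketch (an illustrative inversion of the visibility relation, not the
# authors' procedure): choosing a cellsize so that removed branches fall below the
# minimum visibility mv at the target scale.
def cellsize_for_scale(mv_mm: float, scale_denom: int, resolution_m_per_px: float) -> float:
    return (mv_mm / 1000.0) * scale_denom / resolution_m_per_px

for scale in (150_000, 500_000, 1_000_000):
    # Roughly recovers the cellsize values 9, 33 and 60 used in Figure 19.
    print(f"1:{scale:,} -> cellsize ~ {cellsize_for_scale(1.0, scale, 16.0):.1f} px")
```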

Figure 19. A case of hierarchical simplification using the BIP method. (a) Original lake; (b) removal of the branches in the first layer, cellsize = 9; (c) removal of the branches in the second layer, cellsize = 33; (d) cellsize = 60.


Figure 20 shows the changes in length (Figure 20a) and area (Figure 20b) with increasing cellsize. The default values of R, A and F are R = 1.5, A = 175, and F = 3, respectively. As shown in Figure 20a, the rate of change in length is relatively uniform when the cellsize is less than 30. When the cellsize is greater than 30, the rate of change begins to decrease. As shown in Figure 20b, the rate of change in area is relatively high when the cellsize is in the range of 10–30, but relatively gradual when the cellsize is in the ranges of 0–10 and 30–40. The values of the cellsize used for cutting off the branches are located within the gradual ranges. These values are cellsize = 9 (first layer) and cellsize = 33 (second layer).

Figure 20. Changes in length and area using the BIP method. (a) Length change; (b) area change.
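The cut-off cellsize values are thus chosen where the curves in Figure 20 change slowly. The sketch below illustrates one simple way to flag such gradual ranges from a sampled length curve; the sample data and the threshold are made up for demonstration and are not the experimental values.

```python
# A minimal sketch (hypothetical sample data standing in for the Figure 20 curves):
# flagging "gradual" cellsize ranges, i.e. where the per-pixel change in length is
# small, as candidate cut-off values for removing a branch layer.
import numpy as np

cellsizes = np.arange(0, 41, 5)
# Made-up boundary lengths (km) that drop quickly for cellsize 10-30 and slowly elsewhere.
lengths = np.array([40.0, 39.6, 39.2, 35.0, 30.5, 26.0, 24.8, 24.5, 24.3])

rates = np.abs(np.diff(lengths) / np.diff(cellsizes))   # rate of change per pixel of cellsize
threshold = 0.2                                          # illustrative cut-off
for c0, c1, r in zip(cellsizes[:-1], cellsizes[1:], rates):
    tag = "gradual" if r < threshold else "steep"
    print(f"cellsize {c0:2d}-{c1:2d}: rate {r:.2f} km/px -> {tag}")
```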

5. Conclusions

This paper proposes a new approach for line simplification using image processing techniques based on a raster map. In the traditional methods of line simplification, vector data are used as the primary experimental data. However, our study focuses on the rules of people's visual processes to simplify a water area boundary using the BIP method based on image data. The following conclusions can be drawn:

(1) The proposed BIP method can be effectively used for polygonal boundary simplification of water areas at different scales. By using the method of corner detection based on CSS, the local features of the water areas are maintained.

(2) The proposed BIP simplification method aligns with people's visual processes. As long, narrow parts (as shown in Figure 14a) are usually attached around water areas, the BIP method can remove the invisible long, narrow parts when the scale decreases through the definition of the minimum visibility (mv) of a feature. Therefore, this method is especially suitable for simplifying the boundary of water areas at larger scales.

(3) In comparison with the DP and WM methods, the length and area changes of the proposed method are slightly larger at the three scales; the maximum average percentage difference in length at the three scales is only 0.06%, and the maximum average percentage difference in area at the three scales is only 0.05%. The location accuracy resulting from the proposed BIP method is better than that from the DP method, but close to that of the WM method.

(4) Given that the proposed BIP method can more accurately remove the details that humans do not need at larger scales, the point reduction ratio of the proposed BIP method at the three scales is lower than that of the DP method, but higher than that of the WM method. Additionally, since the BIP method uses image data to perform the simplification, the execution time of the BIP method is longer than that of the other two methods.

In addition to water areas, we tried to apply the proposed BIP method to other line elements such as contour lines, buildings, or administrative boundaries. Apparently, due to the different characteristics of different types of line elements, the method cannot satisfy the needs of data generalization for all types of line elements, such as buildings with orthogonal characteristics. In the future, the proposed method should be improved and applied to other geographic line elements.


Acknowledgments: This research was supported by the National Key Research and Development Program of China (Grant No. 2017YFB0503500), the National High-Tech Research and Development Plan of China (Grant No. 2015AA1239012), and the National Natural Science Foundation of China (Grant No. 41531180).

Author Contributions: Yilang Shen and Tinghua Ai conceived and designed the study; Yilang Shen performed the experiments; Yilang Shen analyzed the results and wrote the paper; Yilang Shen, Tinghua Ai and Yakun He read and approved the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Ai, T.; Ke, S.; Yang, M.; Li, J. Envelope generation and simplification of polylines using Delaunay triangulation. Int. J. Geogr. Inf. Sci. 2017, 31, 297–319. [CrossRef]
2. Ratschek, H.; Rokne, J.; Leriger, M. Robustness in GIS algorithm implementation with application to line simplification. Int. J. Geogr. Inf. Sci. 2001, 15, 707–720. [CrossRef]
3. Mitropoulos, V.; Xydia, A.; Nakos, B.; Vescoukis, V. The Use of Epsilon-convex Area for Attributing Bends along A Cartographic Line. In Proceedings of the 22nd International Cartographic Conference, La Coruña, Spain, 9–16 July 2005.
4. Qian, H.; Zhang, M.; Wu, F. A new simplification approach based on the Oblique-Dividing-Curve method for contour lines. ISPRS Int. J. Geo-Inf. 2016, 5, 153. [CrossRef]
5. Ai, T. The drainage network extraction from contour lines for contour line generalization. ISPRS J. Photogramm. Remote Sens. 2007, 62, 93–103. [CrossRef]
6. Ai, T.; Zhou, Q.; Zhang, X.; Huang, Y.; Zhou, M. A simplification of ria coastline with geomorphologic characteristics reserved. Mar. Geodesy 2014, 37, 167–186. [CrossRef]
7. Lang, T. Rules for the robot draughtsmen. Geogr. Mag. 1969, 42, 50–51.
8. Douglas, D.H.; Peucker, T.K. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Can. Cartogr. 1973, 10, 112–122. [CrossRef]
9. Li, Z.L.; Openshaw, S. Algorithms for automated line generalization based on a natural principle of objective generalization. Int. J. Geogr. Inf. Syst. 1992, 6, 373–389. [CrossRef]
10. Visvalingam, M.; Whyatt, J.D. Line generalization by repeated elimination of points. Cartogr. J. 1993, 30, 46–51. [CrossRef]
11. Wang, Z.; Muller, J.C. Line generalization based on analysis of shape characteristics. Cartogr. Geogr. Inf. Sci. 1998, 25, 3–15. [CrossRef]
12. Jiang, B.; Nakos, B. Line Simplification Using Self-organizing Maps. In Proceedings of the ISPRS Workshop on Spatial Analysis and Decision Making, Hong Kong, China, 3–5 December 2003.
13. Zhou, S.; Jones, C. Shape-aware Line Generalisation with Weighted Effective Area. In Developments in Spatial Data Handling; Springer: Berlin, Germany, 2004; pp. 369–380.
14. Raposo, P. Scale-specific automated line simplification by vertex clustering on a hexagonal tessellation. Cartogr. Geogr. Inf. Sci. 2013, 40, 427–443. [CrossRef]
15. Gökgöz, T.; Sen, A.; Memduhoglu, A.; Hacar, M. A new algorithm for cartographic simplification of streams and lakes using deviation angles and error bands. ISPRS Int. J. Geo-Inf. 2015, 4, 2185–2204. [CrossRef]
16. Samsonov, T.E.; Yakimova, O.P. Shape-adaptive geometric simplification of heterogeneous line datasets. Int. J. Geogr. Inf. Sci. 2017, 31, 1485–1520. [CrossRef]
17. Dyken, C.; Dæhlen, M.; Sevaldrud, T. Simultaneous curve simplification. J. Geogr. Syst. 2009, 11, 273–289. [CrossRef]
18. Saalfeld, A. Topologically consistent line simplification with the Douglas-Peucker algorithm. Cartogr. Geogr. Inf. Sci. 1999, 26, 7–18. [CrossRef]
19. Mustafa, N.; Krishnan, S.; Varadhan, G.; Venkatasubramanian, S. Dynamic simplification and visualization of large maps. Int. J. Geogr. Inf. Sci. 2006, 20, 273–320. [CrossRef]
20. Nöllenburg, M.; Merrick, D.; Wolff, A.; Benkert, M. Morphing polylines: A step towards continuous generalization. Comput. Environ. Urban Syst. 2008, 32, 248–260. [CrossRef]
21. Park, W.; Yu, K. Generalization of Digital Topographic Map Using Hybrid Line Simplification. In Proceedings of the ASPRS 2010 Annual Conference, San Diego, CA, USA, 26–30 April 2010.

22. Ai, T.; Zhang, X. The Aggregation of Urban Building Clusters Based on the Skeleton Partitioning of Gap Space. In The European Information Society: Leading the Way with Geo-Information; Springer: Berlin/Heidelberg, Germany, 2007; pp. 153–170.
23. Stanislawski, L.V.; Raposo, P.; Howard, M.; Buttenfield, B.P. Automated Metric Assessment of Line Simplification in Humid Landscapes. In Proceedings of the AutoCarto 2012, Columbus, OH, USA, 16–18 September 2012.
24. Nakos, B.; Gaffuri, J.; Mustière, S. A Transition from Simplification to Generalisation of Natural Occurring Lines. In Proceedings of the 11th ICA Workshop on Generalisation and Multiple Representation, Montpelier, France, 20–21 June 2008.
25. Mustière, S. An Algorithm for Legibility Evaluation of Symbolized Lines. In Proceedings of the 4th InterCarto Conference, Barnaul, Russia, 1–4 July 1998; pp. 43–47.
26. De Bruin, S. Modelling positional uncertainty of line features by accounting for stochastic deviations from straight line segments. Trans. GIS 2008, 12, 165–177. [CrossRef]
27. Veregin, H. Quantifying positional error induced by line simplification. Int. J. Geogr. Inf. Sci. 2000, 14, 113–130. [CrossRef]
28. Stoter, J.; Zhang, X.; Stigmar, H.; Harrie, L. Evaluation in Generalisation. In Abstracting Geographic Information in a Data Rich World; Burghardt, D., Duchêne, C., Mackaness, W., Eds.; Springer: Cham, Switzerland, 2014; pp. 259–297.
29. Chrobak, T.; Szombara, S.; Kozoł, K.; Lupa, M. A method for assessing generalized data accuracy with linear object resolution verification. Geocarto Int. 2017, 32, 238–256. [CrossRef]
30. McMaster, R.B. The integration of simplification and smoothing algorithms in line generalization. Can. Cartogr. 1989, 26, 101–121. [CrossRef]
31. Bose, P.; Cabello, S.; Cheong, O.; Gudmundsson, J.; Van Kreveld, M.; Speckmann, B. Area-preserving approximations of polygonal paths. J. Discret. Algorithms 2006, 4, 554–566. [CrossRef]
32. Buchin, K.; Meulemans, W.; Renssen, A.V.; Speckmann, B. Area-preserving simplification and schematization of polygonal subdivisions. ACM Trans. Spat. Algorithms Syst. 2016, 2, 1–36. [CrossRef]
33. Tong, X.; Jin, Y.; Li, L.; Ai, T. Area-preservation simplification of polygonal boundaries by the use of the structured total least squares method with constraints. Trans. GIS 2015, 19, 780–799. [CrossRef]
34. Stum, A.K.; Buttenfield, B.P.; Stanislawski, L.V. Partial polygon pruning of hydrographic features in automated generalization. Trans. GIS 2017, 21, 1061–1078. [CrossRef]
35. Arcelli, C.; Ramella, G. Finding contour-based abstractions of planar patterns. Pattern Recognit. 1993, 26, 1563–1577. [CrossRef]
36. Kolesnikov, A.; Fränti, P. Polygonal approximation of closed discrete curves. Pattern Recognit. 2007, 40, 1282–1293. [CrossRef]
37. Prasad, D.K.; Leung, M.K.H. Polygonal representation of digital curves. In Digital Image Processing; Stefan, G.S., Ed.; InTech: Vienna, Austria, 2012; pp. 71–90.
38. Ray, K.S.; Ray, B.K. Polygonal approximation of digital curve based on reverse engineering concept. Int. J. Image Graph. 2013, 13, 1350017. [CrossRef]
39. Witkin, A.P. Scale space filtering. In Proceedings of the 8th International Joint Conference on Artificial Intelligence, Karlsruhe, Germany, 8–12 August 1983; pp. 1019–1022.
40. Lindeberg, T. Scale-space theory: A basic tool for analyzing structures at different scales. J. Appl. Stat. 1994, 21, 225–270. [CrossRef]
41. Babaud, J.; Witkin, A.P.; Baudin, M.; Duda, R.O. Uniqueness of the Gaussian kernel for scale-space filtering. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 1, 26–33. [CrossRef]
42. Li, Z. Digital map generalization at the age of the enlightenment: A review of the first forty years. Cartogr. J. 2007, 44, 80–93. [CrossRef]
43. Fung, G.S.K.; Yung, N.H.C.; Pang, G.K.H. Vehicle shape approximation from motion for visual traffic surveillance. In Proceedings of the IEEE 4th International Conference on Intelligent Transportation Systems, Oakland, CA, USA, 25–29 August 2001; pp. 608–613.
44. Serra, B.; Berthod, M. 3-D model localization using high-resolution reconstruction of monocular image sequences. IEEE Trans. Image Process. 1997, 61, 175–188. [CrossRef] [PubMed]

45. Manku, G.S.; Jain, P.; Aggarwal, A.; Kumar, A.; Banerjee, L. Object Tracking Using Affine Structure for Point Correspondence. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 17–19 June 1997; pp. 704–709.
46. Kitchen, L.; Rosenfeld, A. Gray level corner detection. Pattern Recognit. Lett. 1982, 1, 95–102. [CrossRef]
47. Smith, S.M.; Brady, J.M. SUSAN—A new approach to low-level image processing. Int. J. Comput. Vis. 1997, 23, 45–78. [CrossRef]
48. Rosenfeld, A.; Johnston, E. Angle detection on digital curves. IEEE Trans. Comput. 1973, 100, 875–878. [CrossRef]
49. Liu, H.C.; Srinath, M.D. Corner detection from chain-code. Pattern Recognit. 1990, 23, 51–68. [CrossRef]
50. Chabat, F.; Yang, G.Z.; Hansell, D.M. A corner orientation detector. Image Vis. Comput. 1999, 17, 761–769. [CrossRef]
51. Rattarangsi, A.; Chin, R.T. Scale-based detection of corners of planar curves. In Proceedings of the 10th International Conference on Pattern Recognition, Atlantic City, NJ, USA, 16–21 June 1990; pp. 923–930.
52. Mokhtarian, F.; Suomela, R. Robust image corner detection through curvature scale space. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1376–1381. [CrossRef]
53. Mokhtarian, F. Enhancing the Curvature Scale Space Corner Detector. In Proceedings of the Scandinavian Conference on Image Analysis, Bergen, Norway, 11–14 June 2001; pp. 145–152.
54. Mokhtarian, F.; Mackworth, A.K. A theory of multi-scale curvature-based shape representation for planar curves. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 789–805. [CrossRef]
55. He, X.C.; Yung, N.H. Corner detector based on global and local curvature properties. Opt. Eng. 2008, 47, 057008. [CrossRef]
56. Sklansky, J.; Chazin, R.L.; Hansen, B.J. Minimum-perimeter polygons of digitized silhouettes. IEEE Trans. Comput. 1972, 100, 260–268. [CrossRef]
57. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008; pp. 803–807.
58. Shen, Y.; Ai, T. A hierarchical approach for measuring the consistency of water areas between multiple representations of tile maps with different scales. ISPRS Int. J. Geo-Inf. 2017, 6, 240. [CrossRef]
59. Muller, J.C. Fractal and automated line generalization. Cartogr. J. 1987, 24, 27–34. [CrossRef]
60. McMaster, R.B. A statistical analysis of mathematical measures for linear simplification. Am. Cartogr. 1986, 13, 103–116. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).