Mesh Simplification by Curvature-Enhanced Quadratic Error Metrics

Abstract: Polygonal meshes play a significant role in computer graphics, design and manufacturing technology as a means of surface representation, and it is often necessary to reduce their complexity to save memory. An efficient algorithm for detail-retaining mesh simplification is proposed; in particular, the method presented is an iterative edge contraction algorithm based on the work of Garland and Heckbert. The original algorithm is improved by enhancing the quadric error metric with a penalizing factor based on discrete Gaussian curvature, estimated efficiently through the Gauss-Bonnet theorem, to account for the presence of fine details during the edge decimation process. Experimental results show that this new algorithm helps preserve the visually salient features of the model without compromising performance.


Introduction
Nowadays, due to ever-improving computer-aided modeling in both the film industry and manufacturing, it is easy to find ultra-detailed 3D models with millions of faces. Moreover, 3D scanning techniques often produce even larger files because of the redundancy of the sampled points. While in some cases this enormous resolution is necessary, in other scenarios, such as mobile applications or additive manufacturing, it can be beneficial to reduce the number of triangles in the mesh while still preserving the finer details.
One of the most common approaches to polygonal mesh decimation is iterative edge contraction: the algorithm repeatedly selects an edge and collapses it into a single vertex until the number of faces of the model is reduced to the desired quantity. Other well-known approaches are vertex decimation, first presented by Schroeder et al. (1997), which iteratively picks a vertex for removal, deletes its adjacent faces and re-triangulates the resulting hole, and vertex clustering, which merges vertices lying within some threshold distance of each other, as presented in Rossignac and Borrel (1993) and Hua et al. (2015). Edge contraction algorithms typically use an error function to decide how edges are collapsed, so the choice of this function leads to different results. Among the error metrics used in the literature, one of the best known and most widely used is the quadric error metric proposed by Garland and Heckbert (1997), which is fast and reliable but cannot account for the presence of key features. As later shown by Heckbert and Garland (1999), this method implicitly relies on the principal curvatures of the surface. Michaud et al. (2017) used an error metric based solely on the curvature of the surface, computed through algebraic spheres. While most of these approaches guarantee a small error in terms of distance from the original mesh, they often cannot preserve the most visually salient features of the model. Zhang et al. (2012) presented a way of taking relevant features into account by using the Laplacian operator as a measure of saliency, penalizing contractions in the most feature-rich areas. Li et al. (2010) proposed a feature-preserving simplification algorithm based on an absolute-curvature-weighted quadric error metric, although their choice of half-edge collapses can lead to suboptimal vertex positions (Garland and Heckbert, 1997). A simpler approach can be found in Yao et al. (2015), where the quadric error metric is modified through an additive term based on a custom-made curvature estimate. One of the main drawbacks of this approach, besides using a suboptimal curvature estimation method (Surazhsky et al., 2003), is that the additive term quickly becomes negligible as the simplification progresses.
Among other attempts at providing a reliable method to detect perceptually salient features of the mesh, Watanabe and Belyaev (2001) proposed a curvature-based ridge-finding algorithm and Lee et al. (2005) proposed a center-surround mechanism that detects important features based on curvature variation at different scales; in both papers the estimated importance is then used as a weight on the standard quadric error metric. More recently, Song et al. (2014) proposed a saliency extraction technique based on spectral analysis.
In this study, we present a mesh simplification algorithm that uses the quadric error metric for optimal vertex positioning and accounts for the discrete curvature of the model, computed through the Gauss-Bonnet scheme, in order to simplify flat and rounded regions more aggressively while maintaining a high triangle count in zones containing details and key features (spikes, sharp edges). The proposed method for determining the most visually salient areas is designed to be not only simple to implement but also extremely fast to execute compared to the core simplification routine, which can be a competitive advantage over more complex methods such as the one proposed by Lee et al. (2005).
The section Algorithm presentation first summarizes the edge contraction technique as well as the use of the quadric error metric. A simple mesh saliency estimation method is then described, together with its integration into the simplification process. Finally, pseudocode and an outline of the complexity analysis are given. In the section Results and discussion, the practical results of this method are demonstrated through a series of examples on benchmark models.

The STL Format
This paper focuses on the STL file format, one of the simplest and yet most widespread formats for representing meshes in 3D modelling and computer graphics. The surface of an object is represented, with obvious limits in resolution, as a list of triangular facets, each written in the form shown below (Szilvási-Nagy and Matyasi, 2003), where n is the normal of the facet, stored as the coordinates of its endpoint, and vi are the vertices. The STL file does not store any topological information about the way the triangles form the actual object: there are no data structures describing adjacency relations, nor any way to check on the fly whether a file represents a valid mesh. A major drawback of this representation is that each vertex is stored once for every triangle it belongs to, causing severe redundancy in the information stored in the file. Because of this organization, it is necessary to traverse the entire mesh at the beginning of the simplification process and store information about the topology of the model. The algorithm keeps track of the triangles and the edges adjacent to each vertex in dedicated data structures.
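For reference, a single triangular facet in an ASCII STL file follows the standard layout below (the binary variant stores the same normal and vertex coordinates as fixed-size records); the listing shows the generic STL syntax, not an excerpt from a specific model:

    facet normal nx ny nz
      outer loop
        vertex v1x v1y v1z
        vertex v2x v2y v2z
        vertex v3x v3y v3z
      endloop
    endfacet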

Edge Contraction
An edge contraction, which can be written as $(v_1, v_2) \to \bar{v}$, is the process of joining $v_1$ and $v_2$: the two vertices are removed, all their incident edges are connected to a new vertex $\bar{v}$, and all degenerate faces created in the process are removed, as shown in Fig. 1. If the mesh is manifold and closed, meaning that the infinitesimal neighborhood of every point is topologically equivalent to a disk, exactly two faces are removed at each step. Note that in the original paper of Garland and Heckbert (1997) the approach is slightly different: pairs of vertices are contracted even if there is no edge between them. While this method is effective in simplifying small disjoint models, it can produce non-manifold meshes which, especially in additive manufacturing, can lead to serious problems during the printing phase; for this reason, we decided to allow only edge contractions. The quality of the decimation algorithm lies in the solution adopted for two main problems: how to place the new vertex $\bar{v}$ and in which order to process the edges.
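As an illustration, the following minimal Java sketch shows the rewiring step of a single contraction, assuming the per-vertex adjacency sets built while reading the STL file; Vertex, Triangle and the helper methods are hypothetical names, not the paper's actual implementation:

    import java.util.*;

    // Contract edge (v1, v2) into vBar: reattach the incident triangles to vBar
    // and discard the triangles that become degenerate (two corners collapsed).
    static void contract(Vertex v1, Vertex v2, Vertex vBar,
                         Map<Vertex, Set<Triangle>> incident) {
        Set<Triangle> affected = new HashSet<>(incident.get(v1));
        affected.addAll(incident.get(v2));
        incident.put(vBar, new HashSet<>());

        for (Triangle t : affected) {
            t.replaceVertex(v1, vBar);          // no-op if v1 is not a corner of t
            t.replaceVertex(v2, vBar);
            if (t.isDegenerate()) {             // both v1 and v2 were corners of t
                for (Vertex w : t.vertices())
                    incident.get(w).remove(t);  // drop the collapsed face everywhere
            } else {
                incident.get(vBar).add(t);
            }
        }
        incident.remove(v1);
        incident.remove(v2);
    }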

Error Minimization
To solve the problems mentioned above, as presented in Garland and Heckbert (1997), the error at a vertex $v = (x_v, y_v, z_v, 1)^\top$ has to be properly defined. Let $\mathrm{planes}(v)$ be a set of planes associated with $v$, each defined by the equation $ax + by + cz + d = 0$ with $a^2 + b^2 + c^2 = 1$. Each plane is represented as $p = (a, b, c, d)^\top$. Recall that the signed distance from the point $v$ to the plane $p$ is $p^\top v$. The error at vertex $v$ is then defined as the sum of squared distances of the vertex from its planes:

$$\Delta(v) = \sum_{p \in \mathrm{planes}(v)} (p^\top v)^2.$$

Rearranging the expression above, the following is obtained:

$$\Delta(v) = \sum_{p \in \mathrm{planes}(v)} v^\top (p p^\top) v = v^\top \Big( \sum_{p \in \mathrm{planes}(v)} p p^\top \Big) v = v^\top Q_v v.$$

The error at vertex $v$ is thus the quadratic form $v^\top Q_v v$. In the starting mesh, $\mathrm{planes}(v)$ is initialized, for each vertex, as the set of planes of its adjacent faces. Note that the initial error at each vertex is 0, since it lies on all of its adjacent planes. In the contraction $(v_1, v_2) \to \bar{v}$, the set of planes of the resulting vertex is the union of the two starting ones. This can be approximated by using the rule $Q_{\bar{v}} = Q_{v_1} + Q_{v_2}$, even if this means double-counting some of the planes. During each contraction, the algorithm places $\bar{v}$ so that it minimizes the error of the resulting vertex, and the resulting mesh will be as close as possible to the initial one. Since the error is a quadratic form, the optimal position for $\bar{v}$ can easily be found by setting the partial derivatives of $\Delta$ with respect to $x$, $y$ and $z$ to zero; as stated in Garland and Heckbert (1997), this amounts to solving

$$\bar{v} = \begin{pmatrix} q_{11} & q_{12} & q_{13} & q_{14} \\ q_{12} & q_{22} & q_{23} & q_{24} \\ q_{13} & q_{23} & q_{33} & q_{34} \\ 0 & 0 & 0 & 1 \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix},$$

where $q_{ij}$ are the entries of $Q_{\bar{v}}$. If the matrix is not invertible, one can simply search for the minimum along the edge. This is thus a reliable method for choosing the position of the new vertex. Note that the error of the new vertex is also a good measure of the cost of an edge contraction; the algorithm therefore processes edges in order of increasing cost.
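The quadric bookkeeping itself is straightforward; the Java sketch below shows how the fundamental quadric of a plane is built, how quadrics are accumulated and how the error is evaluated. For brevity it uses the simplest fallback placement (endpoints and midpoint) instead of a full minimization along the edge; all names are illustrative:

    // p = (a, b, c, d) with a^2 + b^2 + c^2 = 1; v = (x, y, z, 1) in homogeneous form.
    static double[][] planeQuadric(double[] p) {         // p p^T
        double[][] q = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                q[i][j] = p[i] * p[j];
        return q;
    }

    static void addInPlace(double[][] q, double[][] r) { // Q_vbar = Q_v1 + Q_v2
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                q[i][j] += r[i][j];
    }

    static double error(double[][] q, double[] v) {      // v^T Q v
        double e = 0.0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                e += v[i] * q[i][j] * v[j];
        return e;
    }

    // Simplified fallback when the 4x4 system cannot be solved reliably:
    // evaluate the error at v1, v2 and their midpoint and keep the cheapest.
    static double[] fallbackPlacement(double[][] q, double[] v1, double[] v2) {
        double[] mid = { (v1[0] + v2[0]) / 2, (v1[1] + v2[1]) / 2,
                         (v1[2] + v2[2]) / 2, 1.0 };
        double[] best = v1;
        if (error(q, v2) < error(q, best)) best = v2;
        if (error(q, mid) < error(q, best)) best = mid;
        return best;
    }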

Mesh Saliency Estimation
While the error minimization technique presented in Garland and Heckbert (1997) yields extremely good results on uniform surfaces, it has no way of detecting the presence of detail in the mesh, so some of the features of the model can be washed away in the process. By taking into account the Gaussian curvature of the surface, the algorithm can penalize edges with larger absolute values of Gaussian curvature, increasing their cost so that they are processed later than edges in flatter and less detailed regions. The resulting mesh will have larger, coarser triangles where the surface is smooth and finer triangles where more detail is needed.
Note that, since the mesh is not a C2-differentiable surface, the curvature is not defined in the classical sense. The Gaussian curvature $K$ at a vertex can be approximated using the Gauss-Bonnet theorem, as described in Surazhsky et al. (2003). Assuming that $K$ is constant in a local neighborhood of the vertex, whose area $A$ can be approximated as the Voronoi area or the barycentric area around it (Fig. 2), one can solve for $K$:

$$K = \frac{1}{A}\Big(2\pi - \sum_{j} \theta_j\Big),$$

where $\theta_j$ are the angles at the vertex of its incident triangles. As stated in Surazhsky et al. (2003), this estimation is one of the best among the various approximation algorithms. This approach offers at the same time a precise approximation of the Gaussian curvature of the mesh and a very simple way to implement the curvature penalization; in fact, the curvature at each vertex can be computed very efficiently through a series of sums of angles and areas.
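A minimal Java sketch of this estimate follows, using the barycentric area (one third of the area of each incident triangle); the triangle fan is assumed to be given with the vertex of interest as the first corner of each triangle, and all names are illustrative:

    import java.util.*;

    // K = (2*pi - sum of incident angles) / A, with A the barycentric area.
    static double gaussianCurvature(double[] v, List<double[][]> fan) {
        double angleSum = 0.0, area = 0.0;
        for (double[][] t : fan) {                 // t = { v, a, b }
            double[] a = t[1], b = t[2];
            angleSum += angleAt(v, a, b);
            area += triangleArea(v, a, b) / 3.0;
        }
        return (2.0 * Math.PI - angleSum) / area;
    }

    static double angleAt(double[] v, double[] a, double[] b) {
        double[] u = sub(a, v), w = sub(b, v);
        return Math.acos(dot(u, w) / (norm(u) * norm(w)));
    }

    static double triangleArea(double[] p, double[] q, double[] r) {
        return 0.5 * norm(cross(sub(q, p), sub(r, p)));
    }

    static double[] sub(double[] a, double[] b) { return new double[]{ a[0]-b[0], a[1]-b[1], a[2]-b[2] }; }
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static double[] cross(double[] a, double[] b) {
        return new double[]{ a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
    }
    static double norm(double[] a) { return Math.sqrt(dot(a, a)); }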
Let $K_{v_1 v_2}$ be the curvature associated with the edge $(v_1, v_2)$, obtained by combining the precomputed curvatures $K_{v_1}$ and $K_{v_2}$ of its endpoints. Once the curvature of each vertex has been pre-calculated, one can scale the cost of each edge contraction by a function of $K_{v_1 v_2}$. Note that $K$ may vary very sharply across the mesh, so it is reasonable to remap it to $[0, 1]$. We found that good results can be obtained with a remapping controlled by a parameter $\alpha$ that tunes the effect of curvature; with $\alpha = +\infty$ the cost reduces to the standard quadric error. Figure 3 shows the plots of our estimate of the remapped curvature on two models often used in the mesh-processing literature.
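To make the mechanism concrete, the Java sketch below uses one possible choice of combination and remapping functions; the specific formulas are illustrative assumptions only, the constraints taken from the text being that the remapped value lies in [0, 1] and that the penalty vanishes for $\alpha = +\infty$, recovering the standard QEM cost:

    // Illustrative penalization: the specific formulas below are assumptions.
    // The remapped curvature lies in [0, 1) and goes to 0 as alpha -> infinity,
    // so the cost reduces to the plain quadric error for alpha = +inf.
    static double penalizedCost(double quadricError, double kV1, double kV2, double alpha) {
        double k = Math.abs(kV1) + Math.abs(kV2);     // combined edge curvature (assumption)
        double kMapped = 1.0 - Math.exp(-k / alpha);  // remap to [0, 1)
        return quadricError * (1.0 + kMapped);
    }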

Pseudocode
The algorithm can be summarized as follows:

1. Compute Q and K for every vertex.
2. For every edge, compute the best contraction vertex $\bar{v}$ and enqueue the pair using the resulting cost.
3. Poll an edge from the queue and get its adjacent triangles and edges.
4. Replace the edge with the best possible vertex, updating its matrix as well as the adjacent triangles and edges.
5. Return to step 3 until the desired size of the mesh is reached.
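A compact Java outline of steps 1-5 follows; Mesh, Edge, Vertex, EdgeCandidate and the helpers evaluate(...) and addQuadrics(...) are hypothetical names standing in for the actual data structures, and stale queue entries are simply skipped when polled:

    import java.util.*;

    static void simplify(Mesh mesh, int targetFaces) {
        PriorityQueue<EdgeCandidate> queue =
                new PriorityQueue<>(Comparator.comparingDouble((EdgeCandidate c) -> c.cost));
        for (Edge e : mesh.edges())
            queue.add(evaluate(e));                    // optimal vBar + curvature-weighted cost

        while (mesh.faceCount() > targetFaces && !queue.isEmpty()) {
            EdgeCandidate c = queue.poll();
            if (!mesh.contains(c.edge)) continue;      // stale entry: edge already collapsed
            Vertex vBar = mesh.contract(c.edge, c.position);  // rewire faces, drop degenerate ones
            vBar.quadric = addQuadrics(c.edge.v1.quadric, c.edge.v2.quadric);
            for (Edge e : mesh.edgesIncidentTo(vBar))
                queue.add(evaluate(e));                // refresh the costs around the new vertex
        }
    }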

Algorithm Analysis Outline
To analyze the running-time complexity of the above algorithm, the following assumptions are needed. Since all matrices are 4×4, operations such as sums and inversions can be considered O(1); indeed, many of these operations can be precomputed for actual constant-time performance. Moreover, the sets used to

Results and Discussion
Various tests were performed on a wide range of parameters, executing our mesh simplification routine on models widely used in the literature. A comparison between the models simplified with and without the curvature-based penalization follows.
Figure 4a shows the results of a reduction from 32 k to 6.4 k triangles, with α set to 150 on the left and to +∞ (the standard QEM error function) on the right. Note in Fig. 5 that while the triangles are coarser in the smoother zones of the model, in the more detailed ones, such as the eyes and nostrils, finer triangles lead to much greater detail compared to the standard method. A similar result is obtained when reducing down to 1.6 k triangles (Fig. 4b).
Similar results are obtained on other meshes, such as the models in Fig. 6 and 7. Note that the back of the bunny presents larger triangles while the ears and the face present a densification of vertices, increasing the level of detail maintained after the simplification; similarly, the smoother zones of the heart model present coarser triangles while the veins maintain a higher triangle count. Table 1 compares our method to the standard one in terms of the root mean squared error of the simplified models (20% of the original triangles), computed in Rhinoceros 6 with the "Mesh-Mesh Deviation" tool of "Rhino Open Projects" (Savio et al., 2013; https://www.food4rhino.com/app/rhinoopenprojects, accessed 07-21-2020), which measures the distance between the surface of the original mesh and that of the simplified one. It must be noted that this metric has no way of detecting the presence of detail and thus heavily penalizes the coarser triangles created in the smoother zones of the model. In order to assess the quality of the simplification of salient areas, the tests were also run on selected parts of the mesh, namely the head of the cow model and the veins of the heart model, which are shown in Fig. 5 and 7b. In general, the curvature penalization method yields a smaller deviation from the original mesh than the standard method. The advantage of our algorithm over the standard QEM one, in terms of root mean squared error of the simplified model, becomes even larger when considering only the visually salient areas. Moreover, one of the strengths of our approach is that the user can decide to what extent the curvature affects the overall result of the simplification: by tuning the parameter α, one can choose the result that best fits their needs.
The tests showed that the overhead of estimating the curvature through Gauss-Bonnet, due mainly to computing the angles between the edges of the triangles and to updating the values after each contraction, is minimal (Table 2). The tests were run on a Java implementation of the proposed method using an AMD FX-8320 processor; it has to be noted that the code we produced is for demonstration purposes only and thus lacks any form of optimization.
Experimental results thus show that this algorithm is capable of preserving the finer features of the mesh without impacting performance. This can be especially useful in research areas such as medicine or archaeology, where the models produced by 3D scanning techniques must retain even the smallest features.

Conclusion
In this study we presented a simple mesh saliency detection method based on discrete curvature estimation, which in turn is computed through the Gauss-Bonnet theorem. This measure of visual importance is then embedded into the simplification routine to preserve the features of the mesh. The algorithm presented offers a good improvement over the original one, adding an alternative approach to the existing ones. While the results obtained are encouraging, this algorithm still has room for improvement. Future work includes improving overall performance through better data structures for keeping track of the triangles and edges neighboring each vertex, avoiding the enqueuing of edges that will be removed before being processed, and implementing a better system to detect the formation of degenerate or non-manifold surfaces.