diff --git a/CHANGELOG.md b/CHANGELOG.md index e4fdf73..1ff87ed 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,9 +2,24 @@ The following changes are present in the `main` branch of the repository and are not yet part of a release: + - N/A + +## Version 0.10.0 + +This release implements ["Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids" (Löschner, Böttcher, Jeske, Bender; 2023)](https://animation.rwth-aachen.de/publication/0583/), mesh cleanup based on ["Mesh Displacement: An Improved Contouring Method for Trivariate Data" (Moore, Warren; 1991)](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.5214&rep=rep1&type=pdf) and a new, more efficient domain decomposition (see README.md for more details). + + - Lib: Implement new spatial decomposition based on a regular grid of subdomains (subdomains are dense marching cubes grids) - CLI: Make new spatial decomposition available in CLI with `--subdomain-grid=on` - - Lib: Implement new spatial decomposition based on a regular grid of subdomains, subdomains are dense marching cubes grids - - Lib: Support for reading and writing PLY meshes + - Lib: Implement weighted Laplacian smoothing to remove bumps from surfaces according to paper "Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids" (Löschner, Böttcher, Jeske, Bender 2023) + - CLI: Add arguments to enable and control weighted Laplacian smoothing `--mesh-smoothing-iters=...`, `--mesh-smoothing-weights=on` etc. + - Lib: Implement `marching_cubes_cleanup` function: a marching cubes "mesh cleanup" decimation inspired by "Mesh Displacement: An Improved Contouring Method for Trivariate Data" (Moore, Warren 1991) + - CLI: Add argument to enable mesh cleanup: `--mesh-cleanup=on` + - Lib: Add functions to `TriMesh3d` to find non-manifold edges and vertices + - CLI: Add arguments to check if output meshes are manifold (no non-manifold edges and vertices): `--mesh-check-manifold=on`, `--mesh-check-closed=on` + - Lib: Support for mixed triangle and quad meshes + - Lib: Implement `convert_tris_to_quads` function: greedily merge triangles to quads if they fulfill certain criteria (maximum angle in quad, "squareness" of the quad, angle between triangle normals) + - CLI: Add arguments to enable and control triangle to quad conversion with `--generate-quads=on` etc. + - Lib: Support for reading and writing PLY meshes (`MixedTriQuadMesh3d`) - CLI: Support for filtering input particles using an AABB with `--particle-aabb-min`/`--particle-aabb-max` - CLI: Support for clamping the triangle mesh using an AABB with `--mesh-aabb-min`/`--mesh-aabb-max` @@ -69,9 +84,9 @@ The following changes are present in the `main` branch of the repository and are This release fixes a couple of bugs that may lead to inconsistent surface reconstructions when using domain decomposition (i.e. reconstructions with artificial bumps exactly at the subdomain boundaries, especially on flat surfaces). Currently there are no other known bugs and the domain decomposed approach appears to be really fast and robust. -In addition the CLI now reports more detailed timing statistics for multi-threaded reconstructions. +In addition, the CLI now reports more detailed timing statistics for multi-threaded reconstructions. -Otherwise this release contains just some small changes to command line parameters. +Otherwise, this release contains just some small changes to command line parameters. 
- Lib: Add a `ParticleDensityComputationStrategy` enum to the `SpatialDecompositionParameters` struct. In order for domain decomposition to work consistently, the per-particle densities have to be evaluated to a consistent value between domains. This is especially important for the ghost particles. Previously, this resulted in inconsistent density values on boundaries if the ghost particle margin was not at least 2x the compact support radius (as this ensures that the inner ghost particles actually have the correct density). This option is still available as the `IndependentSubdomains` strategy. The preferred way, which avoids the 2x ghost particle margin, is the `SynchronizeSubdomains` strategy, where the density values of the particles in the subdomains are first collected into a global storage. This can be faster than the previous method as it avoids having to collect a double-width ghost particle layer. In addition, there is the "playing it safe" option, the `Global` strategy, where the particle densities are computed in a completely global step before any domain decomposition. This approach, however, is *really* slow for large quantities of particles. For more information, read the documentation on the `ParticleDensityComputationStrategy` enum. - Lib: Fix bug where the workspace storage was not cleared correctly, leading to inconsistent results depending on the sequence of processed subdomains @@ -91,7 +106,7 @@ Otherwise this release contains just some small changes to command line paramete The biggest new feature is a domain decomposed approach for the surface reconstruction by performing a spatial decomposition of the particle set with an octree. The resulting local patches can then be processed in parallel (leaving a single layer of boundary cells per patch untriangulated to avoid incompatible boundaries). -Afterwards, a stitching procedure walks the octree back upwards and merges the octree leaves by averaging density values on the boundaries. +Afterward, a stitching procedure walks the octree back upwards and merges the octree leaves by averaging density values on the boundaries. As the library uses task based parallelism, a task for stitching can be enqueued as soon as all children of an octree node are processed. Depending on the number of available threads and the particle data, this approach results in a speedup of 4-10x in comparison to the global parallel approach in selected benchmarks. At the moment, this domain decomposition approach is only available when allowing to parallelize over particles using the `--mt-particles` flag. diff --git a/CITATION.cff b/CITATION.cff index 2cca56a..af22017 100644 --- a/CITATION.cff +++ b/CITATION.cff @@ -2,7 +2,7 @@ # Visit https://bit.ly/cffinit to generate yours today! cff-version: 1.2.0 -title: splashsurf +title: '"splashsurf" Surface Reconstruction Software' message: >- If you use this software in your work, please consider citing it using these metadata. @@ -12,10 +12,10 @@ authors: given-names: Fabian affiliation: RWTH Aachen University orcid: 'https://orcid.org/0000-0001-6818-2953' -url: 'https://www.floeschner.de/splashsurf' +url: 'https://splashsurf.physics-simulation.org' abstract: >- Splashsurf is a surface reconstruction tool and framework for reconstructing surfaces from particle data.
license: MIT -version: 0.9.1 -date-released: '2023-04-19' +version: 0.10.0 +date-released: '2023-09-25' diff --git a/Cargo.lock b/Cargo.lock index cd452f9..3183845 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -10,9 +10,9 @@ checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe" [[package]] name = "aho-corasick" -version = "1.0.5" +version = "1.1.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0c378d78423fdad8089616f827526ee33c19f2fddbd5de1629152c9593ba4783" +checksum = "ea5d730647d4fadd988536d06fecce94b7b4f2a7efdae548f1cf4b63205518ab" dependencies = [ "memchr 2.6.3", ] @@ -148,9 +148,9 @@ checksum = "b4682ae6287fcf752ecaabbfcc7b6f9b72aa33933dc23a554d853aea8eea8635" [[package]] name = "bumpalo" -version = "3.13.0" +version = "3.14.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a3e2c3daef883ecc1b5d58c15adae93470a91d425f3532ba1695849656af3fc1" +checksum = "7f30e7476521f6f8af1a1c4c0b8cc94f0bee37d91763d0ca2665f299b6cd8aec" [[package]] name = "bytecount" @@ -172,7 +172,7 @@ checksum = "965ab7eb5f8f97d2a083c799f3a1b994fc397b2fe2da5d1da1626ce15a39f2b1" dependencies = [ "proc-macro2", "quote", - "syn 2.0.32", + "syn 2.0.37", ] [[package]] @@ -235,9 +235,9 @@ checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "chrono" -version = "0.4.30" +version = "0.4.31" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "defd4e7873dbddba6c7c91e199c7fcb946abc4a6a4ac3195400bcfb01b5de877" +checksum = "7f2c685bad3eb3d45a01354cedb7d5faa66194d1d58ba6e267a8de788f79db38" dependencies = [ "android-tzdata", "iana-time-zone", @@ -276,9 +276,9 @@ dependencies = [ [[package]] name = "clap" -version = "4.4.2" +version = "4.4.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6a13b88d2c62ff462f88e4a121f17a82c1af05693a2f192b5c38d14de73c19f6" +checksum = "b1d7b8d5ec32af0fadc644bf1fd509a688c2103b185644bb1e29d164e0703136" dependencies = [ "clap_builder", "clap_derive", @@ -286,9 +286,9 @@ dependencies = [ [[package]] name = "clap_builder" -version = "4.4.2" +version = "4.4.4" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2bb9faaa7c2ef94b2743a21f5a29e6f0010dff4caa69ac8e9d6cf8b6fa74da08" +checksum = "5179bb514e4d7c2051749d8fcefa2ed6d06a9f4e6d69faf3805f5d80b8cf8d56" dependencies = [ "anstream", "anstyle", @@ -305,7 +305,7 @@ dependencies = [ "heck", "proc-macro2", "quote", - "syn 2.0.32", + "syn 2.0.37", ] [[package]] @@ -390,16 +390,6 @@ version = "1.1.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7059fff8937831a9ae6f0fe4d658ffabf58f2ca96aa9dec1c889f936f705f216" -[[package]] -name = "crossbeam-channel" -version = "0.5.8" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "a33c2bf77f2df06183c3aa30d1e96c0695a313d4f9c453cc3762a6db39f99200" -dependencies = [ - "cfg-if", - "crossbeam-utils", -] - [[package]] name = "crossbeam-deque" version = "0.8.3" @@ -490,9 +480,9 @@ dependencies = [ [[package]] name = "fastrand" -version = "2.0.0" +version = "2.0.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "6999dc1837253364c2ebb0704ba97994bd874e8f195d665c50b7548f6ea92764" +checksum = "25cbce373ec4653f1a01a31e8a5e5ec0c622dc27ff9c4e6606eefef5cbbed4a5" [[package]] name = "fern" @@ -581,9 +571,9 @@ checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8" [[package]] name = "hermit-abi" -version = "0.3.2" +version = 
"0.3.3" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "443144c8cdadd93ebf52ddb4056d257f5b52c04d3c804e657d19eb73fc33668b" +checksum = "d77f7ec81a6d05a3abb01ab6eb7590f6083d08449fe5a1c8b1e620283546ccb7" [[package]] name = "iana-time-zone" @@ -610,9 +600,9 @@ dependencies = [ [[package]] name = "indicatif" -version = "0.17.6" +version = "0.17.7" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "0b297dc40733f23a0e52728a58fa9489a5b7638a324932de16b41adc3ef80730" +checksum = "fb28741c9db9a713d93deb3bb9515c20788cef5815265bee4980e87bde7e0f25" dependencies = [ "console", "instant", @@ -691,9 +681,9 @@ dependencies = [ [[package]] name = "libc" -version = "0.2.147" +version = "0.2.148" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "b4668fb0ea861c1df094127ac5f1da3409a82116a4ba74fca2e58ef927159bb3" +checksum = "9cdc71e17332e86d2e1d38c1f99edcb6288ee11b815fb1a4b049eaa2114d369b" [[package]] name = "libm" @@ -748,9 +738,9 @@ dependencies = [ [[package]] name = "matrixmultiply" -version = "0.3.7" +version = "0.3.8" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "090126dc04f95dc0d1c1c91f61bdd474b3930ca064c1edc8a849da2c6cbe1e77" +checksum = "7574c1cf36da4798ab73da5b215bbf444f50718207754cb522201d78d1cd0ff2" dependencies = [ "autocfg", "rawpointer", @@ -895,16 +885,6 @@ dependencies = [ "libm", ] -[[package]] -name = "num_cpus" -version = "1.16.0" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4161fcb6d602d4d2081af7c3a45852d875a03dd337a6bfdd6e06407b61342a43" -dependencies = [ - "hermit-abi", - "libc", -] - [[package]] name = "number_prefix" version = "0.4.0" @@ -1049,9 +1029,9 @@ checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de" [[package]] name = "proc-macro2" -version = "1.0.66" +version = "1.0.67" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "18fb31db3f9bddb2ea821cde30a9f70117e3f119938b5ee630b7403aa6e2ead9" +checksum = "3d433d9f1a3e8c1263d9456598b16fec66f4acc9a74dacffd35c7bb09b3a1328" dependencies = [ "unicode-ident", ] @@ -1134,9 +1114,9 @@ checksum = "60a357793950651c4ed0f3f52338f53b2f809f32d83a07f72909fa13e4c6c1e3" [[package]] name = "rayon" -version = "1.7.0" +version = "1.8.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1d2df5196e37bcc87abebc0053e20787d73847bb33134a69841207dd0a47f03b" +checksum = "9c27db03db7734835b3f53954b534c91069375ce6ccaa2e065441e07d9b6cdb1" dependencies = [ "either", "rayon-core", @@ -1144,14 +1124,12 @@ dependencies = [ [[package]] name = "rayon-core" -version = "1.11.0" +version = "1.12.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "4b8f95bd6966f5c87776639160a66bd8ab9895d9d4ab01ddba9fc60661aebe8d" +checksum = "5ce3fb6ad83f861aac485e76e1985cd109d9a3713802152be56c3b1f0e0658ed" dependencies = [ - "crossbeam-channel", "crossbeam-deque", "crossbeam-utils", - "num_cpus", ] [[package]] @@ -1214,9 +1192,9 @@ dependencies = [ [[package]] name = "rustix" -version = "0.38.13" +version = "0.38.14" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "d7db8590df6dfcd144d22afd1b83b36c21a18d7cbc1dc4bb5295a8712e9eb662" +checksum = "747c788e9ce8e92b12cd485c49ddf90723550b654b32508f979b71a7b1ecda4f" dependencies = [ "bitflags 2.4.0", "errno", @@ -1265,9 +1243,9 @@ dependencies = [ [[package]] name = "semver" -version = "1.0.18" +version = "1.0.19" source = 
"registry+https://github.com/rust-lang/crates.io-index" -checksum = "b0293b4b29daaf487284529cc2f5675b8e57c61f70167ba415a463651fd6a918" +checksum = "ad977052201c6de01a8ef2aa3378c4bd23217a056337d1d6da40468d267a4fb0" dependencies = [ "serde", ] @@ -1289,14 +1267,14 @@ checksum = "4eca7ac642d82aa35b60049a6eccb4be6be75e599bd2e9adb5f875a737654af2" dependencies = [ "proc-macro2", "quote", - "syn 2.0.32", + "syn 2.0.37", ] [[package]] name = "serde_json" -version = "1.0.106" +version = "1.0.107" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "2cc66a619ed80bf7a0f6b17dd063a84b88f6dea1813737cf469aef1d081142c2" +checksum = "6b420ce6e3d8bd882e9b243c6eed35dbc9a6110c9769e74b584e0d68d1f20c65" dependencies = [ "itoa", "ryu", @@ -1333,9 +1311,9 @@ dependencies = [ [[package]] name = "smallvec" -version = "1.11.0" +version = "1.11.1" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "62bb4feee49fdd9f707ef802e22365a35de4b7b299de4763d44bfea899442ff9" +checksum = "942b4a808e05215192e39f4ab80813e599068285906cc91aa64f923db842bd5a" [[package]] name = "spin" @@ -1425,9 +1403,9 @@ dependencies = [ [[package]] name = "syn" -version = "2.0.32" +version = "2.0.37" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "239814284fd6f1a4ffe4ca893952cdd93c224b6a1571c9a9eadd670295c0c9e2" +checksum = "7303ef2c05cd654186cb250d29049a24840ca25d2747c25c0381c8d9e2f582e8" dependencies = [ "proc-macro2", "quote", @@ -1464,7 +1442,7 @@ checksum = "49922ecae66cc8a249b77e68d1d0623c1b2c514f0060c27cdc68bd62a1219d35" dependencies = [ "proc-macro2", "quote", - "syn 2.0.32", + "syn 2.0.37", ] [[package]] @@ -1489,9 +1467,9 @@ dependencies = [ [[package]] name = "typenum" -version = "1.16.0" +version = "1.17.0" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "497961ef93d974e23eb6f433eb5fe1b7930b659f06d12dec6fc44a8f554c0bba" +checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825" [[package]] name = "ultraviolet" @@ -1513,15 +1491,15 @@ dependencies = [ [[package]] name = "unicode-ident" -version = "1.0.11" +version = "1.0.12" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "301abaae475aa91687eb82514b328ab47a211a533026cb25fc3e519b86adfc3c" +checksum = "3354b9ac3fae1ff6755cb6db53683adb661634f67557942dea4facebec0fee4b" [[package]] name = "unicode-width" -version = "0.1.10" +version = "0.1.11" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c0edd1e5b14653f783770bce4a4dabb4a5108a5370a5f5d8cfe8710c361f6c8b" +checksum = "e51733f11c9c4f72aa0c160008246859e340b00807569a0da0e7a1079b27ba85" [[package]] name = "utf8parse" @@ -1591,7 +1569,7 @@ dependencies = [ "once_cell", "proc-macro2", "quote", - "syn 2.0.32", + "syn 2.0.37", "wasm-bindgen-shared", ] @@ -1613,7 +1591,7 @@ checksum = "54681b18a46765f095758388f2d0cf16eb8d4169b639ab575a8f5693af210c7b" dependencies = [ "proc-macro2", "quote", - "syn 2.0.32", + "syn 2.0.37", "wasm-bindgen-backend", "wasm-bindgen-shared", ] @@ -1662,9 +1640,9 @@ checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" [[package]] name = "winapi-util" -version = "0.1.5" +version = "0.1.6" source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178" +checksum = "f29e6f9198ba0d26b4c9f07dbe6f9ed633e1f3d5b8b414090084349e46a52596" dependencies = [ "winapi", ] diff --git a/README.md b/README.md index 2abe987..8aea2dc 100644 --- a/README.md 
+++ b/README.md @@ -25,19 +25,27 @@ reconstructed from this particle data. The next image shows a reconstructed surface with a "smoothing length" of `2.2` times the particle radius and a cell size of `1.1` times the particle radius. The third image shows a finer reconstruction with a cell size of `0.45` times the particle radius. These surface meshes can then be fed into 3D rendering software such as [Blender](https://www.blender.org/) to generate beautiful water animations. -The result might look something like this (please excuse the lack of 3D rendering skills): +The result might look something like this:

Rendered water animation

+Note: This animation does not show the recently added smoothing features of the tool; for a more recent rendering, see [this video](https://youtu.be/2bYvaUXlBQs). + +--- + **Contents** - [The `splashsurf` CLI](#the-splashsurf-cli) - [Introduction](#introduction) + - [Domain decomposition](#domain-decomposition) + - [Octree-based decomposition](#octree-based-decomposition) + - [Subdomain grid-based decomposition](#subdomain-grid-based-decomposition) - [Notes](#notes) - [Installation](#installation) - [Usage](#usage) - [Recommended settings](#recommended-settings) + - [Weighted surface smoothing](#weighted-surface-smoothing) - [Benchmark example](#benchmark-example) - [Sequences of files](#sequences-of-files) - [Input file formats](#input-file-formats) @@ -53,34 +61,56 @@ The result might look something like this (please excuse the lack of 3D renderin - [The `convert` subcommand](#the-convert-subcommand) - [License](#license) + # The `splashsurf` CLI The following sections mainly focus on the CLI of `splashsurf`. For more information on the library, see the [corresponding readme](https://github.com/InteractiveComputerGraphics/splashsurf/blob/main/splashsurf_lib) in the `splashsurf_lib` subfolder or the [`splashsurf_lib` crate](https://crates.io/crates/splashsurf_lib) on crates.io. ## Introduction -This is a basic but high-performance implementation of a marching cubes based surface reconstruction for SPH fluid simulations (e.g performed with [SPlisHSPlasH](https://github.com/InteractiveComputerGraphics/SPlisHSPlasH)). +This is a CLI to run a fast marching cubes based surface reconstruction for SPH fluid simulations (e.g. performed with [SPlisHSPlasH](https://github.com/InteractiveComputerGraphics/SPlisHSPlasH)). The output of this tool is the reconstructed triangle surface mesh of the fluid. At the moment it supports computing normals on the surface using SPH gradients and interpolating scalar and vector particle attributes to the surface. -No additional smoothing or decimation operations are currently implemented. -As input, it supports reading particle positions from `.vtk`, `.bgeo`, `.ply`, `.json` and binary `.xyz` files (i.e. files containing a binary dump of a particle position array). -In addition, required parameters are the kernel radius and particle radius (to compute the volume of particles) used for the original SPH simulation as well as the surface threshold. - -By default, a domain decomposition of the particle set is performed using octree-based subdivision. -The implementation first computes the density of each particle using the typical SPH approach with a cubic kernel. -This density is then evaluated or mapped onto a sparse grid using spatial hashing in the support radius of each particle. -This implies that memory is only allocated in areas where the fluid density is non-zero. This is in contrast to a naive approach where the marching cubes background grid is allocated for the whole domain. -The marching cubes reconstruction is performed only in the narrowband of grid cells where the density values cross the surface threshold. Cells completely in the interior of the fluid are skipped. For more details, please refer to the [readme of the library]((https://github.com/InteractiveComputerGraphics/splashsurf/blob/main/splashsurf_lib/README.md)). +To get rid of the typical bumps from SPH simulations, it supports a weighted Laplacian smoothing approach [detailed below](#weighted-surface-smoothing).
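+A minimal invocation of the CLI, using only the basic reconstruction parameters, might look like this (the values shown are placeholder examples, matching those used further below in this README):
+```
+splashsurf reconstruct particles.vtk -r=0.025 -l=2.0 -c=0.75 -t=0.6
+```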
+As input, it supports reading particle positions from `.vtk`/`.vtu`, `.bgeo`, `.ply`, `.json` and binary `.xyz` files (i.e. files containing a binary dump of a particle position array). +Required parameters to perform a reconstruction are the kernel radius and particle radius (to compute the volume of particles) used for the original SPH simulation as well as the marching cubes resolution (a default iso-surface threshold is pre-configured). + +## Domain decomposition + +A naive dense marching cubes reconstruction allocating a full 3D array over the entire fluid domain quickly becomes infeasible for larger simulations. +Instead, one could use a global hashmap where only cubes that contain non-zero fluid density values are allocated. +This approach is used in `splashsurf` if domain decomposition is disabled completely. +However, the global hashmap approach does not lead to good cache locality and is not well suited for parallelization (even specialized parallel map implementations like [`dashmap`](https://github.com/xacrimon/dashmap) have their performance limitations). +To improve on this situation, `splashsurf` currently implements two domain decomposition approaches. + +### Octree-based decomposition +The octree-based decomposition is currently the default approach if no other option is specified but will probably be replaced by the grid-based approach described below. +For the octree-based decomposition, an octree is built over all particles with an automatically determined target number of particles per leaf node. +For each leaf node, a hashmap is used as outlined above. +As each hashmap is smaller, cache locality is improved, and due to the decomposition, each thread can work on its own local hashmap. Finally, all surface patches are stitched together by walking the octree back up, resulting in a closed surface. +Downsides of this approach are that the octree construction (starting from the root) and the stitching (walking back towards the root) limit the amount of parallelism during some stages. + +### Subdomain grid-based decomposition + +Since version 0.10.0, `splashsurf` implements a new domain decomposition approach called the "subdomain grid" approach, toggled with the `--subdomain-grid=on` flag. +Here, the goal is to divide the fluid domain into subdomains with a fixed number of marching cubes cells, by default `64x64x64` cubes. +For each subdomain, a dense 3D array is allocated for the marching cubes cells. +Of course, only subdomains that contain fluid particles are actually allocated. +For subdomains that contain only a very small number of fluid particles (less than 5% of the largest subdomain), a hashmap is used instead to not waste too much storage. +As most subdomains are dense, however, the marching cubes triangulation per subdomain is very fast as it can make full use of cache locality, and the entire procedure is trivially parallelizable. +For the stitching, we ensure that we perform floating point operations in the same order at the subdomain boundaries (this can be ensured without synchronization). +If the field values on the subdomain boundaries are identical from both sides, the marching cubes triangulations will be topologically compatible and can be merged in a post-processing step that is also parallelizable. +Overall, this approach should almost always be faster than the previous octree-based approach. + ## Notes -For small numbers of fluid particles (i.e.
in the low thousands or less) the multithreaded implementation may have worse performance due to the task based parallelism and the additional overhead of domain decomposition and stitching. +For small numbers of fluid particles (i.e. in the low thousands or less) the domain decomposition implementation may have worse performance due to the task-based parallelism and the additional overhead of domain decomposition and stitching. In this case, you can try to disable the domain decomposition. The reconstruction will then use a global approach that is parallelized using thread-local hashmaps. For larger quantities of particles the decomposition approach is expected to be always faster. Due to the use of hash maps and multi-threading (if enabled), the output of this implementation is not deterministic. -In the future, flags may be added to switch the internal data structures to use binary trees for debugging purposes. As shown below, the tool can handle the output of large simulations. However, it was not tested with a wide range of parameters and may not be totally robust against corner-cases or extreme parameters. @@ -103,6 +133,29 @@ Good settings for the surface reconstruction depend on the original simulation a - `surface-threshold`: a good value depends on the selected `particle-radius` and `smoothing-length` and can be used to counteract a fluid volume increase e.g. due to a larger particle radius. In combination with the other recommended values a threshold of `0.6` seemed to work well. - `cube-size` usually should not be chosen larger than `1.0` to avoid artifacts (e.g. single particles decaying into rhomboids), start with a value in the range of `0.75` to `0.5` and decrease/increase it if the result is too coarse or the reconstruction takes too long. +### Weighted surface smoothing +The CLI implements the paper ["Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids" (Löschner, Böttcher, Jeske, Bender; 2023)](https://animation.rwth-aachen.de/publication/0583/), which proposes a fast smoothing approach to avoid typical bumpy surfaces while preventing the loss of volume that typically occurs with simple smoothing methods. +The following images show a rendering of a typical surface reconstruction (on the right) with visible bumps due to the particles, compared to the same surface reconstruction with weighted smoothing applied (on the left): + +

+Image of the original surface reconstruction without smoothing (bumpy & rough) Image of the surface reconstruction with weighted smoothing applied (nice & smooth) +

+ +You can see this rendering in motion in [this video](https://youtu.be/2bYvaUXlBQs). +To apply this smoothing, we recommend the following settings: + - `--mesh-smoothing-weights=on`: This enables the use of special weights during the smoothing process that preserve fluid details. For more information we refer to the [paper](https://animation.rwth-aachen.de/publication/0583/). + - `--mesh-smoothing-iters=25`: This enables smoothing of the output mesh. The individual iterations are relatively fast and 25 iterations appeared to strike a good balance between an initially bumpy surface and potential over-smoothing. + - `--mesh-cleanup=on`/`--decimate-barnacles=on`: One of the two options should be used when applying smoothing, otherwise artifacts can appear on the surface (for more details see the paper). The `mesh-cleanup` flag enables a general-purpose marching cubes mesh cleanup procedure that removes small sliver triangles everywhere on the mesh. The `decimate-barnacles` flag enables a more targeted decimation that only removes specific triangle configurations that are problematic for the smoothing. The former approach results in a "nicer" mesh overall but can be slower than the latter. + - `--normals-smoothing-iters=10`: If normals are being exported (with `--normals=on`), this results in an even smoother appearance during rendering. + +For the reconstruction parameters in conjunction with the weighted smoothing, we recommend parameters close to the simulation parameters. +That means selecting the same particle radius as in the simulation, a corresponding smoothing length (e.g. for SPlisHSPlasH a value of `2.0`), a surface-threshold between `0.6` and `0.7`, and a cube size usually between `0.5` and `1.0`. + +A full invocation of the tool might look like this: +``` +splashsurf reconstruct particles.vtk -r=0.025 -l=2.0 -c=0.5 -t=0.6 --subdomain-grid=on --mesh-cleanup=on --mesh-smoothing-weights=on --mesh-smoothing-iters=25 --normals=on --normals-smoothing-iters=10 +``` + ### Benchmark example For example: ``` @@ -179,9 +232,19 @@ Note that the tool collects all existing filenames as soon as the command is inv The first and last file of a sequence that should be processed can be specified with the `-s`/`--start-index` and/or `-e`/`--end-index` arguments. By specifying the flag `--mt-files=on`, several files can be processed in parallel. -If this is enabled, you should ideally also set `--mt-particles=off` as enabling both will probably degrade performance. +If this is enabled, you should also set `--mt-particles=off` as enabling both will probably degrade performance. The combination of `--mt-files=on` and `--mt-particles=off` can be faster if many files with only a few particles have to be processed. +The number of threads can be influenced using the `--num-threads`/`-n` argument or the `RAYON_NUM_THREADS` environment variable. + +**NOTE:** Currently, some functions do not have a sequential implementation and always parallelize over the particles or the mesh/domain. +This includes: + - the new "subdomain-grid" domain decomposition approach, as an alternative to the previous octree-based approach + - some post-processing functionality (interpolation of smoothing weights, interpolation of normals & other fluid attributes) + +Using the `--mt-particles=off` argument does not have an effect on these parts of the surface reconstruction. +For now, it is therefore recommended to not parallelize over multiple files if this functionality is used.
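+Where parallelizing over files is appropriate (e.g. many small files and none of the functionality mentioned in the note above), an invocation might look like this (a minimal sketch; the `fluid_{}.vtk` placeholder pattern is an assumption about how the input sequence is named, while the remaining flags are documented in this README):
+```
+splashsurf reconstruct fluid_{}.vtk -r=0.025 -l=2.0 -c=0.5 -t=0.6 -s=1 -e=100 --mt-files=on --mt-particles=off -n=8
+```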
+ ## Input file formats ### VTK @@ -242,16 +305,18 @@ The file format is inferred from the extension of output filename. ### The `reconstruct` command ``` -splashsurf-reconstruct (v0.9.3) - Reconstruct a surface from particle data +splashsurf-reconstruct (v0.10.0) - Reconstruct a surface from particle data Usage: splashsurf reconstruct [OPTIONS] --particle-radius --smoothing-length --cube-size Options: + -q, --quiet Enable quiet mode (no output except for severe panic messages), overrides verbosity level + -v... Print more verbose output, use multiple "v"s for even more verbose output (-v, -vv) -h, --help Print help -V, --version Print version Input/output: - -o, --output-file Filename for writing the reconstructed surface to disk (default: "{original_filename}_surface.vtk") + -o, --output-file Filename for writing the reconstructed surface to disk (supported formats: VTK, PLY, OBJ, default: "{original_filename}_surface.vtk") --output-dir Optional base directory for all output files (default: current working directory) -s, --start-index Index of the first input file to process when processing a sequence of files (default: lowest index of the sequence) -e, --end-index Index of the last input file to process when processing a sequence of files (default: highest index of the sequence) @@ -268,39 +333,79 @@ Numerical reconstruction parameters: The cube edge length used for marching cubes in multiples of the particle radius, corresponds to the cell size of the implicit background grid -t, --surface-threshold The iso-surface threshold for the density, i.e. the normalized value of the reconstructed density level that indicates the fluid surface (in multiples of the rest density) [default: 0.6] --domain-min + --particle-aabb-min Lower corner of the domain where surface reconstruction should be performed (requires domain-max to be specified) --domain-max + --particle-aabb-max Upper corner of the domain where surface reconstruction should be performed (requires domain-min to be specified) Advanced parameters: - -d, --double-precision= Whether to enable the use of double precision for all computations [default: off] [possible values: off, on] - --mt-files= Flag to enable multi-threading to process multiple input files in parallel [default: off] [possible values: off, on] - --mt-particles= Flag to enable multi-threading for a single input file by processing chunks of particles in parallel [default: on] [possible values: off, on] + -d, --double-precision= Enable the use of double precision for all computations [default: off] [possible values: off, on] + --mt-files= Enable multi-threading to process multiple input files in parallel (NOTE: Currently, the subdomain-grid domain decomposition approach and some post-processing functions including interpolation do not have sequential versions and therefore do not work well with this option enabled) [default: off] [possible values: off, on] + --mt-particles= Enable multi-threading for a single input file by processing chunks of particles in parallel [default: on] [possible values: off, on] -n, --num-threads Set the number of threads for the worker thread pool -Octree (domain decomposition) parameters: +Domain decomposition (octree or grid) parameters: + --subdomain-grid= Enable spatial decomposition using a regular grid-based approach [default: off] [possible values: off, on] + --subdomain-cubes Each subdomain will be a cube consisting of this number of MC cube cells along each coordinate axis [default: 64] --octree-decomposition= - Whether to enable
spatial decomposition using an octree (faster) instead of a global approach [default: on] [possible values: off, on] + Enable spatial decomposition using an octree (faster) instead of a global approach [default: on] [possible values: off, on] --octree-stitch-subdomains= - Whether to enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed) [default: on] [possible values: off, on] + Enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed) [default: on] [possible values: off, on] --octree-max-particles The maximum number of particles for leaf nodes of the octree, default is to compute it based on the number of threads and particles --octree-ghost-margin-factor Safety factor applied to the kernel compact support radius when it's used as a margin to collect ghost particles in the leaf nodes when performing the spatial decomposition --octree-global-density= - Whether to compute particle densities in a global step before domain decomposition (slower) [default: off] [possible values: off, on] + Enable computing particle densities in a global step before domain decomposition (slower) [default: off] [possible values: off, on] --octree-sync-local-density= - Whether to compute particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended). Note: if both this and global particle density computation is disabled the ghost particle margin has to be increased to at least 2.0 to compute correct density values for ghost particles [default: on] [possible values: off, on] + Enable computing particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended). Note: if both this and global particle density computation is disabled the ghost particle margin has to be increased to at least 2.0 to compute correct density values for ghost particles [default: on] [possible values: off, on] -Interpolation: +Interpolation & normals: --normals= - Whether to compute surface normals at the mesh vertices and write them to the output file [default: off] [possible values: off, on] + Enable computing surface normals at the mesh vertices and write them to the output file [default: off] [possible values: off, on] --sph-normals= - Whether to compute the normals using SPH interpolation (smoother and more true to actual fluid surface, but slower) instead of just using area weighted triangle normals [default: on] [possible values: off, on] + Enable computing the normals using SPH interpolation instead of using the area weighted triangle normals [default: off] [possible values: off, on] + --normals-smoothing-iters Number of smoothing iterations to run on the normal field if normal interpolation is enabled (disabled by default) + --output-raw-normals= Enable writing raw normals without smoothing to the output mesh if normal smoothing is enabled [default: off] [possible values: off, on] --interpolate-attributes List of point attribute field names from the input file that should be interpolated to the reconstructed surface.
Currently this is only supported for VTK and VTU input files. +Postprocessing: + --mesh-cleanup= + Enable MC specific mesh decimation/simplification which removes bad quality triangles typically generated by MC [default: off] [possible values: off, on] + --decimate-barnacles= + Enable decimation of some typical bad marching cubes triangle configurations (resulting in "barnacles" after Laplacian smoothing) [default: off] [possible values: off, on] + --keep-verts= + Enable keeping vertices without connectivity during decimation instead of filtering them out (faster and helps with debugging) [default: off] [possible values: off, on] + --mesh-smoothing-iters + Number of smoothing iterations to run on the reconstructed mesh + --mesh-smoothing-weights= + Enable feature weights for mesh smoothing if mesh smoothing is enabled. Preserves isolated particles even under strong smoothing [default: off] [possible values: off, on] + --mesh-smoothing-weights-normalization + Normalization value from weighted number of neighbors to mesh smoothing weights [default: 13.0] + --output-smoothing-weights= + Enable writing the smoothing weights as a vertex attribute to the output mesh file [default: off] [possible values: off, on] + --generate-quads= + Enable trying to convert triangles to quads if they meet quality criteria [default: off] [possible values: off, on] + --quad-max-edge-diag-ratio + Maximum allowed ratio of quad edge lengths to its diagonals to merge two triangles to a quad (inverse is used for minimum) [default: 1.75] + --quad-max-normal-angle + Maximum allowed angle (in degrees) between triangle normals to merge them to a quad [default: 10] + --quad-max-interior-angle + Maximum allowed vertex interior angle (in degrees) inside of a quad to merge two triangles to a quad [default: 135] + --mesh-aabb-min + Lower corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-max to be specified) + --mesh-aabb-max + Upper corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-min to be specified) + --mesh-aabb-clamp-verts= + Enable clamping of vertices outside of the specified mesh AABB to the AABB (only has an effect if mesh-aabb-min/max are specified) [default: off] [possible values: off, on] + --output-raw-mesh= + Enable writing the raw reconstructed mesh before applying any post-processing steps [default: off] [possible values: off, on] + Debug options: --output-dm-points Optional filename for writing the point cloud representation of the intermediate density map to disk @@ -309,7 +414,13 @@ Debug options: --output-octree Optional filename for writing the octree used to partition the particles to disk --check-mesh= - Whether to check the final mesh for topological problems such as holes (note that when stitching is disabled this will lead to a lot of reported problems) [default: off] [possible values: off, on] + Enable checking the final mesh for holes and non-manifold edges and vertices [default: off] [possible values: off, on] + --check-mesh-closed= + Enable checking the final mesh for holes [default: off] [possible values: off, on] + --check-mesh-manifold= + Enable checking the final mesh for non-manifold edges and vertices [default: off] [possible values: off, on] + --check-mesh-debug= + Enable debug output for the check-mesh operations (has no effect if no other check-mesh option is enabled) [default: off] [possible values: off, on] ``` ### The `convert` subcommand @@ -343,6 +454,6 @@ Options: # 
License -For license information of this project, see the LICENSE file. +For license information of this project, see the [LICENSE](LICENSE) file. The splashsurf logo is based on two graphics ([1](https://www.svgrepo.com/svg/295647/wave), [2](https://www.svgrepo.com/svg/295652/surfboard-surfboard)) published on SVG Repo under a CC0 ("No Rights Reserved") license. The dragon model shown in the images on this page are part of the ["Stanford 3D Scanning Repository"](https://graphics.stanford.edu/data/3Dscanrep/). diff --git a/data/icosphere.obj b/data/icosphere.obj new file mode 100644 index 0000000..b374c99 --- /dev/null +++ b/data/icosphere.obj @@ -0,0 +1,126 @@ +# Blender 3.5.0 +# www.blender.org +o Icosphere +v 0.000000 -1.000000 0.000000 +v 0.723607 -0.447220 0.525725 +v -0.276388 -0.447220 0.850649 +v -0.894426 -0.447216 0.000000 +v -0.276388 -0.447220 -0.850649 +v 0.723607 -0.447220 -0.525725 +v 0.276388 0.447220 0.850649 +v -0.723607 0.447220 0.525725 +v -0.723607 0.447220 -0.525725 +v 0.276388 0.447220 -0.850649 +v 0.894426 0.447216 0.000000 +v 0.000000 1.000000 0.000000 +v -0.162456 -0.850654 0.499995 +v 0.425323 -0.850654 0.309011 +v 0.262869 -0.525738 0.809012 +v 0.850648 -0.525736 0.000000 +v 0.425323 -0.850654 -0.309011 +v -0.525730 -0.850652 0.000000 +v -0.688189 -0.525736 0.499997 +v -0.162456 -0.850654 -0.499995 +v -0.688189 -0.525736 -0.499997 +v 0.262869 -0.525738 -0.809012 +v 0.951058 0.000000 0.309013 +v 0.951058 0.000000 -0.309013 +v 0.000000 0.000000 1.000000 +v 0.587786 0.000000 0.809017 +v -0.951058 0.000000 0.309013 +v -0.587786 0.000000 0.809017 +v -0.587786 0.000000 -0.809017 +v -0.951058 0.000000 -0.309013 +v 0.587786 0.000000 -0.809017 +v 0.000000 0.000000 -1.000000 +v 0.688189 0.525736 0.499997 +v -0.262869 0.525738 0.809012 +v -0.850648 0.525736 0.000000 +v -0.262869 0.525738 -0.809012 +v 0.688189 0.525736 -0.499997 +v 0.162456 0.850654 0.499995 +v 0.525730 0.850652 0.000000 +v -0.425323 0.850654 0.309011 +v -0.425323 0.850654 -0.309011 +v 0.162456 0.850654 -0.499995 +s 0 +f 1 14 13 +f 2 14 16 +f 1 13 18 +f 1 18 20 +f 1 20 17 +f 2 16 23 +f 3 15 25 +f 4 19 27 +f 5 21 29 +f 6 22 31 +f 2 23 26 +f 3 25 28 +f 4 27 30 +f 5 29 32 +f 6 31 24 +f 7 33 38 +f 8 34 40 +f 9 35 41 +f 10 36 42 +f 11 37 39 +f 39 42 12 +f 39 37 42 +f 37 10 42 +f 42 41 12 +f 42 36 41 +f 36 9 41 +f 41 40 12 +f 41 35 40 +f 35 8 40 +f 40 38 12 +f 40 34 38 +f 34 7 38 +f 38 39 12 +f 38 33 39 +f 33 11 39 +f 24 37 11 +f 24 31 37 +f 31 10 37 +f 32 36 10 +f 32 29 36 +f 29 9 36 +f 30 35 9 +f 30 27 35 +f 27 8 35 +f 28 34 8 +f 28 25 34 +f 25 7 34 +f 26 33 7 +f 26 23 33 +f 23 11 33 +f 31 32 10 +f 31 22 32 +f 22 5 32 +f 29 30 9 +f 29 21 30 +f 21 4 30 +f 27 28 8 +f 27 19 28 +f 19 3 28 +f 25 26 7 +f 25 15 26 +f 15 2 26 +f 23 24 11 +f 23 16 24 +f 16 6 24 +f 17 22 6 +f 17 20 22 +f 20 5 22 +f 20 21 5 +f 20 18 21 +f 18 4 21 +f 18 19 4 +f 18 13 19 +f 13 3 19 +f 16 17 6 +f 16 14 17 +f 14 1 17 +f 13 15 3 +f 13 14 15 +f 14 2 15 diff --git a/data/icosphere_normals.obj b/data/icosphere_normals.obj new file mode 100644 index 0000000..5fadb4f --- /dev/null +++ b/data/icosphere_normals.obj @@ -0,0 +1,206 @@ +# Blender 3.5.0 +# www.blender.org +o Icosphere +v 0.000000 -1.000000 0.000000 +v 0.723607 -0.447220 0.525725 +v -0.276388 -0.447220 0.850649 +v -0.894426 -0.447216 0.000000 +v -0.276388 -0.447220 -0.850649 +v 0.723607 -0.447220 -0.525725 +v 0.276388 0.447220 0.850649 +v -0.723607 0.447220 0.525725 +v -0.723607 0.447220 -0.525725 +v 0.276388 0.447220 -0.850649 +v 0.894426 0.447216 0.000000 +v 0.000000 1.000000 0.000000 
+v -0.162456 -0.850654 0.499995 +v 0.425323 -0.850654 0.309011 +v 0.262869 -0.525738 0.809012 +v 0.850648 -0.525736 0.000000 +v 0.425323 -0.850654 -0.309011 +v -0.525730 -0.850652 0.000000 +v -0.688189 -0.525736 0.499997 +v -0.162456 -0.850654 -0.499995 +v -0.688189 -0.525736 -0.499997 +v 0.262869 -0.525738 -0.809012 +v 0.951058 0.000000 0.309013 +v 0.951058 0.000000 -0.309013 +v 0.000000 0.000000 1.000000 +v 0.587786 0.000000 0.809017 +v -0.951058 0.000000 0.309013 +v -0.587786 0.000000 0.809017 +v -0.587786 0.000000 -0.809017 +v -0.951058 0.000000 -0.309013 +v 0.587786 0.000000 -0.809017 +v 0.000000 0.000000 -1.000000 +v 0.688189 0.525736 0.499997 +v -0.262869 0.525738 0.809012 +v -0.850648 0.525736 0.000000 +v -0.262869 0.525738 -0.809012 +v 0.688189 0.525736 -0.499997 +v 0.162456 0.850654 0.499995 +v 0.525730 0.850652 0.000000 +v -0.425323 0.850654 0.309011 +v -0.425323 0.850654 -0.309011 +v 0.162456 0.850654 -0.499995 +vn 0.1024 -0.9435 0.3151 +vn 0.7002 -0.6617 0.2680 +vn -0.2680 -0.9435 0.1947 +vn -0.2680 -0.9435 -0.1947 +vn 0.1024 -0.9435 -0.3151 +vn 0.9050 -0.3304 0.2680 +vn 0.0247 -0.3304 0.9435 +vn -0.8897 -0.3304 0.3151 +vn -0.5746 -0.3304 -0.7488 +vn 0.5346 -0.3304 -0.7779 +vn 0.8026 -0.1256 0.5831 +vn -0.3066 -0.1256 0.9435 +vn -0.9921 -0.1256 -0.0000 +vn -0.3066 -0.1256 -0.9435 +vn 0.8026 -0.1256 -0.5831 +vn 0.4089 0.6617 0.6284 +vn -0.4713 0.6617 0.5831 +vn -0.7002 0.6617 -0.2680 +vn 0.0385 0.6617 -0.7488 +vn 0.7240 0.6617 -0.1947 +vn 0.2680 0.9435 -0.1947 +vn 0.4911 0.7947 -0.3568 +vn 0.4089 0.6617 -0.6284 +vn -0.1024 0.9435 -0.3151 +vn -0.1876 0.7947 -0.5773 +vn -0.4713 0.6617 -0.5831 +vn -0.3313 0.9435 -0.0000 +vn -0.6071 0.7947 -0.0000 +vn -0.7002 0.6617 0.2680 +vn -0.1024 0.9435 0.3151 +vn -0.1876 0.7947 0.5773 +vn 0.0385 0.6617 0.7488 +vn 0.2680 0.9435 0.1947 +vn 0.4911 0.7947 0.3568 +vn 0.7240 0.6617 0.1947 +vn 0.8897 0.3304 -0.3151 +vn 0.7947 0.1876 -0.5773 +vn 0.5746 0.3304 -0.7488 +vn -0.0247 0.3304 -0.9435 +vn -0.3035 0.1876 -0.9342 +vn -0.5346 0.3304 -0.7779 +vn -0.9050 0.3304 -0.2680 +vn -0.9822 0.1876 -0.0000 +vn -0.9050 0.3304 0.2680 +vn -0.5346 0.3304 0.7779 +vn -0.3035 0.1876 0.9342 +vn -0.0247 0.3304 0.9435 +vn 0.5746 0.3304 0.7488 +vn 0.7947 0.1876 0.5773 +vn 0.8897 0.3304 0.3151 +vn 0.3066 0.1256 -0.9435 +vn 0.3035 -0.1876 -0.9342 +vn 0.0247 -0.3304 -0.9435 +vn -0.8026 0.1256 -0.5831 +vn -0.7947 -0.1876 -0.5773 +vn -0.8897 -0.3304 -0.3151 +vn -0.8026 0.1256 0.5831 +vn -0.7947 -0.1876 0.5773 +vn -0.5746 -0.3304 0.7488 +vn 0.3066 0.1256 0.9435 +vn 0.3035 -0.1876 0.9342 +vn 0.5346 -0.3304 0.7779 +vn 0.9921 0.1256 -0.0000 +vn 0.9822 -0.1876 -0.0000 +vn 0.9050 -0.3304 -0.2680 +vn 0.4713 -0.6617 -0.5831 +vn 0.1876 -0.7947 -0.5773 +vn -0.0385 -0.6617 -0.7488 +vn -0.4089 -0.6617 -0.6284 +vn -0.4911 -0.7947 -0.3568 +vn -0.7240 -0.6617 -0.1947 +vn -0.7240 -0.6617 0.1947 +vn -0.4911 -0.7947 0.3568 +vn -0.4089 -0.6617 0.6284 +vn 0.7002 -0.6617 -0.2680 +vn 0.6071 -0.7947 -0.0000 +vn 0.3313 -0.9435 -0.0000 +vn -0.0385 -0.6617 0.7488 +vn 0.1876 -0.7947 0.5773 +vn 0.4713 -0.6617 0.5831 +s 0 +f 1//1 14//1 13//1 +f 2//2 14//2 16//2 +f 1//3 13//3 18//3 +f 1//4 18//4 20//4 +f 1//5 20//5 17//5 +f 2//6 16//6 23//6 +f 3//7 15//7 25//7 +f 4//8 19//8 27//8 +f 5//9 21//9 29//9 +f 6//10 22//10 31//10 +f 2//11 23//11 26//11 +f 3//12 25//12 28//12 +f 4//13 27//13 30//13 +f 5//14 29//14 32//14 +f 6//15 31//15 24//15 +f 7//16 33//16 38//16 +f 8//17 34//17 40//17 +f 9//18 35//18 41//18 +f 10//19 36//19 42//19 +f 11//20 37//20 39//20 +f 39//21 42//21 12//21 +f 39//22 37//22 42//22 +f 
37//23 10//23 42//23 +f 42//24 41//24 12//24 +f 42//25 36//25 41//25 +f 36//26 9//26 41//26 +f 41//27 40//27 12//27 +f 41//28 35//28 40//28 +f 35//29 8//29 40//29 +f 40//30 38//30 12//30 +f 40//31 34//31 38//31 +f 34//32 7//32 38//32 +f 38//33 39//33 12//33 +f 38//34 33//34 39//34 +f 33//35 11//35 39//35 +f 24//36 37//36 11//36 +f 24//37 31//37 37//37 +f 31//38 10//38 37//38 +f 32//39 36//39 10//39 +f 32//40 29//40 36//40 +f 29//41 9//41 36//41 +f 30//42 35//42 9//42 +f 30//43 27//43 35//43 +f 27//44 8//44 35//44 +f 28//45 34//45 8//45 +f 28//46 25//46 34//46 +f 25//47 7//47 34//47 +f 26//48 33//48 7//48 +f 26//49 23//49 33//49 +f 23//50 11//50 33//50 +f 31//51 32//51 10//51 +f 31//52 22//52 32//52 +f 22//53 5//53 32//53 +f 29//54 30//54 9//54 +f 29//55 21//55 30//55 +f 21//56 4//56 30//56 +f 27//57 28//57 8//57 +f 27//58 19//58 28//58 +f 19//59 3//59 28//59 +f 25//60 26//60 7//60 +f 25//61 15//61 26//61 +f 15//62 2//62 26//62 +f 23//63 24//63 11//63 +f 23//64 16//64 24//64 +f 16//65 6//65 24//65 +f 17//66 22//66 6//66 +f 17//67 20//67 22//67 +f 20//68 5//68 22//68 +f 20//69 21//69 5//69 +f 20//70 18//70 21//70 +f 18//71 4//71 21//71 +f 18//72 19//72 4//72 +f 18//73 13//73 19//73 +f 13//74 3//74 19//74 +f 16//75 17//75 6//75 +f 16//76 14//76 17//76 +f 14//77 1//77 17//77 +f 13//78 15//78 3//78 +f 13//79 14//79 15//79 +f 14//80 2//80 15//80 diff --git a/data/plane.obj b/data/plane.obj new file mode 100644 index 0000000..98e138e --- /dev/null +++ b/data/plane.obj @@ -0,0 +1,1245 @@ +# Blender 3.5.0 +# www.blender.org +o Grid +v -1.000000 0.000000 1.000000 +v -0.900000 0.000000 1.000000 +v -0.800000 0.000000 1.000000 +v -0.700000 0.000000 1.000000 +v -0.600000 0.000000 1.000000 +v -0.500000 0.000000 1.000000 +v -0.400000 0.000000 1.000000 +v -0.300000 0.000000 1.000000 +v -0.200000 0.000000 1.000000 +v -0.100000 0.000000 1.000000 +v 0.000000 0.000000 1.000000 +v 0.100000 0.000000 1.000000 +v 0.200000 0.000000 1.000000 +v 0.300000 0.000000 1.000000 +v 0.400000 0.000000 1.000000 +v 0.500000 0.000000 1.000000 +v 0.600000 0.000000 1.000000 +v 0.700000 0.000000 1.000000 +v 0.800000 0.000000 1.000000 +v 0.900000 0.000000 1.000000 +v 1.000000 0.000000 1.000000 +v -1.000000 0.000000 0.900000 +v -0.900000 0.000000 0.900000 +v -0.800000 0.000000 0.900000 +v -0.700000 0.000000 0.900000 +v -0.600000 0.000000 0.900000 +v -0.500000 0.000000 0.900000 +v -0.400000 0.000000 0.900000 +v -0.300000 0.000000 0.900000 +v -0.200000 0.000000 0.900000 +v -0.100000 0.000000 0.900000 +v 0.000000 0.000000 0.900000 +v 0.100000 0.000000 0.900000 +v 0.200000 0.000000 0.900000 +v 0.300000 0.000000 0.900000 +v 0.400000 0.000000 0.900000 +v 0.500000 0.000000 0.900000 +v 0.600000 0.000000 0.900000 +v 0.700000 0.000000 0.900000 +v 0.800000 0.000000 0.900000 +v 0.900000 0.000000 0.900000 +v 1.000000 0.000000 0.900000 +v -1.000000 0.000000 0.800000 +v -0.900000 0.000000 0.800000 +v -0.800000 0.000000 0.800000 +v -0.700000 0.000000 0.800000 +v -0.600000 0.000000 0.800000 +v -0.500000 0.000000 0.800000 +v -0.400000 0.000000 0.800000 +v -0.300000 0.000000 0.800000 +v -0.200000 0.000000 0.800000 +v -0.100000 0.000000 0.800000 +v 0.000000 0.000000 0.800000 +v 0.100000 0.000000 0.800000 +v 0.200000 0.000000 0.800000 +v 0.300000 0.000000 0.800000 +v 0.400000 0.000000 0.800000 +v 0.500000 0.000000 0.800000 +v 0.600000 0.000000 0.800000 +v 0.700000 0.000000 0.800000 +v 0.800000 0.000000 0.800000 +v 0.900000 0.000000 0.800000 +v 1.000000 0.000000 0.800000 +v -1.000000 0.000000 0.700000 +v -0.900000 0.000000 0.700000 +v -0.800000 0.000000 
0.700000 +v -0.700000 0.000000 0.700000 +v -0.600000 0.000000 0.700000 +v -0.500000 0.000000 0.700000 +v -0.400000 0.000000 0.700000 +v -0.300000 0.000000 0.700000 +v -0.200000 0.000000 0.700000 +v -0.100000 0.000000 0.700000 +v 0.000000 0.000000 0.700000 +v 0.100000 0.000000 0.700000 +v 0.200000 0.000000 0.700000 +v 0.300000 0.000000 0.700000 +v 0.400000 0.000000 0.700000 +v 0.500000 0.000000 0.700000 +v 0.600000 0.000000 0.700000 +v 0.700000 0.000000 0.700000 +v 0.800000 0.000000 0.700000 +v 0.900000 0.000000 0.700000 +v 1.000000 0.000000 0.700000 +v -1.000000 0.000000 0.600000 +v -0.900000 0.000000 0.600000 +v -0.800000 0.000000 0.600000 +v -0.700000 0.000000 0.600000 +v -0.600000 0.000000 0.600000 +v -0.500000 0.000000 0.600000 +v -0.400000 0.000000 0.600000 +v -0.300000 0.000000 0.600000 +v -0.200000 0.000000 0.600000 +v -0.100000 0.000000 0.600000 +v 0.000000 0.000000 0.600000 +v 0.100000 0.000000 0.600000 +v 0.200000 0.000000 0.600000 +v 0.300000 0.000000 0.600000 +v 0.400000 0.000000 0.600000 +v 0.500000 0.000000 0.600000 +v 0.600000 0.000000 0.600000 +v 0.700000 0.000000 0.600000 +v 0.800000 0.000000 0.600000 +v 0.900000 0.000000 0.600000 +v 1.000000 0.000000 0.600000 +v -1.000000 0.000000 0.500000 +v -0.900000 0.000000 0.500000 +v -0.800000 0.000000 0.500000 +v -0.700000 0.000000 0.500000 +v -0.600000 0.000000 0.500000 +v -0.500000 0.000000 0.500000 +v -0.400000 0.000000 0.500000 +v -0.300000 0.000000 0.500000 +v -0.200000 0.000000 0.500000 +v -0.100000 0.000000 0.500000 +v 0.000000 0.000000 0.500000 +v 0.100000 0.000000 0.500000 +v 0.200000 0.000000 0.500000 +v 0.300000 0.000000 0.500000 +v 0.400000 0.000000 0.500000 +v 0.500000 0.000000 0.500000 +v 0.600000 0.000000 0.500000 +v 0.700000 0.000000 0.500000 +v 0.800000 0.000000 0.500000 +v 0.900000 0.000000 0.500000 +v 1.000000 0.000000 0.500000 +v -1.000000 0.000000 0.400000 +v -0.900000 0.000000 0.400000 +v -0.800000 0.000000 0.400000 +v -0.700000 0.000000 0.400000 +v -0.600000 0.000000 0.400000 +v -0.500000 0.000000 0.400000 +v -0.400000 0.000000 0.400000 +v -0.300000 0.000000 0.400000 +v -0.200000 0.000000 0.400000 +v -0.100000 0.000000 0.400000 +v 0.000000 0.000000 0.400000 +v 0.100000 0.000000 0.400000 +v 0.200000 0.000000 0.400000 +v 0.300000 0.000000 0.400000 +v 0.400000 0.000000 0.400000 +v 0.500000 0.000000 0.400000 +v 0.600000 0.000000 0.400000 +v 0.700000 0.000000 0.400000 +v 0.800000 0.000000 0.400000 +v 0.900000 0.000000 0.400000 +v 1.000000 0.000000 0.400000 +v -1.000000 0.000000 0.300000 +v -0.900000 0.000000 0.300000 +v -0.800000 0.000000 0.300000 +v -0.700000 0.000000 0.300000 +v -0.600000 0.000000 0.300000 +v -0.500000 0.000000 0.300000 +v -0.400000 0.000000 0.300000 +v -0.300000 0.000000 0.300000 +v -0.200000 0.000000 0.300000 +v -0.100000 0.000000 0.300000 +v 0.000000 0.000000 0.300000 +v 0.100000 0.000000 0.300000 +v 0.200000 0.000000 0.300000 +v 0.300000 0.000000 0.300000 +v 0.400000 0.000000 0.300000 +v 0.500000 0.000000 0.300000 +v 0.600000 0.000000 0.300000 +v 0.700000 0.000000 0.300000 +v 0.800000 0.000000 0.300000 +v 0.900000 0.000000 0.300000 +v 1.000000 0.000000 0.300000 +v -1.000000 0.000000 0.200000 +v -0.900000 0.000000 0.200000 +v -0.800000 0.000000 0.200000 +v -0.700000 0.000000 0.200000 +v -0.600000 0.000000 0.200000 +v -0.500000 0.000000 0.200000 +v -0.400000 0.000000 0.200000 +v -0.300000 0.000000 0.200000 +v -0.200000 0.000000 0.200000 +v -0.100000 0.000000 0.200000 +v 0.000000 0.000000 0.200000 +v 0.100000 0.000000 0.200000 +v 0.200000 0.000000 0.200000 +v 0.300000 0.000000 0.200000 +v 
0.400000 0.000000 0.200000 +v 0.500000 0.000000 0.200000 +v 0.600000 0.000000 0.200000 +v 0.700000 0.000000 0.200000 +v 0.800000 0.000000 0.200000 +v 0.900000 0.000000 0.200000 +v 1.000000 0.000000 0.200000 +v -1.000000 0.000000 0.100000 +v -0.900000 0.000000 0.100000 +v -0.800000 0.000000 0.100000 +v -0.700000 0.000000 0.100000 +v -0.600000 0.000000 0.100000 +v -0.500000 0.000000 0.100000 +v -0.400000 0.000000 0.100000 +v -0.300000 0.000000 0.100000 +v -0.200000 0.000000 0.100000 +v -0.100000 0.000000 0.100000 +v 0.000000 0.000000 0.100000 +v 0.100000 0.000000 0.100000 +v 0.200000 0.000000 0.100000 +v 0.300000 0.000000 0.100000 +v 0.400000 0.000000 0.100000 +v 0.500000 0.000000 0.100000 +v 0.600000 0.000000 0.100000 +v 0.700000 0.000000 0.100000 +v 0.800000 0.000000 0.100000 +v 0.900000 0.000000 0.100000 +v 1.000000 0.000000 0.100000 +v -1.000000 0.000000 0.000000 +v -0.900000 0.000000 0.000000 +v -0.800000 0.000000 0.000000 +v -0.700000 0.000000 0.000000 +v -0.600000 0.000000 0.000000 +v -0.500000 0.000000 0.000000 +v -0.400000 0.000000 0.000000 +v -0.300000 0.000000 0.000000 +v -0.200000 0.000000 0.000000 +v -0.100000 0.000000 0.000000 +v 0.000000 0.000000 0.000000 +v 0.100000 0.000000 0.000000 +v 0.200000 0.000000 0.000000 +v 0.300000 0.000000 0.000000 +v 0.400000 0.000000 0.000000 +v 0.500000 0.000000 0.000000 +v 0.600000 0.000000 0.000000 +v 0.700000 0.000000 0.000000 +v 0.800000 0.000000 0.000000 +v 0.900000 0.000000 0.000000 +v 1.000000 0.000000 0.000000 +v -1.000000 0.000000 -0.100000 +v -0.900000 0.000000 -0.100000 +v -0.800000 0.000000 -0.100000 +v -0.700000 0.000000 -0.100000 +v -0.600000 0.000000 -0.100000 +v -0.500000 0.000000 -0.100000 +v -0.400000 0.000000 -0.100000 +v -0.300000 0.000000 -0.100000 +v -0.200000 0.000000 -0.100000 +v -0.100000 0.000000 -0.100000 +v 0.000000 0.000000 -0.100000 +v 0.100000 0.000000 -0.100000 +v 0.200000 0.000000 -0.100000 +v 0.300000 0.000000 -0.100000 +v 0.400000 0.000000 -0.100000 +v 0.500000 0.000000 -0.100000 +v 0.600000 0.000000 -0.100000 +v 0.700000 0.000000 -0.100000 +v 0.800000 0.000000 -0.100000 +v 0.900000 0.000000 -0.100000 +v 1.000000 0.000000 -0.100000 +v -1.000000 0.000000 -0.200000 +v -0.900000 0.000000 -0.200000 +v -0.800000 0.000000 -0.200000 +v -0.700000 0.000000 -0.200000 +v -0.600000 0.000000 -0.200000 +v -0.500000 0.000000 -0.200000 +v -0.400000 0.000000 -0.200000 +v -0.300000 0.000000 -0.200000 +v -0.200000 0.000000 -0.200000 +v -0.100000 0.000000 -0.200000 +v 0.000000 0.000000 -0.200000 +v 0.100000 0.000000 -0.200000 +v 0.200000 0.000000 -0.200000 +v 0.300000 0.000000 -0.200000 +v 0.400000 0.000000 -0.200000 +v 0.500000 0.000000 -0.200000 +v 0.600000 0.000000 -0.200000 +v 0.700000 0.000000 -0.200000 +v 0.800000 0.000000 -0.200000 +v 0.900000 0.000000 -0.200000 +v 1.000000 0.000000 -0.200000 +v -1.000000 0.000000 -0.300000 +v -0.900000 0.000000 -0.300000 +v -0.800000 0.000000 -0.300000 +v -0.700000 0.000000 -0.300000 +v -0.600000 0.000000 -0.300000 +v -0.500000 0.000000 -0.300000 +v -0.400000 0.000000 -0.300000 +v -0.300000 0.000000 -0.300000 +v -0.200000 0.000000 -0.300000 +v -0.100000 0.000000 -0.300000 +v 0.000000 0.000000 -0.300000 +v 0.100000 0.000000 -0.300000 +v 0.200000 0.000000 -0.300000 +v 0.300000 0.000000 -0.300000 +v 0.400000 0.000000 -0.300000 +v 0.500000 0.000000 -0.300000 +v 0.600000 0.000000 -0.300000 +v 0.700000 0.000000 -0.300000 +v 0.800000 0.000000 -0.300000 +v 0.900000 0.000000 -0.300000 +v 1.000000 0.000000 -0.300000 +v -1.000000 0.000000 -0.400000 +v -0.900000 0.000000 -0.400000 +v -0.800000 
0.000000 -0.400000 +v -0.700000 0.000000 -0.400000 +v -0.600000 0.000000 -0.400000 +v -0.500000 0.000000 -0.400000 +v -0.400000 0.000000 -0.400000 +v -0.300000 0.000000 -0.400000 +v -0.200000 0.000000 -0.400000 +v -0.100000 0.000000 -0.400000 +v 0.000000 0.000000 -0.400000 +v 0.100000 0.000000 -0.400000 +v 0.200000 0.000000 -0.400000 +v 0.300000 0.000000 -0.400000 +v 0.400000 0.000000 -0.400000 +v 0.500000 0.000000 -0.400000 +v 0.600000 0.000000 -0.400000 +v 0.700000 0.000000 -0.400000 +v 0.800000 0.000000 -0.400000 +v 0.900000 0.000000 -0.400000 +v 1.000000 0.000000 -0.400000 +v -1.000000 0.000000 -0.500000 +v -0.900000 0.000000 -0.500000 +v -0.800000 0.000000 -0.500000 +v -0.700000 0.000000 -0.500000 +v -0.600000 0.000000 -0.500000 +v -0.500000 0.000000 -0.500000 +v -0.400000 0.000000 -0.500000 +v -0.300000 0.000000 -0.500000 +v -0.200000 0.000000 -0.500000 +v -0.100000 0.000000 -0.500000 +v 0.000000 0.000000 -0.500000 +v 0.100000 0.000000 -0.500000 +v 0.200000 0.000000 -0.500000 +v 0.300000 0.000000 -0.500000 +v 0.400000 0.000000 -0.500000 +v 0.500000 0.000000 -0.500000 +v 0.600000 0.000000 -0.500000 +v 0.700000 0.000000 -0.500000 +v 0.800000 0.000000 -0.500000 +v 0.900000 0.000000 -0.500000 +v 1.000000 0.000000 -0.500000 +v -1.000000 0.000000 -0.600000 +v -0.900000 0.000000 -0.600000 +v -0.800000 0.000000 -0.600000 +v -0.700000 0.000000 -0.600000 +v -0.600000 0.000000 -0.600000 +v -0.500000 0.000000 -0.600000 +v -0.400000 0.000000 -0.600000 +v -0.300000 0.000000 -0.600000 +v -0.200000 0.000000 -0.600000 +v -0.100000 0.000000 -0.600000 +v 0.000000 0.000000 -0.600000 +v 0.100000 0.000000 -0.600000 +v 0.200000 0.000000 -0.600000 +v 0.300000 0.000000 -0.600000 +v 0.400000 0.000000 -0.600000 +v 0.500000 0.000000 -0.600000 +v 0.600000 0.000000 -0.600000 +v 0.700000 0.000000 -0.600000 +v 0.800000 0.000000 -0.600000 +v 0.900000 0.000000 -0.600000 +v 1.000000 0.000000 -0.600000 +v -1.000000 0.000000 -0.700000 +v -0.900000 0.000000 -0.700000 +v -0.800000 0.000000 -0.700000 +v -0.700000 0.000000 -0.700000 +v -0.600000 0.000000 -0.700000 +v -0.500000 0.000000 -0.700000 +v -0.400000 0.000000 -0.700000 +v -0.300000 0.000000 -0.700000 +v -0.200000 0.000000 -0.700000 +v -0.100000 0.000000 -0.700000 +v 0.000000 0.000000 -0.700000 +v 0.100000 0.000000 -0.700000 +v 0.200000 0.000000 -0.700000 +v 0.300000 0.000000 -0.700000 +v 0.400000 0.000000 -0.700000 +v 0.500000 0.000000 -0.700000 +v 0.600000 0.000000 -0.700000 +v 0.700000 0.000000 -0.700000 +v 0.800000 0.000000 -0.700000 +v 0.900000 0.000000 -0.700000 +v 1.000000 0.000000 -0.700000 +v -1.000000 0.000000 -0.800000 +v -0.900000 0.000000 -0.800000 +v -0.800000 0.000000 -0.800000 +v -0.700000 0.000000 -0.800000 +v -0.600000 0.000000 -0.800000 +v -0.500000 0.000000 -0.800000 +v -0.400000 0.000000 -0.800000 +v -0.300000 0.000000 -0.800000 +v -0.200000 0.000000 -0.800000 +v -0.100000 0.000000 -0.800000 +v 0.000000 0.000000 -0.800000 +v 0.100000 0.000000 -0.800000 +v 0.200000 0.000000 -0.800000 +v 0.300000 0.000000 -0.800000 +v 0.400000 0.000000 -0.800000 +v 0.500000 0.000000 -0.800000 +v 0.600000 0.000000 -0.800000 +v 0.700000 0.000000 -0.800000 +v 0.800000 0.000000 -0.800000 +v 0.900000 0.000000 -0.800000 +v 1.000000 0.000000 -0.800000 +v -1.000000 0.000000 -0.900000 +v -0.900000 0.000000 -0.900000 +v -0.800000 0.000000 -0.900000 +v -0.700000 0.000000 -0.900000 +v -0.600000 0.000000 -0.900000 +v -0.500000 0.000000 -0.900000 +v -0.400000 0.000000 -0.900000 +v -0.300000 0.000000 -0.900000 +v -0.200000 0.000000 -0.900000 +v -0.100000 0.000000 -0.900000 +v 
0.000000 0.000000 -0.900000 +v 0.100000 0.000000 -0.900000 +v 0.200000 0.000000 -0.900000 +v 0.300000 0.000000 -0.900000 +v 0.400000 0.000000 -0.900000 +v 0.500000 0.000000 -0.900000 +v 0.600000 0.000000 -0.900000 +v 0.700000 0.000000 -0.900000 +v 0.800000 0.000000 -0.900000 +v 0.900000 0.000000 -0.900000 +v 1.000000 0.000000 -0.900000 +v -1.000000 0.000000 -1.000000 +v -0.900000 0.000000 -1.000000 +v -0.800000 0.000000 -1.000000 +v -0.700000 0.000000 -1.000000 +v -0.600000 0.000000 -1.000000 +v -0.500000 0.000000 -1.000000 +v -0.400000 0.000000 -1.000000 +v -0.300000 0.000000 -1.000000 +v -0.200000 0.000000 -1.000000 +v -0.100000 0.000000 -1.000000 +v 0.000000 0.000000 -1.000000 +v 0.100000 0.000000 -1.000000 +v 0.200000 0.000000 -1.000000 +v 0.300000 0.000000 -1.000000 +v 0.400000 0.000000 -1.000000 +v 0.500000 0.000000 -1.000000 +v 0.600000 0.000000 -1.000000 +v 0.700000 0.000000 -1.000000 +v 0.800000 0.000000 -1.000000 +v 0.900000 0.000000 -1.000000 +v 1.000000 0.000000 -1.000000 +s 0 +f 2 22 1 +f 3 23 2 +f 4 24 3 +f 5 25 4 +f 6 26 5 +f 7 27 6 +f 8 28 7 +f 9 29 8 +f 10 30 9 +f 11 31 10 +f 12 32 11 +f 13 33 12 +f 14 34 13 +f 15 35 14 +f 16 36 15 +f 17 37 16 +f 18 38 17 +f 19 39 18 +f 20 40 19 +f 21 41 20 +f 23 43 22 +f 24 44 23 +f 25 45 24 +f 26 46 25 +f 27 47 26 +f 28 48 27 +f 29 49 28 +f 30 50 29 +f 31 51 30 +f 32 52 31 +f 33 53 32 +f 34 54 33 +f 35 55 34 +f 36 56 35 +f 37 57 36 +f 38 58 37 +f 39 59 38 +f 40 60 39 +f 41 61 40 +f 42 62 41 +f 44 64 43 +f 45 65 44 +f 46 66 45 +f 47 67 46 +f 48 68 47 +f 49 69 48 +f 50 70 49 +f 51 71 50 +f 52 72 51 +f 53 73 52 +f 54 74 53 +f 55 75 54 +f 56 76 55 +f 57 77 56 +f 58 78 57 +f 59 79 58 +f 60 80 59 +f 61 81 60 +f 62 82 61 +f 63 83 62 +f 65 85 64 +f 66 86 65 +f 67 87 66 +f 68 88 67 +f 69 89 68 +f 70 90 69 +f 71 91 70 +f 72 92 71 +f 73 93 72 +f 74 94 73 +f 75 95 74 +f 76 96 75 +f 77 97 76 +f 78 98 77 +f 79 99 78 +f 80 100 79 +f 81 101 80 +f 82 102 81 +f 83 103 82 +f 84 104 83 +f 86 106 85 +f 87 107 86 +f 88 108 87 +f 89 109 88 +f 90 110 89 +f 91 111 90 +f 92 112 91 +f 93 113 92 +f 94 114 93 +f 95 115 94 +f 96 116 95 +f 97 117 96 +f 98 118 97 +f 99 119 98 +f 100 120 99 +f 101 121 100 +f 102 122 101 +f 103 123 102 +f 104 124 103 +f 105 125 104 +f 107 127 106 +f 108 128 107 +f 109 129 108 +f 110 130 109 +f 111 131 110 +f 112 132 111 +f 113 133 112 +f 114 134 113 +f 115 135 114 +f 116 136 115 +f 117 137 116 +f 118 138 117 +f 119 139 118 +f 120 140 119 +f 121 141 120 +f 122 142 121 +f 123 143 122 +f 124 144 123 +f 125 145 124 +f 126 146 125 +f 128 148 127 +f 129 149 128 +f 130 150 129 +f 131 151 130 +f 132 152 131 +f 133 153 132 +f 134 154 133 +f 135 155 134 +f 136 156 135 +f 137 157 136 +f 138 158 137 +f 139 159 138 +f 140 160 139 +f 141 161 140 +f 142 162 141 +f 143 163 142 +f 144 164 143 +f 145 165 144 +f 146 166 145 +f 147 167 146 +f 149 169 148 +f 150 170 149 +f 151 171 150 +f 152 172 151 +f 153 173 152 +f 154 174 153 +f 155 175 154 +f 156 176 155 +f 157 177 156 +f 158 178 157 +f 159 179 158 +f 160 180 159 +f 161 181 160 +f 162 182 161 +f 163 183 162 +f 164 184 163 +f 165 185 164 +f 166 186 165 +f 167 187 166 +f 168 188 167 +f 170 190 169 +f 171 191 170 +f 172 192 171 +f 173 193 172 +f 174 194 173 +f 175 195 174 +f 176 196 175 +f 177 197 176 +f 178 198 177 +f 179 199 178 +f 180 200 179 +f 181 201 180 +f 182 202 181 +f 183 203 182 +f 184 204 183 +f 185 205 184 +f 186 206 185 +f 187 207 186 +f 188 208 187 +f 189 209 188 +f 191 211 190 +f 192 212 191 +f 193 213 192 +f 194 214 193 +f 195 215 194 +f 196 216 195 +f 197 217 196 +f 198 218 197 +f 199 219 
198 +f 200 220 199 +f 201 221 200 +f 202 222 201 +f 203 223 202 +f 204 224 203 +f 205 225 204 +f 206 226 205 +f 207 227 206 +f 208 228 207 +f 209 229 208 +f 210 230 209 +f 212 232 211 +f 213 233 212 +f 214 234 213 +f 215 235 214 +f 216 236 215 +f 217 237 216 +f 218 238 217 +f 219 239 218 +f 220 240 219 +f 221 241 220 +f 222 242 221 +f 223 243 222 +f 224 244 223 +f 225 245 224 +f 226 246 225 +f 227 247 226 +f 228 248 227 +f 229 249 228 +f 230 250 229 +f 231 251 230 +f 233 253 232 +f 234 254 233 +f 235 255 234 +f 236 256 235 +f 237 257 236 +f 238 258 237 +f 239 259 238 +f 240 260 239 +f 241 261 240 +f 242 262 241 +f 243 263 242 +f 244 264 243 +f 245 265 244 +f 246 266 245 +f 247 267 246 +f 248 268 247 +f 249 269 248 +f 250 270 249 +f 251 271 250 +f 252 272 251 +f 254 274 253 +f 255 275 254 +f 256 276 255 +f 257 277 256 +f 258 278 257 +f 259 279 258 +f 260 280 259 +f 261 281 260 +f 262 282 261 +f 263 283 262 +f 264 284 263 +f 265 285 264 +f 266 286 265 +f 267 287 266 +f 268 288 267 +f 269 289 268 +f 270 290 269 +f 271 291 270 +f 272 292 271 +f 273 293 272 +f 275 295 274 +f 276 296 275 +f 277 297 276 +f 278 298 277 +f 279 299 278 +f 280 300 279 +f 281 301 280 +f 282 302 281 +f 283 303 282 +f 284 304 283 +f 285 305 284 +f 286 306 285 +f 287 307 286 +f 288 308 287 +f 289 309 288 +f 290 310 289 +f 291 311 290 +f 292 312 291 +f 293 313 292 +f 294 314 293 +f 296 316 295 +f 297 317 296 +f 298 318 297 +f 299 319 298 +f 300 320 299 +f 301 321 300 +f 302 322 301 +f 303 323 302 +f 304 324 303 +f 305 325 304 +f 306 326 305 +f 307 327 306 +f 308 328 307 +f 309 329 308 +f 310 330 309 +f 311 331 310 +f 312 332 311 +f 313 333 312 +f 314 334 313 +f 315 335 314 +f 317 337 316 +f 318 338 317 +f 319 339 318 +f 320 340 319 +f 321 341 320 +f 322 342 321 +f 323 343 322 +f 324 344 323 +f 325 345 324 +f 326 346 325 +f 327 347 326 +f 328 348 327 +f 329 349 328 +f 330 350 329 +f 331 351 330 +f 332 352 331 +f 333 353 332 +f 334 354 333 +f 335 355 334 +f 336 356 335 +f 338 358 337 +f 339 359 338 +f 340 360 339 +f 341 361 340 +f 342 362 341 +f 343 363 342 +f 344 364 343 +f 345 365 344 +f 346 366 345 +f 347 367 346 +f 348 368 347 +f 349 369 348 +f 350 370 349 +f 351 371 350 +f 352 372 351 +f 353 373 352 +f 354 374 353 +f 355 375 354 +f 356 376 355 +f 357 377 356 +f 359 379 358 +f 360 380 359 +f 361 381 360 +f 362 382 361 +f 363 383 362 +f 364 384 363 +f 365 385 364 +f 366 386 365 +f 367 387 366 +f 368 388 367 +f 369 389 368 +f 370 390 369 +f 371 391 370 +f 372 392 371 +f 373 393 372 +f 374 394 373 +f 375 395 374 +f 376 396 375 +f 377 397 376 +f 378 398 377 +f 380 400 379 +f 381 401 380 +f 382 402 381 +f 383 403 382 +f 384 404 383 +f 385 405 384 +f 386 406 385 +f 387 407 386 +f 388 408 387 +f 389 409 388 +f 390 410 389 +f 391 411 390 +f 392 412 391 +f 393 413 392 +f 394 414 393 +f 395 415 394 +f 396 416 395 +f 397 417 396 +f 398 418 397 +f 399 419 398 +f 401 421 400 +f 402 422 401 +f 403 423 402 +f 404 424 403 +f 405 425 404 +f 406 426 405 +f 407 427 406 +f 408 428 407 +f 409 429 408 +f 410 430 409 +f 411 431 410 +f 412 432 411 +f 413 433 412 +f 414 434 413 +f 415 435 414 +f 416 436 415 +f 417 437 416 +f 418 438 417 +f 419 439 418 +f 420 440 419 +f 2 23 22 +f 3 24 23 +f 4 25 24 +f 5 26 25 +f 6 27 26 +f 7 28 27 +f 8 29 28 +f 9 30 29 +f 10 31 30 +f 11 32 31 +f 12 33 32 +f 13 34 33 +f 14 35 34 +f 15 36 35 +f 16 37 36 +f 17 38 37 +f 18 39 38 +f 19 40 39 +f 20 41 40 +f 21 42 41 +f 23 44 43 +f 24 45 44 +f 25 46 45 +f 26 47 46 +f 27 48 47 +f 28 49 48 +f 29 50 49 +f 30 51 50 +f 31 52 51 +f 32 53 52 +f 33 54 53 +f 34 55 54 +f 35 56 
55 +f 36 57 56 +f 37 58 57 +f 38 59 58 +f 39 60 59 +f 40 61 60 +f 41 62 61 +f 42 63 62 +f 44 65 64 +f 45 66 65 +f 46 67 66 +f 47 68 67 +f 48 69 68 +f 49 70 69 +f 50 71 70 +f 51 72 71 +f 52 73 72 +f 53 74 73 +f 54 75 74 +f 55 76 75 +f 56 77 76 +f 57 78 77 +f 58 79 78 +f 59 80 79 +f 60 81 80 +f 61 82 81 +f 62 83 82 +f 63 84 83 +f 65 86 85 +f 66 87 86 +f 67 88 87 +f 68 89 88 +f 69 90 89 +f 70 91 90 +f 71 92 91 +f 72 93 92 +f 73 94 93 +f 74 95 94 +f 75 96 95 +f 76 97 96 +f 77 98 97 +f 78 99 98 +f 79 100 99 +f 80 101 100 +f 81 102 101 +f 82 103 102 +f 83 104 103 +f 84 105 104 +f 86 107 106 +f 87 108 107 +f 88 109 108 +f 89 110 109 +f 90 111 110 +f 91 112 111 +f 92 113 112 +f 93 114 113 +f 94 115 114 +f 95 116 115 +f 96 117 116 +f 97 118 117 +f 98 119 118 +f 99 120 119 +f 100 121 120 +f 101 122 121 +f 102 123 122 +f 103 124 123 +f 104 125 124 +f 105 126 125 +f 107 128 127 +f 108 129 128 +f 109 130 129 +f 110 131 130 +f 111 132 131 +f 112 133 132 +f 113 134 133 +f 114 135 134 +f 115 136 135 +f 116 137 136 +f 117 138 137 +f 118 139 138 +f 119 140 139 +f 120 141 140 +f 121 142 141 +f 122 143 142 +f 123 144 143 +f 124 145 144 +f 125 146 145 +f 126 147 146 +f 128 149 148 +f 129 150 149 +f 130 151 150 +f 131 152 151 +f 132 153 152 +f 133 154 153 +f 134 155 154 +f 135 156 155 +f 136 157 156 +f 137 158 157 +f 138 159 158 +f 139 160 159 +f 140 161 160 +f 141 162 161 +f 142 163 162 +f 143 164 163 +f 144 165 164 +f 145 166 165 +f 146 167 166 +f 147 168 167 +f 149 170 169 +f 150 171 170 +f 151 172 171 +f 152 173 172 +f 153 174 173 +f 154 175 174 +f 155 176 175 +f 156 177 176 +f 157 178 177 +f 158 179 178 +f 159 180 179 +f 160 181 180 +f 161 182 181 +f 162 183 182 +f 163 184 183 +f 164 185 184 +f 165 186 185 +f 166 187 186 +f 167 188 187 +f 168 189 188 +f 170 191 190 +f 171 192 191 +f 172 193 192 +f 173 194 193 +f 174 195 194 +f 175 196 195 +f 176 197 196 +f 177 198 197 +f 178 199 198 +f 179 200 199 +f 180 201 200 +f 181 202 201 +f 182 203 202 +f 183 204 203 +f 184 205 204 +f 185 206 205 +f 186 207 206 +f 187 208 207 +f 188 209 208 +f 189 210 209 +f 191 212 211 +f 192 213 212 +f 193 214 213 +f 194 215 214 +f 195 216 215 +f 196 217 216 +f 197 218 217 +f 198 219 218 +f 199 220 219 +f 200 221 220 +f 201 222 221 +f 202 223 222 +f 203 224 223 +f 204 225 224 +f 205 226 225 +f 206 227 226 +f 207 228 227 +f 208 229 228 +f 209 230 229 +f 210 231 230 +f 212 233 232 +f 213 234 233 +f 214 235 234 +f 215 236 235 +f 216 237 236 +f 217 238 237 +f 218 239 238 +f 219 240 239 +f 220 241 240 +f 221 242 241 +f 222 243 242 +f 223 244 243 +f 224 245 244 +f 225 246 245 +f 226 247 246 +f 227 248 247 +f 228 249 248 +f 229 250 249 +f 230 251 250 +f 231 252 251 +f 233 254 253 +f 234 255 254 +f 235 256 255 +f 236 257 256 +f 237 258 257 +f 238 259 258 +f 239 260 259 +f 240 261 260 +f 241 262 261 +f 242 263 262 +f 243 264 263 +f 244 265 264 +f 245 266 265 +f 246 267 266 +f 247 268 267 +f 248 269 268 +f 249 270 269 +f 250 271 270 +f 251 272 271 +f 252 273 272 +f 254 275 274 +f 255 276 275 +f 256 277 276 +f 257 278 277 +f 258 279 278 +f 259 280 279 +f 260 281 280 +f 261 282 281 +f 262 283 282 +f 263 284 283 +f 264 285 284 +f 265 286 285 +f 266 287 286 +f 267 288 287 +f 268 289 288 +f 269 290 289 +f 270 291 290 +f 271 292 291 +f 272 293 292 +f 273 294 293 +f 275 296 295 +f 276 297 296 +f 277 298 297 +f 278 299 298 +f 279 300 299 +f 280 301 300 +f 281 302 301 +f 282 303 302 +f 283 304 303 +f 284 305 304 +f 285 306 305 +f 286 307 306 +f 287 308 307 +f 288 309 308 +f 289 310 309 +f 290 311 310 +f 291 312 311 +f 292 313 312 +f 293 314 313 +f 
294 315 314 +f 296 317 316 +f 297 318 317 +f 298 319 318 +f 299 320 319 +f 300 321 320 +f 301 322 321 +f 302 323 322 +f 303 324 323 +f 304 325 324 +f 305 326 325 +f 306 327 326 +f 307 328 327 +f 308 329 328 +f 309 330 329 +f 310 331 330 +f 311 332 331 +f 312 333 332 +f 313 334 333 +f 314 335 334 +f 315 336 335 +f 317 338 337 +f 318 339 338 +f 319 340 339 +f 320 341 340 +f 321 342 341 +f 322 343 342 +f 323 344 343 +f 324 345 344 +f 325 346 345 +f 326 347 346 +f 327 348 347 +f 328 349 348 +f 329 350 349 +f 330 351 350 +f 331 352 351 +f 332 353 352 +f 333 354 353 +f 334 355 354 +f 335 356 355 +f 336 357 356 +f 338 359 358 +f 339 360 359 +f 340 361 360 +f 341 362 361 +f 342 363 362 +f 343 364 363 +f 344 365 364 +f 345 366 365 +f 346 367 366 +f 347 368 367 +f 348 369 368 +f 349 370 369 +f 350 371 370 +f 351 372 371 +f 352 373 372 +f 353 374 373 +f 354 375 374 +f 355 376 375 +f 356 377 376 +f 357 378 377 +f 359 380 379 +f 360 381 380 +f 361 382 381 +f 362 383 382 +f 363 384 383 +f 364 385 384 +f 365 386 385 +f 366 387 386 +f 367 388 387 +f 368 389 388 +f 369 390 389 +f 370 391 390 +f 371 392 391 +f 372 393 392 +f 373 394 393 +f 374 395 394 +f 375 396 395 +f 376 397 396 +f 377 398 397 +f 378 399 398 +f 380 401 400 +f 381 402 401 +f 382 403 402 +f 383 404 403 +f 384 405 404 +f 385 406 405 +f 386 407 406 +f 387 408 407 +f 388 409 408 +f 389 410 409 +f 390 411 410 +f 391 412 411 +f 392 413 412 +f 393 414 413 +f 394 415 414 +f 395 416 415 +f 396 417 416 +f 397 418 417 +f 398 419 418 +f 399 420 419 +f 401 422 421 +f 402 423 422 +f 403 424 423 +f 404 425 424 +f 405 426 425 +f 406 427 426 +f 407 428 427 +f 408 429 428 +f 409 430 429 +f 410 431 430 +f 411 432 431 +f 412 433 432 +f 413 434 433 +f 414 435 434 +f 415 436 435 +f 416 437 436 +f 417 438 437 +f 418 439 438 +f 419 440 439 +f 420 441 440 diff --git a/example_smoothed.jpg b/example_smoothed.jpg new file mode 100644 index 0000000..aa8411e Binary files /dev/null and b/example_smoothed.jpg differ diff --git a/example_unsmoothed.jpg b/example_unsmoothed.jpg new file mode 100644 index 0000000..b577762 Binary files /dev/null and b/example_unsmoothed.jpg differ diff --git a/splashsurf/README.md b/splashsurf/README.md index 737e241..641ffb0 100644 --- a/splashsurf/README.md +++ b/splashsurf/README.md @@ -9,7 +9,7 @@ CLI for surface reconstruction of particle data from SPH simulations, written in Rust. For a the library used by the CLI see the [`splashsurf_lib`](https://crates.io/crates/splashsurf_lib) crate.

-Image of the original particle data Image of a coarse reconstructed surface mesh Image of a fine reconstructed surface mesh +Image of the original particle data Image of a coarse reconstructed surface mesh Image of a fine reconstructed surface mesh

`splashsurf` is a tool to reconstruct surface meshes from SPH particle data.
@@ -19,19 +19,27 @@ reconstructed from this particle data.
The next image shows a reconstructed surface with a "smoothing length" of `2.2` times the particle radius and a cell size of `1.1` times the particle radius.
The third image shows a finer reconstruction with a cell size of `0.45` times the particle radius.
These surface meshes can then be fed into 3D rendering software such as [Blender](https://www.blender.org/) to generate beautiful water animations.
-The result might look something like this (please excuse the lack of 3D rendering skills):
+The result might look something like this:

Rendered water animation

Note: This animation does not show the recently added smoothing features of the tool, for a more recent rendering see [this video](https://youtu.be/2bYvaUXlBQs).
+
+---
+
**Contents**
- [The `splashsurf` CLI](#the-splashsurf-cli)
  - [Introduction](#introduction)
+  - [Domain decomposition](#domain-decomposition)
+    - [Octree-based decomposition](#octree-based-decomposition)
+    - [Subdomain grid-based decomposition](#subdomain-grid-based-decomposition)
  - [Notes](#notes)
  - [Installation](#installation)
  - [Usage](#usage)
    - [Recommended settings](#recommended-settings)
+    - [Weighted surface smoothing](#weighted-surface-smoothing)
    - [Benchmark example](#benchmark-example)
    - [Sequences of files](#sequences-of-files)
    - [Input file formats](#input-file-formats)
@@ -47,34 +55,56 @@ The result might look something like this (please excuse the lack of 3D renderin
    - [The `convert` subcommand](#the-convert-subcommand)
  - [License](#license)
+
# The `splashsurf` CLI
The following sections mainly focus on the CLI of `splashsurf`. For more information on the library, see the [corresponding readme](https://github.com/InteractiveComputerGraphics/splashsurf/blob/main/splashsurf_lib) in the `splashsurf_lib` subfolder or the [`splashsurf_lib` crate](https://crates.io/crates/splashsurf_lib) on crates.io.
## Introduction
-This is a basic but high-performance implementation of a marching cubes based surface reconstruction for SPH fluid simulations (e.g performed with [SPlisHSPlasH](https://github.com/InteractiveComputerGraphics/SPlisHSPlasH)).
+This is a CLI to run a fast marching cubes based surface reconstruction for SPH fluid simulations (e.g. performed with [SPlisHSPlasH](https://github.com/InteractiveComputerGraphics/SPlisHSPlasH)).
The output of this tool is the reconstructed triangle surface mesh of the fluid.
At the moment it supports computing normals on the surface using SPH gradients and interpolating scalar and vector particle attributes to the surface.
-No additional smoothing or decimation operations are currently implemented.
-As input, it supports reading particle positions from `.vtk`, `.bgeo`, `.ply`, `.json` and binary `.xyz` files (i.e. files containing a binary dump of a particle position array).
-In addition, required parameters are the kernel radius and particle radius (to compute the volume of particles) used for the original SPH simulation as well as the surface threshold.
-
-By default, a domain decomposition of the particle set is performed using octree-based subdivision.
-The implementation first computes the density of each particle using the typical SPH approach with a cubic kernel.
-This density is then evaluated or mapped onto a sparse grid using spatial hashing in the support radius of each particle.
-This implies that memory is only allocated in areas where the fluid density is non-zero. This is in contrast to a naive approach where the marching cubes background grid is allocated for the whole domain.
-The marching cubes reconstruction is performed only in the narrowband of grid cells where the density values cross the surface threshold. Cells completely in the interior of the fluid are skipped. For more details, please refer to the [readme of the library]((https://github.com/InteractiveComputerGraphics/splashsurf/blob/main/splashsurf_lib/README.md)).
+To get rid of the typical bumps from SPH simulations, it supports a weighted Laplacian smoothing approach [detailed below](#weighted-surface-smoothing).
+As input, it supports reading particle positions from `.vtk`/`.vtu`, `.bgeo`, `.ply`, `.json` and binary `.xyz` files (i.e. files containing a binary dump of a particle position array).
+Required parameters to perform a reconstruction are the kernel radius and particle radius (to compute the volume of particles) used for the original SPH simulation as well as the marching cubes resolution (a default iso-surface threshold is pre-configured).
+
+## Domain decomposition
+
+A naive dense marching cubes reconstruction allocating a full 3D array over the entire fluid domain quickly becomes infeasible for larger simulations.
+Instead, one could use a global hashmap where only cubes that contain non-zero fluid density values are allocated.
+This approach is used in `splashsurf` if domain decomposition is disabled completely.
+However, the global hashmap approach does not lead to good cache locality and is not well suited for parallelization (even specialized parallel map implementations like [`dashmap`](https://github.com/xacrimon/dashmap) have their performance limitations).
+To improve on this situation, `splashsurf` currently implements two domain decomposition approaches.
+
+### Octree-based decomposition
+The octree-based decomposition is currently the default approach if no other option is specified but will probably be replaced by the grid-based approach described below.
+For the octree-based decomposition, an octree is built over all particles with an automatically determined target number of particles per leaf node.
+For each leaf node, a hashmap is used as outlined above.
+As each hashmap is smaller, cache locality is improved, and due to the decomposition, each thread can work on its own local hashmap. Finally, all surface patches are stitched together by walking the octree back up, resulting in a closed surface.
+A downside of this approach is that both the octree construction starting from the root and the stitching back towards the root limit the amount of parallelism during some stages.
+
+### Subdomain grid-based decomposition
+
+Since version 0.10.0, `splashsurf` implements a new domain decomposition approach called the "subdomain grid" approach, toggled with the `--subdomain-grid=on` flag.
+Here, the goal is to divide the fluid domain into subdomains with a fixed number of marching cubes cells, by default `64x64x64` cubes.
+For each subdomain, a dense 3D array is allocated for the marching cubes cells.
+Of course, only subdomains that contain fluid particles are actually allocated.
+For subdomains that contain only a very small number of fluid particles (less than 5% of the largest subdomain) a hashmap is used instead to not waste too much storage.
+As most subdomains are dense, however, the marching cubes triangulation per subdomain is very fast as it can make full use of cache locality, and the entire procedure is trivially parallelizable.
+For the stitching, we ensure that floating point operations are performed in the same order at the subdomain boundaries (this can be ensured without synchronization).
+If the field values on the subdomain boundaries are identical from both sides, the marching cubes triangulations will be topologically compatible and can be merged in a post-processing step that is also parallelizable.
+Overall, this approach should almost always be faster than the previous octree-based approach.
+
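To make the subdomain binning described above concrete, here is a minimal, hypothetical Rust sketch. It is not the actual `splashsurf_lib` code (all names, signatures and values are made up for illustration); it only shows how particles could be bucketed into subdomains spanning a fixed number of marching cubes cells, instantiating only the non-empty subdomains:

```rust
use std::collections::HashMap;

// Hypothetical illustration of the "subdomain grid" binning, not `splashsurf_lib` code.

/// Number of marching cubes cells per subdomain along each axis (cf. `--subdomain-cubes`).
const CELLS_PER_SUBDOMAIN: usize = 64;

/// Returns the 3D index of the subdomain that contains the point `p`.
fn subdomain_index(p: [f64; 3], domain_min: [f64; 3], cube_size: f64) -> [i64; 3] {
    // Each subdomain spans `CELLS_PER_SUBDOMAIN` marching cubes cells per axis.
    let subdomain_extent = cube_size * CELLS_PER_SUBDOMAIN as f64;
    let mut index = [0i64; 3];
    for d in 0..3 {
        index[d] = ((p[d] - domain_min[d]) / subdomain_extent).floor() as i64;
    }
    index
}

fn main() {
    // Toy particle set and grid parameters (`cube_size` is the MC cell edge length).
    let particles = [[0.01, 0.02, 0.03], [5.0, 0.1, 0.2], [0.011, 0.019, 0.031]];
    let domain_min = [0.0; 3];
    let cube_size = 0.05;

    // Only subdomains that actually contain particles get an entry; each entry
    // would later be processed independently with its own dense MC cell array.
    let mut subdomains: HashMap<[i64; 3], Vec<usize>> = HashMap::new();
    for (i, p) in particles.iter().enumerate() {
        subdomains
            .entry(subdomain_index(*p, domain_min, cube_size))
            .or_default()
            .push(i);
    }
    println!("{} non-empty subdomains", subdomains.len());
}
```

Per the section above, each non-empty subdomain then gets its own dense marching cubes array, which is what yields the cache locality and the trivially parallel per-subdomain work.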
## Notes
-For small numbers of fluid particles (i.e. in the low thousands or less) the multithreaded implementation may have worse performance due to the task based parallelism and the additional overhead of domain decomposition and stitching.
+For small numbers of fluid particles (i.e. in the low thousands or less) the domain decomposition implementation may have worse performance due to the task based parallelism and the additional overhead of domain decomposition and stitching.
In this case, you can try to disable the domain decomposition. The reconstruction will then use a global approach that is parallelized using thread-local hashmaps.
For larger quantities of particles the decomposition approach is expected to always be faster.
Due to the use of hash maps and multi-threading (if enabled), the output of this implementation is not deterministic.
-In the future, flags may be added to switch the internal data structures to use binary trees for debugging purposes.
As shown below, the tool can handle the output of large simulations.
However, it was not tested with a wide range of parameters and may not be totally robust against corner-cases or extreme parameters.
@@ -97,6 +127,29 @@ Good settings for the surface reconstruction depend on the original simulation a
- `surface-threshold`: a good value depends on the selected `particle-radius` and `smoothing-length` and can be used to counteract a fluid volume increase e.g. due to a larger particle radius. In combination with the other recommended values a threshold of `0.6` seemed to work well.
- `cube-size` usually should not be chosen larger than `1.0` to avoid artifacts (e.g. single particles decaying into rhomboids), start with a value in the range of `0.75` to `0.5` and decrease/increase it if the result is too coarse or the reconstruction takes too long.
+### Weighted surface smoothing
+The CLI implements the paper ["Weighted Laplacian Smoothing for Surface Reconstruction of Particle-based Fluids" (Löschner, Böttcher, Jeske, Bender; 2023)](https://animation.rwth-aachen.de/publication/0583/) which proposes a fast smoothing approach to avoid the typical bumpy surfaces while preventing the loss of volume that typically occurs with simple smoothing methods.
+The following images show a rendering of a typical surface reconstruction (on the left) with visible bumps due to the particles, compared to the same surface reconstruction with weighted smoothing applied (on the right):
+
+

+Image of the original surface reconstruction without smoothing (bumpy & rough) Image of the surface reconstruction with weighted smoothing applied (nice & smooth) +

+
+You can see this rendering in motion in [this video](https://youtu.be/2bYvaUXlBQs).
+To apply this smoothing, we recommend the following settings:
+ - `--mesh-smoothing-weights=on`: This enables the use of special weights during the smoothing process that preserve fluid details. For more information we refer to the [paper](https://animation.rwth-aachen.de/publication/0583/).
+ - `--mesh-smoothing-iters=25`: This enables smoothing of the output mesh. The individual iterations are relatively fast and 25 iterations appeared to strike a good balance between an initially bumpy surface and potential over-smoothing.
+ - `--mesh-cleanup=on`/`--decimate-barnacles=on`: One of these options should be used when applying smoothing, otherwise artifacts can appear on the surface (for more details see the paper). The `mesh-cleanup` flag enables a general purpose marching cubes mesh cleanup procedure that removes small sliver triangles everywhere on the mesh. The `decimate-barnacles` flag enables a more targeted decimation that only removes specific triangle configurations that are problematic for the smoothing. The former approach results in a "nicer" mesh overall but can be slower than the latter.
+ - `--normals-smoothing-iters=10`: If normals are being exported (with `--normals=on`), this results in an even smoother appearance during rendering.
+
+For the reconstruction parameters in conjunction with the weighted smoothing, we recommend parameters close to the simulation parameters.
+That means selecting the same particle radius as in the simulation, a corresponding smoothing length (e.g. for SPlisHSPlasH a value of `2.0`), a surface-threshold between `0.6` and `0.7` and a cube size usually between `0.5` and `1.0`.
+
+A full invocation of the tool might look like this:
+```
+splashsurf reconstruct particles.vtk -r=0.025 -l=2.0 -c=0.5 -t=0.6 --subdomain-grid=on --mesh-cleanup=on --mesh-smoothing-weights=on --mesh-smoothing-iters=25 --normals=on --normals-smoothing-iters=10
+```
+
### Benchmark example
For example:
```
@@ -173,9 +226,19 @@ Note that the tool collects all existing filenames as soon as the command is inv
The first and last file of a sequence that should be processed can be specified with the `-s`/`--start-index` and/or `-e`/`--end-index` arguments.
By specifying the flag `--mt-files=on`, several files can be processed in parallel.
-If this is enabled, you should ideally also set `--mt-particles=off` as enabling both will probably degrade performance.
+If this is enabled, you should also set `--mt-particles=off` as enabling both will probably degrade performance.
The combination of `--mt-files=on` and `--mt-particles=off` can be faster if many files with only few particles have to be processed.
+The number of threads can be influenced using the `--num-threads`/`-n` argument or the `RAYON_NUM_THREADS` environment variable.
+
+**NOTE:** Currently, some functions do not have a sequential implementation and always parallelize over the particles or the mesh/domain.
+This includes:
+ - the new "subdomain-grid" domain decomposition approach, as an alternative to the previous octree-based approach
+ - some post-processing functionality (interpolation of smoothing weights, interpolation of normals & other fluid attributes)
+
+Using the `--mt-particles=off` argument does not have an effect on these parts of the surface reconstruction.
+For now, it is therefore recommended to not parallelize over multiple files if this functionality is used.
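As a companion to the [weighted surface smoothing](#weighted-surface-smoothing) section above, here is a minimal, illustrative Rust sketch of weight-modulated Laplacian smoothing. This is not the actual `splashsurf_lib` implementation: the function name and data layout are made up, and the derivation of the per-vertex weights from the weighted neighbor count (the quantity controlled by `--mesh-smoothing-weights-normalization`) is omitted; see the paper for the real weighting scheme.

```rust
/// Illustrative weighted Laplacian smoothing (not the actual `splashsurf_lib` code).
/// `neighbors[i]` lists the vertices sharing an edge with vertex `i`, and
/// `weights[i]` in [0, 1] controls how strongly vertex `i` is smoothed
/// (0 = keep fixed, preserving features; 1 = full Laplacian step).
fn weighted_laplacian_smoothing(
    vertices: &mut [[f64; 3]],
    neighbors: &[Vec<usize>],
    weights: &[f64],
    iterations: usize,
) {
    for _ in 0..iterations {
        let old: Vec<[f64; 3]> = vertices.to_vec();
        for (i, v) in vertices.iter_mut().enumerate() {
            if neighbors[i].is_empty() {
                continue;
            }
            // Average position of the 1-ring neighborhood
            let mut avg = [0.0f64; 3];
            for &j in &neighbors[i] {
                for d in 0..3 {
                    avg[d] += old[j][d];
                }
            }
            for d in 0..3 {
                avg[d] /= neighbors[i].len() as f64;
            }
            // Blend between the old position and the neighborhood average
            for d in 0..3 {
                v[d] = (1.0 - weights[i]) * old[i][d] + weights[i] * avg[d];
            }
        }
    }
}
```

Setting all weights to `1.0` recovers plain Laplacian smoothing, which visibly shrinks the fluid volume; the feature weights counteract this by keeping detailed regions such as isolated particles close to their original positions, as described above.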
+ ## Input file formats ### VTK @@ -236,16 +299,18 @@ The file format is inferred from the extension of output filename. ### The `reconstruct` command ``` -splashsurf-reconstruct (v0.9.3) - Reconstruct a surface from particle data +splashsurf-reconstruct (v0.10.0) - Reconstruct a surface from particle data Usage: splashsurf reconstruct [OPTIONS] --particle-radius --smoothing-length --cube-size Options: + -q, --quiet Enable quiet mode (no output except for severe panic messages), overrides verbosity level + -v... Print more verbose output, use multiple "v"s for even more verbose output (-v, -vv) -h, --help Print help -V, --version Print version Input/output: - -o, --output-file Filename for writing the reconstructed surface to disk (default: "{original_filename}_surface.vtk") + -o, --output-file Filename for writing the reconstructed surface to disk (supported formats: VTK, PLY, OBJ, default: "{original_filename}_surface.vtk") --output-dir Optional base directory for all output files (default: current working directory) -s, --start-index Index of the first input file to process when processing a sequence of files (default: lowest index of the sequence) -e, --end-index Index of the last input file to process when processing a sequence of files (default: highest index of the sequence) @@ -262,39 +327,79 @@ Numerical reconstruction parameters: The cube edge length used for marching cubes in multiples of the particle radius, corresponds to the cell size of the implicit background grid -t, --surface-threshold The iso-surface threshold for the density, i.e. the normalized value of the reconstructed density level that indicates the fluid surface (in multiples of the rest density) [default: 0.6] --domain-min + --particle-aabb-min Lower corner of the domain where surface reconstruction should be performed (requires domain-max to be specified) --domain-max + --particle-aabb-max Upper corner of the domain where surface reconstruction should be performed (requires domain-min to be specified) Advanced parameters: - -d, --double-precision= Whether to enable the use of double precision for all computations [default: off] [possible values: off, on] --mt-files= Flag to enable multi-threading to process multiple input files in parallel [default: off] [possible values: off, on] --mt-particles= Flag to enable multi-threading for a single input file by processing chunks of particles in parallel [default: on] [possible values: off, on] + -d, --double-precision= Enable the use of double precision for all computations [default: off] [possible values: off, on] + --mt-files= Enable multi-threading to process multiple input files in parallel (NOTE: Currently, the subdomain-grid domain decomposition approach and some post-processing functions including interpolation do not have sequential versions and therefore do not work well with this option enabled) [default: off] [possible values: off, on] + --mt-particles= Enable multi-threading for a single input file by processing chunks of particles in parallel [default: on] [possible values: off, on] -n, --num-threads Set the number of threads for the worker thread pool -Octree (domain decomposition) parameters: +Domain decomposition (octree or grid) parameters: + --subdomain-grid= + Enable spatial decomposition using a regular grid-based approach [default: off] [possible values: off, on] + --subdomain-cubes + Each subdomain will be a cube consisting of this number of MC cube cells along each coordinate axis [default: 64] --octree-decomposition= - Whether to enable
spatial decomposition using an octree (faster) instead of a global approach [default: on] [possible values: off, on] + Enable spatial decomposition using an octree (faster) instead of a global approach [default: on] [possible values: off, on] --octree-stitch-subdomains= - Whether to enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed) [default: on] [possible values: off, on] + Enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed) [default: on] [possible values: off, on] --octree-max-particles The maximum number of particles for leaf nodes of the octree, default is to compute it based on the number of threads and particles --octree-ghost-margin-factor Safety factor applied to the kernel compact support radius when it's used as a margin to collect ghost particles in the leaf nodes when performing the spatial decomposition --octree-global-density= - Whether to compute particle densities in a global step before domain decomposition (slower) [default: off] [possible values: off, on] + Enable computing particle densities in a global step before domain decomposition (slower) [default: off] [possible values: off, on] --octree-sync-local-density= - Whether to compute particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended). Note: if both this and global particle density computation is disabled the ghost particle margin has to be increased to at least 2.0 to compute correct density values for ghost particles [default: on] [possible values: off, on] + Enable computing particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended). Note: if both this and global particle density computation is disabled the ghost particle margin has to be increased to at least 2.0 to compute correct density values for ghost particles [default: on] [possible values: off, on] -Interpolation: +Interpolation & normals: --normals= - Whether to compute surface normals at the mesh vertices and write them to the output file [default: off] [possible values: off, on] + Enable computing surface normals at the mesh vertices and write them to the output file [default: off] [possible values: off, on] --sph-normals= - Whether to compute the normals using SPH interpolation (smoother and more true to actual fluid surface, but slower) instead of just using area weighted triangle normals [default: on] [possible values: off, on] + Enable computing the normals using SPH interpolation instead of using the area weighted triangle normals [default: off] [possible values: off, on] + --normals-smoothing-iters + Number of smoothing iterations to run on the normal field if normal interpolation is enabled (disabled by default) + --output-raw-normals= + Enable writing raw normals without smoothing to the output mesh if normal smoothing is enabled [default: off] [possible values: off, on] --interpolate-attributes List of point attribute field names from the input file that should be interpolated to the reconstructed surface.
Currently this is only supported for VTK and VTU input files. +Postprocessing: + --mesh-cleanup= + Enable MC specific mesh decimation/simplification which removes bad quality triangles typically generated by MC [default: off] [possible values: off, on] + --decimate-barnacles= + Enable decimation of some typical bad marching cubes triangle configurations (resulting in "barnacles" after Laplacian smoothing) [default: off] [possible values: off, on] + --keep-verts= + Enable keeping vertices without connectivity during decimation instead of filtering them out (faster and helps with debugging) [default: off] [possible values: off, on] + --mesh-smoothing-iters + Number of smoothing iterations to run on the reconstructed mesh + --mesh-smoothing-weights= + Enable feature weights for mesh smoothing if mesh smoothing is enabled. Preserves isolated particles even under strong smoothing [default: off] [possible values: off, on] + --mesh-smoothing-weights-normalization + Normalization value from weighted number of neighbors to mesh smoothing weights [default: 13.0] + --output-smoothing-weights= + Enable writing the smoothing weights as a vertex attribute to the output mesh file [default: off] [possible values: off, on] + --generate-quads= + Enable trying to convert triangles to quads if they meet quality criteria [default: off] [possible values: off, on] + --quad-max-edge-diag-ratio + Maximum allowed ratio of quad edge lengths to its diagonals to merge two triangles to a quad (inverse is used for minimum) [default: 1.75] + --quad-max-normal-angle + Maximum allowed angle (in degrees) between triangle normals to merge them to a quad [default: 10] + --quad-max-interior-angle + Maximum allowed vertex interior angle (in degrees) inside of a quad to merge two triangles to a quad [default: 135] + --mesh-aabb-min + Lower corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-max to be specified) + --mesh-aabb-max + Upper corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-min to be specified) + --mesh-aabb-clamp-verts= + Enable clamping of vertices outside of the specified mesh AABB to the AABB (only has an effect if mesh-aabb-min/max are specified) [default: off] [possible values: off, on] + --output-raw-mesh= + Enable writing the raw reconstructed mesh before applying any post-processing steps [default: off] [possible values: off, on] + Debug options: --output-dm-points Optional filename for writing the point cloud representation of the intermediate density map to disk @@ -303,7 +408,13 @@ Debug options: --output-dm-grid Optional filename for writing the grid representation of the intermediate density map to disk --output-octree Optional filename for writing the octree used to partition the particles to disk --check-mesh= - Whether to check the final mesh for topological problems such as holes (note that when stitching is disabled this will lead to a lot of reported problems) [default: off] [possible values: off, on] + Enable checking the final mesh for holes and non-manifold edges and vertices [default: off] [possible values: off, on] + --check-mesh-closed= + Enable checking the final mesh for holes [default: off] [possible values: off, on] + --check-mesh-manifold= + Enable checking the final mesh for non-manifold edges and vertices [default: off] [possible values: off, on] + --check-mesh-debug= + Enable debug output for the check-mesh operations (has no effect if no other check-mesh option is enabled) [default: off] [possible values: off, on] ``` ### The `convert` subcommand @@ -337,6 +448,6 @@ Options:
# License -For license information of this project, see the LICENSE file. +For license information of this project, see the [LICENSE](LICENSE) file. The splashsurf logo is based on two graphics ([1](https://www.svgrepo.com/svg/295647/wave), [2](https://www.svgrepo.com/svg/295652/surfboard-surfboard)) published on SVG Repo under a CC0 ("No Rights Reserved") license. The dragon model shown in the images on this page is part of the ["Stanford 3D Scanning Repository"](https://graphics.stanford.edu/data/3Dscanrep/). diff --git a/splashsurf/src/io.rs b/splashsurf/src/io.rs index 9232c73..c58f805 100644 --- a/splashsurf/src/io.rs +++ b/splashsurf/src/io.rs @@ -1,14 +1,12 @@ use crate::io::vtk_format::VtkFile; use anyhow::{anyhow, Context}; use log::{info, warn}; -use splashsurf_lib::mesh::MeshAttribute; +use splashsurf_lib::mesh::{ + IntoVtkUnstructuredGridPiece, Mesh3d, MeshAttribute, MeshWithData, TriMesh3d, +}; use splashsurf_lib::nalgebra::Vector3; use splashsurf_lib::Real; use splashsurf_lib::{io, profile}; -use splashsurf_lib::{ - mesh::{Mesh3d, MeshWithData, TriMesh3d}, - vtkio::model::DataSet, -}; use std::collections::HashSet; use std::fs::File; use std::io::{BufWriter, Write}; @@ -245,7 +243,7 @@ pub fn write_mesh<'a, R: Real, MeshT: Mesh3d, P: AsRef>( _format_params: &OutputFormatParameters, ) -> Result<(), anyhow::Error> where - &'a MeshWithData: Into, + for<'b> &'b MeshWithData: IntoVtkUnstructuredGridPiece, { let output_file = output_file.as_ref(); info!( diff --git a/splashsurf/src/reconstruction.rs b/splashsurf/src/reconstruction.rs index b155c99..6b0ca4c 100644 --- a/splashsurf/src/reconstruction.rs +++ b/splashsurf/src/reconstruction.rs @@ -1,17 +1,14 @@ use crate::{io, logging}; use anyhow::{anyhow, Context}; -use arguments::{ - ReconstructionRunnerArgs, ReconstructionRunnerPathCollection, ReconstructionRunnerPaths, -}; use clap::value_parser; use indicatif::{ProgressBar, ProgressStyle}; use log::info; use rayon::prelude::*; use splashsurf_lib::mesh::{AttributeData, Mesh3d, MeshAttribute, MeshWithData, PointCloud3d}; use splashsurf_lib::nalgebra::{Unit, Vector3}; -use splashsurf_lib::profile; use splashsurf_lib::sph_interpolation::SphInterpolator; -use splashsurf_lib::{density_map, Index, Real}; +use splashsurf_lib::{density_map, profile, Aabb3d, Index, Real}; +use std::borrow::Cow; use std::convert::TryFrom; use std::path::PathBuf; @@ -24,7 +21,7 @@ static ARGS_BASIC: &str = "Numerical reconstruction parameters"; static ARGS_ADV: &str = "Advanced parameters"; static ARGS_OCTREE: &str = "Domain decomposition (octree or grid) parameters"; static ARGS_DEBUG: &str = "Debug options"; -static ARGS_INTERP: &str = "Interpolation"; +static ARGS_INTERP: &str = "Interpolation & normals"; static ARGS_POSTPROC: &str = "Postprocessing"; static ARGS_OTHER: &str = "Remaining options"; @@ -65,7 +62,7 @@ pub struct ReconstructSubcommandArgs { #[arg(help_heading = ARGS_BASIC, short = 't', long, default_value = "0.6")] pub surface_threshold: f64, - /// Whether to enable the use of double precision for all computations
have sequential versions and therefore do not work well with this option enabled) #[arg( help_heading = ARGS_ADV, long = "mt-files", @@ -107,7 +104,7 @@ pub struct ReconstructSubcommandArgs { require_equals = true )] pub parallelize_over_files: Switch, - /// Flag to enable multi-threading for a single input file by processing chunks of particles in parallel + /// Enable multi-threading for a single input file by processing chunks of particles in parallel #[arg( help_heading = ARGS_ADV, long = "mt-particles", @@ -121,7 +118,7 @@ pub struct ReconstructSubcommandArgs { #[arg(help_heading = ARGS_ADV, long, short = 'n')] pub num_threads: Option, - /// Whether to enable spatial decomposition using a regular grid-based approach + /// Enable spatial decomposition using a regular grid-based approach #[arg( help_heading = ARGS_OCTREE, long, @@ -135,7 +132,7 @@ pub struct ReconstructSubcommandArgs { #[arg(help_heading = ARGS_OCTREE, long, default_value="64")] pub subdomain_cubes: u32, - /// Whether to enable spatial decomposition using an octree (faster) instead of a global approach + /// Enable spatial decomposition using an octree (faster) instead of a global approach #[arg( help_heading = ARGS_OCTREE, long, @@ -145,7 +142,7 @@ pub struct ReconstructSubcommandArgs { require_equals = true )] pub octree_decomposition: Switch, - /// Whether to enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed) + /// Enable stitching of the disconnected local meshes resulting from the reconstruction when spatial decomposition is enabled (slower, but without stitching meshes will not be closed) #[arg( help_heading = ARGS_OCTREE, long, @@ -161,7 +158,7 @@ pub struct ReconstructSubcommandArgs { /// Safety factor applied to the kernel compact support radius when it's used as a margin to collect ghost particles in the leaf nodes when performing the spatial decomposition #[arg(help_heading = ARGS_OCTREE, long)] pub octree_ghost_margin_factor: Option, - /// Whether to compute particle densities in a global step before domain decomposition (slower) + /// Enable computing particle densities in a global step before domain decomposition (slower) #[arg( help_heading = ARGS_OCTREE, long, @@ -171,7 +168,7 @@ pub struct ReconstructSubcommandArgs { require_equals = true )] pub octree_global_density: Switch, - /// Whether to compute particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended). + /// Enable computing particle densities per subdomain but synchronize densities for ghost-particles (faster, recommended). /// Note: if both this and global particle density computation is disabled the ghost particle margin has to be increased to at least 2.0 /// to compute correct density values for ghost particles. 
#[arg( @@ -184,7 +181,7 @@ )] pub octree_sync_local_density: Switch, - /// Whether to compute surface normals at the mesh vertices and write them to the output file + /// Enable computing surface normals at the mesh vertices and write them to the output file #[arg( help_heading = ARGS_INTERP, long, @@ -194,21 +191,111 @@ pub struct ReconstructSubcommandArgs { require_equals = true )] pub normals: Switch, - /// Whether to compute the normals using SPH interpolation (smoother and more true to actual fluid surface, but slower) instead of just using area weighted triangle normals + /// Enable computing the normals using SPH interpolation instead of using the area weighted triangle normals #[arg( help_heading = ARGS_INTERP, long, - default_value = "on", + default_value = "off", value_name = "off|on", ignore_case = true, require_equals = true )] pub sph_normals: Switch, + /// Number of smoothing iterations to run on the normal field if normal interpolation is enabled (disabled by default) + #[arg(help_heading = ARGS_INTERP, long)] + pub normals_smoothing_iters: Option, + /// Enable writing raw normals without smoothing to the output mesh if normal smoothing is enabled + #[arg( + help_heading = ARGS_INTERP, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub output_raw_normals: Switch, /// List of point attribute field names from the input file that should be interpolated to the reconstructed surface. Currently this is only supported for VTK and VTU input files. #[arg(help_heading = ARGS_INTERP, long)] pub interpolate_attributes: Vec, - /// Lower corner of the bounding-box for the surface mesh, mesh outside gets cut away (requires mesh-max to be specified) + /// Enable MC specific mesh decimation/simplification which removes bad quality triangles typically generated by MC + #[arg( + help_heading = ARGS_POSTPROC, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub mesh_cleanup: Switch, + /// Enable decimation of some typical bad marching cubes triangle configurations (resulting in "barnacles" after Laplacian smoothing) + #[arg( + help_heading = ARGS_POSTPROC, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub decimate_barnacles: Switch, + /// Enable keeping vertices without connectivity during decimation instead of filtering them out (faster and helps with debugging) + #[arg( + help_heading = ARGS_POSTPROC, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub keep_verts: Switch, + /// Number of smoothing iterations to run on the reconstructed mesh + #[arg(help_heading = ARGS_POSTPROC, long)] + pub mesh_smoothing_iters: Option, + /// Enable feature weights for mesh smoothing if mesh smoothing is enabled. Preserves isolated particles even under strong smoothing.
+ #[arg( + help_heading = ARGS_POSTPROC, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub mesh_smoothing_weights: Switch, + /// Normalization value from weighted number of neighbors to mesh smoothing weights + #[arg(help_heading = ARGS_POSTPROC, long, default_value = "13.0")] + pub mesh_smoothing_weights_normalization: f64, + /// Enable writing the smoothing weights as a vertex attribute to the output mesh file + #[arg( + help_heading = ARGS_POSTPROC, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub output_smoothing_weights: Switch, + + /// Enable trying to convert triangles to quads if they meet quality criteria + #[arg( + help_heading = ARGS_POSTPROC, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub generate_quads: Switch, + /// Maximum allowed ratio of quad edge lengths to its diagonals to merge two triangles to a quad (inverse is used for minimum) + #[arg(help_heading = ARGS_POSTPROC, long, default_value = "1.75")] + pub quad_max_edge_diag_ratio: f64, + /// Maximum allowed angle (in degrees) between triangle normals to merge them to a quad + #[arg(help_heading = ARGS_POSTPROC, long, default_value = "10")] + pub quad_max_normal_angle: f64, + /// Maximum allowed vertex interior angle (in degrees) inside of a quad to merge two triangles to a quad + #[arg(help_heading = ARGS_POSTPROC, long, default_value = "135")] + pub quad_max_interior_angle: f64, + + /// Lower corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-max to be specified) #[arg( help_heading = ARGS_POSTPROC, long, @@ -218,7 +305,7 @@ pub struct ReconstructSubcommandArgs { requires = "mesh_aabb_max", )] pub mesh_aabb_min: Option>, - /// Upper corner of the bounding-box for the surface mesh, mesh outside gets cut away (requires mesh-min to be specified) + /// Upper corner of the bounding-box for the surface mesh, triangles completely outside are removed (requires mesh-aabb-min to be specified) #[arg( help_heading = ARGS_POSTPROC, long, @@ -228,6 +315,27 @@ pub struct ReconstructSubcommandArgs { requires = "mesh_aabb_min", )] pub mesh_aabb_max: Option>, + /// Enable clamping of vertices outside of the specified mesh AABB to the AABB (only has an effect if mesh-aabb-min/max are specified) + #[arg( + help_heading = ARGS_POSTPROC, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub mesh_aabb_clamp_verts: Switch, + + /// Enable writing the raw reconstructed mesh before applying any post-processing steps + #[arg( + help_heading = ARGS_POSTPROC, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub output_raw_mesh: Switch, /// Optional filename for writing the point cloud representation of the intermediate density map to disk #[arg(help_heading = ARGS_DEBUG, long, value_parser = value_parser!(PathBuf))] @@ -238,7 +346,7 @@ pub struct ReconstructSubcommandArgs { /// Optional filename for writing the octree used to partition the particles to disk #[arg(help_heading = ARGS_DEBUG, long, value_parser = value_parser!(PathBuf))] pub output_octree: Option, - /// Whether to check the final mesh for topological problems such as holes (note that when stitching is disabled this will lead to a lot of reported problems) + /// Enable checking the final mesh for holes and 
non-manifold edges and vertices #[arg( help_heading = ARGS_DEBUG, long, @@ -248,6 +356,36 @@ pub struct ReconstructSubcommandArgs { require_equals = true )] pub check_mesh: Switch, + /// Enable checking the final mesh for holes + #[arg( + help_heading = ARGS_DEBUG, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub check_mesh_closed: Switch, + /// Enable checking the final mesh for non-manifold edges and vertices + #[arg( + help_heading = ARGS_DEBUG, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub check_mesh_manifold: Switch, + /// Enable debug output for the check-mesh operations (has no effect if no other check-mesh option is enabled) + #[arg( + help_heading = ARGS_DEBUG, + long, + default_value = "off", + value_name = "off|on", + ignore_case = true, + require_equals = true + )] + pub check_mesh_debug: Switch, } #[derive(Copy, Clone, Debug, PartialEq, Eq, clap::ValueEnum)] @@ -342,15 +480,33 @@ mod arguments { use walkdir::WalkDir; pub struct ReconstructionRunnerPostprocessingArgs { - pub check_mesh: bool, + pub check_mesh_closed: bool, + pub check_mesh_manifold: bool, + pub check_mesh_debug: bool, + pub mesh_cleanup: bool, + pub decimate_barnacles: bool, + pub keep_vertices: bool, pub compute_normals: bool, pub sph_normals: bool, + pub normals_smoothing_iters: Option, pub interpolate_attributes: Vec, + pub mesh_smoothing_iters: Option, + pub mesh_smoothing_weights: bool, + pub mesh_smoothing_weights_normalization: f64, + pub generate_quads: bool, + pub quad_max_edge_diag_ratio: f64, + pub quad_max_normal_angle: f64, + pub quad_max_interior_angle: f64, + pub output_mesh_smoothing_weights: bool, + pub output_raw_normals: bool, + pub output_raw_mesh: bool, pub mesh_aabb: Option>, + pub mesh_aabb_clamp_vertices: bool, } /// All arguments that can be supplied to the surface reconstruction tool converted to useful types pub struct ReconstructionRunnerArgs { + /// Parameters passed directly to the surface reconstruction pub params: splashsurf_lib::Parameters, pub use_double_precision: bool, pub io_params: io::FormatParameters, @@ -468,6 +624,7 @@ mod arguments { particle_aabb, enable_multi_threading: args.parallelize_over_particles.into_bool(), spatial_decomposition, + global_neighborhood_list: args.mesh_smoothing_weights.into_bool(), }; // Optionally initialize thread pool @@ -476,11 +633,30 @@ mod arguments { } let postprocessing = ReconstructionRunnerPostprocessingArgs { - check_mesh: args.check_mesh.into_bool(), + check_mesh_closed: args.check_mesh.into_bool() + || args.check_mesh_closed.into_bool(), + check_mesh_manifold: args.check_mesh.into_bool() + || args.check_mesh_manifold.into_bool(), + check_mesh_debug: args.check_mesh_debug.into_bool(), + mesh_cleanup: args.mesh_cleanup.into_bool(), + decimate_barnacles: args.decimate_barnacles.into_bool(), + keep_vertices: args.keep_verts.into_bool(), compute_normals: args.normals.into_bool(), sph_normals: args.sph_normals.into_bool(), + normals_smoothing_iters: args.normals_smoothing_iters, interpolate_attributes: args.interpolate_attributes.clone(), + mesh_smoothing_iters: args.mesh_smoothing_iters, + mesh_smoothing_weights: args.mesh_smoothing_weights.into_bool(), + mesh_smoothing_weights_normalization: args.mesh_smoothing_weights_normalization, + generate_quads: args.generate_quads.into_bool(), + quad_max_edge_diag_ratio: args.quad_max_edge_diag_ratio, + quad_max_normal_angle: args.quad_max_normal_angle, + 
quad_max_interior_angle: args.quad_max_interior_angle, + output_mesh_smoothing_weights: args.output_smoothing_weights.into_bool(), + output_raw_normals: args.output_raw_normals.into_bool(), + output_raw_mesh: args.output_raw_mesh.into_bool(), mesh_aabb, + mesh_aabb_clamp_vertices: args.mesh_aabb_clamp_verts.into_bool(), }; Ok(ReconstructionRunnerArgs { @@ -863,84 +1039,294 @@ pub(crate) fn reconstruction_pipeline_generic( // Perform the surface reconstruction let reconstruction = - splashsurf_lib::reconstruct_surface::(particle_positions.as_slice(), ¶ms)?; + splashsurf_lib::reconstruct_surface::(particle_positions.as_slice(), params)?; let grid = reconstruction.grid(); - let mesh = reconstruction.mesh(); + let mut mesh_with_data = MeshWithData::new(Cow::Borrowed(reconstruction.mesh())); - let mesh = if let Some(aabb) = &postprocessing.mesh_aabb { - profile!("clamp mesh to aabb"); - info!("Post-processing: Clamping mesh to AABB..."); + if postprocessing.output_raw_mesh { + profile!("write surface mesh to file"); - let mut mesh = mesh.clone(); - mesh.clamp_with_aabb( - &aabb - .try_convert() - .ok_or_else(|| anyhow!("Failed to convert mesh AABB"))?, + let output_path = paths + .output_file + .parent() + // Add a trailing separator if the parent is non-empty + .map(|p| p.join("")) + .unwrap_or_else(PathBuf::new); + let output_filename = format!( + "raw_{}", + paths.output_file.file_name().unwrap().to_string_lossy() ); - mesh - } else { - mesh.clone() - }; - - // Add normals to mesh if requested - let mesh = if postprocessing.compute_normals || !attributes.is_empty() { - profile!("compute normals"); + let raw_output_file = output_path.join(output_filename); info!( - "Constructing global acceleration structure for SPH interpolation to {} vertices...", - mesh.vertices.len() + "Writing unprocessed surface mesh to \"{}\"...", + raw_output_file.display() ); - let particle_rest_density = params.rest_density; - let particle_rest_volume = R::from_f64((4.0 / 3.0) * std::f64::consts::PI).unwrap() - * params.particle_radius.powi(3); - let particle_rest_mass = particle_rest_volume * particle_rest_density; - - let particle_densities = reconstruction - .particle_densities() - .ok_or_else(|| anyhow::anyhow!("Particle densities were not returned by surface reconstruction but are required for SPH normal computation"))? 
- .as_slice(); - assert_eq!( - particle_positions.len(), - particle_densities.len(), - "There has to be one density value per particle" - ); + io::write_mesh(&mesh_with_data, raw_output_file, &io_params.output).with_context(|| { + anyhow!( + "Failed to write raw output mesh to file \"{}\"", + paths.output_file.display() + ) + })?; + } - let interpolator = SphInterpolator::new( - &particle_positions, - particle_densities, - particle_rest_mass, - params.compact_support_radius, - ); + // Perform post-processing + { + profile!("postprocessing"); + + let mut vertex_connectivity = None; + + if postprocessing.mesh_cleanup { + info!("Post-processing: Performing mesh cleanup"); + let tris_before = mesh_with_data.mesh.triangles.len(); + let verts_before = mesh_with_data.mesh.vertices.len(); + vertex_connectivity = Some(splashsurf_lib::postprocessing::marching_cubes_cleanup( + mesh_with_data.mesh.to_mut(), + reconstruction.grid(), + 5, + postprocessing.keep_vertices, + )); + let tris_after = mesh_with_data.mesh.triangles.len(); + let verts_after = mesh_with_data.mesh.vertices.len(); + info!("Post-processing: Cleanup reduced number of vertices to {:.2}% and number of triangles to {:.2}% of original mesh.", (verts_after as f64 / verts_before as f64) * 100.0, (tris_after as f64 / tris_before as f64) * 100.0) + } + + // Decimate mesh if requested + if postprocessing.decimate_barnacles { + info!("Post-processing: Performing decimation"); + vertex_connectivity = Some(splashsurf_lib::postprocessing::decimation( + mesh_with_data.mesh.to_mut(), + postprocessing.keep_vertices, + )); + } + + // Initialize SPH interpolator if required later + let interpolator_required = postprocessing.mesh_smoothing_weights + || postprocessing.sph_normals + || !attributes.is_empty(); + let interpolator = if interpolator_required { + profile!("initialize interpolator"); + info!("Post-processing: Initializing interpolator..."); + + info!( + "Constructing global acceleration structure for SPH interpolation to {} vertices...", + mesh_with_data.vertices().len() + ); + + let particle_rest_density = params.rest_density; + let particle_rest_volume = R::from_f64((4.0 / 3.0) * std::f64::consts::PI).unwrap() + * params.particle_radius.powi(3); + let particle_rest_mass = particle_rest_volume * particle_rest_density; + + let particle_densities = reconstruction + .particle_densities() + .ok_or_else(|| anyhow::anyhow!("Particle densities were not returned by surface reconstruction but are required for SPH normal computation"))? 
+ .as_slice(); + assert_eq!( + particle_positions.len(), + particle_densities.len(), + "There has to be one density value per particle" + ); + + Some(SphInterpolator::new( + &particle_positions, + particle_densities, + particle_rest_mass, + params.compact_support_radius, + )) + } else { + None + }; - let mut mesh_with_data = MeshWithData::new(mesh); - let mesh = &mesh_with_data.mesh; + // Compute mesh vertex-vertex connectivity map if required later + let vertex_connectivity_required = postprocessing.normals_smoothing_iters.is_some() + || postprocessing.mesh_smoothing_iters.is_some(); + if vertex_connectivity.is_none() && vertex_connectivity_required { + vertex_connectivity = Some(mesh_with_data.mesh.vertex_vertex_connectivity()); + } - // Compute normals if requested + // Compute smoothing weights if requested + let smoothing_weights = if postprocessing.mesh_smoothing_weights { + profile!("compute smoothing weights"); + info!("Post-processing: Computing smoothing weights..."); + + // TODO: Switch between parallel/single threaded + // TODO: Re-use data from reconstruction? + + // Global neighborhood search + let nl = reconstruction + .particle_neighbors() + .map(|nl| Cow::Borrowed(nl)) + .unwrap_or_else(|| + { + let search_radius = params.compact_support_radius; + + let mut domain = Aabb3d::from_points(particle_positions.as_slice()); + domain.grow_uniformly(search_radius); + + let mut nl = Vec::new(); + splashsurf_lib::neighborhood_search::neighborhood_search_spatial_hashing_parallel::( + &domain, + particle_positions.as_slice(), + search_radius, + &mut nl, + ); + assert_eq!(nl.len(), particle_positions.len()); + Cow::Owned(nl) + } + ); + + // Compute weighted neighbor count + let squared_r = params.compact_support_radius * params.compact_support_radius; + let weighted_ncounts = nl + .par_iter() + .enumerate() + .map(|(i, nl)| { + nl.iter() + .copied() + .map(|j| { + let dist = + (particle_positions[i] - particle_positions[j]).norm_squared(); + let weight = R::one() - (dist / squared_r).clamp(R::zero(), R::one()); + return weight; + }) + .fold(R::zero(), R::add) + }) + .collect::>(); + + let vertex_weighted_num_neighbors = { + profile!("interpolate weighted neighbor counts"); + interpolator + .as_ref() + .expect("interpolator is required") + .interpolate_scalar_quantity( + weighted_ncounts.as_slice(), + &mesh_with_data.vertices(), + true, + ) + }; + + let smoothing_weights = { + let offset = R::zero(); + let normalization = + R::from_f64(postprocessing.mesh_smoothing_weights_normalization).expect( + "smoothing weight normalization value cannot be represented as Real type", + ) - offset; + + // Normalize number of neighbors + let smoothing_weights = vertex_weighted_num_neighbors + .par_iter() + .copied() + .map(|n| (n - offset).max(R::zero())) + .map(|n| (n / normalization).min(R::one())) + // Smooth-Step function + .map(|x| x.powi(5).times(6) - x.powi(4).times(15) + x.powi(3).times(10)) + .collect::>(); + + if postprocessing.output_mesh_smoothing_weights { + // Raw distance-weighted number of neighbors value per vertex (can be used to determine normalization value) + mesh_with_data.point_attributes.push(MeshAttribute::new( + "wnn".to_string(), + AttributeData::ScalarReal(vertex_weighted_num_neighbors), + )); + // Final smoothing weights per vertex + mesh_with_data.point_attributes.push(MeshAttribute::new( + "sw".to_string(), + AttributeData::ScalarReal(smoothing_weights.clone()), + )); + } + + smoothing_weights + }; + + Some(smoothing_weights) + } else { + None + }; + + // Perform 
smoothing if requested + if let Some(mesh_smoothing_iters) = postprocessing.mesh_smoothing_iters { + profile!("mesh smoothing"); + info!("Post-processing: Smoothing mesh..."); + + // TODO: Switch between parallel/single threaded + + let smoothing_weights = smoothing_weights + .unwrap_or_else(|| vec![R::one(); mesh_with_data.vertices().len()]); + + splashsurf_lib::postprocessing::par_laplacian_smoothing_inplace( + mesh_with_data.mesh.to_mut(), + vertex_connectivity + .as_ref() + .expect("vertex connectivity is required"), + mesh_smoothing_iters, + R::one(), + &smoothing_weights, + ); + } + + // Add normals to mesh if requested if postprocessing.compute_normals { + profile!("compute normals"); + info!("Post-processing: Computing surface normals..."); + + // Compute normals let normals = if postprocessing.sph_normals { info!("Using SPH interpolation to compute surface normals"); - let sph_normals = interpolator.interpolate_normals(mesh.vertices()); + let sph_normals = interpolator + .as_ref() + .expect("interpolator is required") + .interpolate_normals(mesh_with_data.vertices()); bytemuck::allocation::cast_vec::>, Vector3>(sph_normals) } else { info!("Using area weighted triangle normals for surface normals"); profile!("mesh.par_vertex_normals"); - let tri_normals = mesh.par_vertex_normals(); + let tri_normals = mesh_with_data.mesh.par_vertex_normals(); // Convert unit vectors to plain vectors bytemuck::allocation::cast_vec::>, Vector3>(tri_normals) }; - mesh_with_data.point_attributes.push(MeshAttribute::new( - "normals".to_string(), - AttributeData::Vector3Real(normals), - )); + // Smooth normals + if let Some(smoothing_iters) = postprocessing.normals_smoothing_iters { + info!("Post-processing: Smoothing normals..."); + + let mut smoothed_normals = normals.clone(); + splashsurf_lib::postprocessing::par_laplacian_smoothing_normals_inplace( + &mut smoothed_normals, + vertex_connectivity + .as_ref() + .expect("vertex connectivity is required"), + smoothing_iters, + ); + + mesh_with_data.point_attributes.push(MeshAttribute::new( + "normals".to_string(), + AttributeData::Vector3Real(smoothed_normals), + )); + if postprocessing.output_raw_normals { + mesh_with_data.point_attributes.push(MeshAttribute::new( + "raw_normals".to_string(), + AttributeData::Vector3Real(normals), + )); + } + } else { + mesh_with_data.point_attributes.push(MeshAttribute::new( + "normals".to_string(), + AttributeData::Vector3Real(normals), + )); + } } // Interpolate attributes if requested if !attributes.is_empty() { + profile!("interpolate attributes"); + info!("Post-processing: Interpolating attributes..."); + let interpolator = interpolator.as_ref().expect("interpolator is required"); + for attribute in attributes.into_iter() { info!("Interpolating attribute \"{}\"...", attribute.name); @@ -948,7 +1334,7 @@ pub(crate) fn reconstruction_pipeline_generic( AttributeData::ScalarReal(values) => { let interpolated_values = interpolator.interpolate_scalar_quantity( values.as_slice(), - mesh.vertices(), + mesh_with_data.vertices(), true, ); mesh_with_data.point_attributes.push(MeshAttribute::new( @@ -959,7 +1345,7 @@ pub(crate) fn reconstruction_pipeline_generic( AttributeData::Vector3Real(values) => { let interpolated_values = interpolator.interpolate_vector_quantity( values.as_slice(), - mesh.vertices(), + mesh_with_data.vertices(), true, ); mesh_with_data.point_attributes.push(MeshAttribute::new( @@ -971,10 +1357,43 @@ pub(crate) fn reconstruction_pipeline_generic( } } } + } + + // Remove and clamp cells outside of AABB + 
let mesh_with_data = if let Some(mesh_aabb) = &postprocessing.mesh_aabb { + profile!("clamp mesh to aabb"); + info!("Post-processing: Clamping mesh to AABB..."); + mesh_with_data.par_clamp_with_aabb( + &mesh_aabb + .try_convert() + .ok_or_else(|| anyhow!("Failed to convert mesh AABB"))?, + postprocessing.mesh_aabb_clamp_vertices, + postprocessing.keep_vertices, + ) + } else { mesh_with_data + }; + + // Convert triangles to quads + let (tri_mesh, tri_quad_mesh) = if postprocessing.generate_quads { + info!("Post-processing: Convert triangles to quads..."); + let non_squareness_limit = R::from_f64(postprocessing.quad_max_edge_diag_ratio).unwrap(); + let normal_angle_limit_rad = + R::from_f64(postprocessing.quad_max_normal_angle.to_radians()).unwrap(); + let max_interior_angle = + R::from_f64(postprocessing.quad_max_interior_angle.to_radians()).unwrap(); + + let tri_quad_mesh = splashsurf_lib::postprocessing::convert_tris_to_quads( + &mesh_with_data.mesh, + non_squareness_limit, + normal_angle_limit_rad, + max_interior_angle, + ); + + (None, Some(mesh_with_data.with_mesh(tri_quad_mesh))) } else { - MeshWithData::new(mesh) + (Some(mesh_with_data), None) }; // Store the surface mesh @@ -985,7 +1404,17 @@ pub(crate) fn reconstruction_pipeline_generic( paths.output_file.display() ); - io::write_mesh(&mesh, paths.output_file.clone(), &io_params.output).with_context(|| { + match (&tri_mesh, &tri_quad_mesh) { + (Some(mesh), None) => { + io::write_mesh(mesh, paths.output_file.clone(), &io_params.output) + } + (None, Some(mesh)) => { + io::write_mesh(mesh, paths.output_file.clone(), &io_params.output) + } + + _ => unreachable!(), + } + .with_context(|| { anyhow!( "Failed to write output mesh to file \"{}\"", paths.output_file.display() @@ -998,11 +1427,7 @@ pub(crate) fn reconstruction_pipeline_generic( if let Some(output_octree_file) = &paths.output_octree_file { info!("Writing octree to \"{}\"...", output_octree_file.display()); io::vtk_format::write_vtk( - reconstruction - .octree() - .unwrap() - .hexmesh(grid, true) - .to_unstructured_grid(), + reconstruction.octree().unwrap().hexmesh(grid, true), output_octree_file, "mesh", ) @@ -1061,20 +1486,32 @@ pub(crate) fn reconstruction_pipeline_generic( output_density_map_grid_file.display() ); - io::vtk_format::write_vtk( - density_mesh.to_unstructured_grid(), - output_density_map_grid_file, - "density_map", - )?; + io::vtk_format::write_vtk(density_mesh, output_density_map_grid_file, "density_map")?; info!("Done."); } - if postprocessing.check_mesh { - if let Err(err) = splashsurf_lib::marching_cubes::check_mesh_consistency(grid, &mesh.mesh) { + if postprocessing.check_mesh_closed + || postprocessing.check_mesh_manifold + || postprocessing.check_mesh_debug + { + if let Err(err) = match (&tri_mesh, &tri_quad_mesh) { + (Some(mesh), None) => splashsurf_lib::marching_cubes::check_mesh_consistency( + grid, + &mesh.mesh, + postprocessing.check_mesh_closed, + postprocessing.check_mesh_manifold, + postprocessing.check_mesh_debug, + ), + (None, Some(_mesh)) => { + info!("Checking for mesh consistency not implemented for quad mesh at the moment."); + return Ok(()); + } + _ => unreachable!(), + } { return Err(anyhow!("{}", err)); } else { - info!("Checked mesh for problems (holes, etc.), no problems were found."); + info!("Checked mesh for problems (holes: {}, non-manifold edges/vertices: {}), no problems were found.", postprocessing.check_mesh_closed, postprocessing.check_mesh_manifold); } } diff --git a/splashsurf_lib/Cargo.toml b/splashsurf_lib/Cargo.toml index 
5a6ba83..8e2b7ae 100644 --- a/splashsurf_lib/Cargo.toml +++ b/splashsurf_lib/Cargo.toml @@ -51,7 +51,7 @@ fxhash = "0.2" bitflags = "2.4" smallvec = { version = "1.11", features = ["union"] } arrayvec = "0.7" -bytemuck = "1.9" +bytemuck = { version = "1.9", features = ["extern_crate_alloc"] } bytemuck_derive = "1.3" numeric_literals = "0.2" rstar = "0.11" diff --git a/splashsurf_lib/benches/benches/bench_full.rs b/splashsurf_lib/benches/benches/bench_full.rs index 1550c21..8b8cf2d 100644 --- a/splashsurf_lib/benches/benches/bench_full.rs +++ b/splashsurf_lib/benches/benches/bench_full.rs @@ -104,6 +104,7 @@ pub fn surface_reconstruction_dam_break(c: &mut Criterion) { particle_aabb: None, enable_multi_threading: true, spatial_decomposition: None, + global_neighborhood_list: false, }; let mut group = c.benchmark_group("full surface reconstruction"); @@ -202,6 +203,7 @@ pub fn surface_reconstruction_double_dam_break(c: &mut Criterion) { particle_aabb: None, enable_multi_threading: true, spatial_decomposition: None, + global_neighborhood_list: false, }; let mut group = c.benchmark_group("full surface reconstruction"); @@ -300,6 +302,7 @@ pub fn surface_reconstruction_double_dam_break_inplace(c: &mut Criterion) { particle_aabb: None, enable_multi_threading: true, spatial_decomposition: None, + global_neighborhood_list: false, }; let mut group = c.benchmark_group("full surface reconstruction"); diff --git a/splashsurf_lib/benches/benches/bench_mesh.rs b/splashsurf_lib/benches/benches/bench_mesh.rs index 36951e2..6df2448 100644 --- a/splashsurf_lib/benches/benches/bench_mesh.rs +++ b/splashsurf_lib/benches/benches/bench_mesh.rs @@ -33,6 +33,7 @@ fn reconstruct_particles>(particle_file: P) -> SurfaceReconstruct ParticleDensityComputationStrategy::SynchronizeSubdomains, }, )), + global_neighborhood_list: false, }; reconstruct_surface::(particle_positions.as_slice(), ¶meters).unwrap() diff --git a/splashsurf_lib/benches/benches/bench_subdomain_grid.rs b/splashsurf_lib/benches/benches/bench_subdomain_grid.rs index 24dec38..c853d69 100644 --- a/splashsurf_lib/benches/benches/bench_subdomain_grid.rs +++ b/splashsurf_lib/benches/benches/bench_subdomain_grid.rs @@ -25,6 +25,7 @@ fn parameters_canyon() -> Parameters { subdomain_num_cubes_per_dim: 32, }, )), + global_neighborhood_list: false, }; parameters diff --git a/splashsurf_lib/src/aabb.rs b/splashsurf_lib/src/aabb.rs index c149c2d..18db882 100644 --- a/splashsurf_lib/src/aabb.rs +++ b/splashsurf_lib/src/aabb.rs @@ -6,7 +6,7 @@ use std::fmt::Debug; use nalgebra::SVector; use rayon::prelude::*; -use crate::{Real, ThreadSafe}; +use crate::{Real, RealConvert, ThreadSafe}; /// Type representing an axis aligned bounding box in arbitrary dimensions #[derive(Clone, Eq, PartialEq)] @@ -119,8 +119,8 @@ where T: Real, { Some(AxisAlignedBoundingBox::new( - T::try_convert_vec_from(&self.min)?, - T::try_convert_vec_from(&self.max)?, + self.min.try_convert()?, + self.max.try_convert()?, )) } diff --git a/splashsurf_lib/src/dense_subdomains.rs b/splashsurf_lib/src/dense_subdomains.rs index e04b0b5..b82c40c 100644 --- a/splashsurf_lib/src/dense_subdomains.rs +++ b/splashsurf_lib/src/dense_subdomains.rs @@ -19,7 +19,7 @@ use crate::neighborhood_search::{ neighborhood_search_spatial_hashing_flat_filtered, neighborhood_search_spatial_hashing_parallel, FlatNeighborhoodList, }; -use crate::uniform_grid::{EdgeIndex, UniformCartesianCubeGrid3d}; +use crate::uniform_grid::{EdgeIndex, GridConstructionError, UniformCartesianCubeGrid3d}; use crate::{ new_map, new_parallel_map, 
    profile, Aabb3d, MapType, Parameters, SpatialDecomposition, SurfaceReconstruction,
@@ -72,6 +72,25 @@ pub(crate) struct ParametersSubdomainGrid<I: Index, R: Real> {
     subdomain_grid: UniformCartesianCubeGrid3d<I, R>,
     /// Chunk size for chunked parallel processing
     chunk_size: usize,
+    /// Whether to return the global particle neighborhood list instead of only using per-domain lists internally
+    global_neighborhood_list: bool,
+}
+
+impl<I: Index, R: Real> ParametersSubdomainGrid<I, R> {
+    pub(crate) fn global_marching_cubes_grid(
+        &self,
+    ) -> Result<UniformCartesianCubeGrid3d<I, R>, GridConstructionError<I, R>> {
+        let n_cells = self.global_marching_cubes_grid.cells_per_dim();
+        UniformCartesianCubeGrid3d::new(
+            self.global_marching_cubes_grid.aabb().min(),
+            &[
+                I::from(n_cells[0]).ok_or(GridConstructionError::IndexTypeTooSmallCellsPerDim)?,
+                I::from(n_cells[1]).ok_or(GridConstructionError::IndexTypeTooSmallCellsPerDim)?,
+                I::from(n_cells[2]).ok_or(GridConstructionError::IndexTypeTooSmallCellsPerDim)?,
+            ],
+            self.global_marching_cubes_grid.cell_size(),
+        )
+    }
 }

 /// Result of the subdomain decomposition procedure
@@ -238,6 +257,7 @@ pub(crate) fn initialize_parameters<'a, I: Index, R: Real>(
         global_marching_cubes_grid: global_mc_grid,
         subdomain_grid,
         chunk_size,
+        global_neighborhood_list: parameters.global_neighborhood_list,
     })
 }

@@ -493,15 +513,16 @@ pub(crate) fn decomposition<
     })
 }

-pub(crate) fn compute_global_density_vector<I: Index, R: Real>(
+pub(crate) fn compute_global_densities_and_neighbors<I: Index, R: Real>(
     parameters: &ParametersSubdomainGrid<I, R>,
     global_particles: &[Vector3<R>],
     subdomains: &Subdomains<I>,
-) -> Vec<R> {
+) -> (Vec<R>, Vec<Vec<usize>>) {
     profile!(parent, "compute_global_density_vector");
     info!("Starting computation of global density vector.");

     let global_particle_densities = Mutex::new(vec![R::zero(); global_particles.len()]);
+    let global_neighbors = Mutex::new(vec![Vec::new(); global_particles.len()]);

     #[derive(Default)]
     struct SubdomainWorkspace {
@@ -610,9 +631,35 @@ pub(crate) fn compute_global_density_vector<I: Index, R: Real>(
                 global_particle_densities[particle_idx] = density;
             });
         }
+
+        // Write particle neighbor lists into global storage
+        if parameters.global_neighborhood_list {
+            profile!("update global neighbor list");
+            // Lock global vector while this subdomain writes into it
+            let mut global_neighbors = global_neighbors.lock();
+            is_inside
+                .iter()
+                .copied()
+                .zip(
+                    subdomain_particle_indices
+                        .iter()
+                        .copied()
+                        .zip(neighborhood_lists.iter()),
+                )
+                // Update neighbor lists only for particles inside of the subdomain (ghost particles have incomplete neighborhoods)
+                .filter(|(is_inside, _)| *is_inside)
+                .for_each(|(_, (particle_idx, neighbors))| {
+                    global_neighbors[particle_idx] = neighbors
+                        .iter()
+                        .copied()
+                        .map(|local| subdomain_particle_indices[local])
+                        .collect();
+                });
+        }
     });

     let global_particle_densities = global_particle_densities.into_inner();
+    let global_neighbors = global_neighbors.into_inner();

     /*
     {
@@ -625,7 +672,7 @@ pub(crate) fn compute_global_density_vector<I: Index, R: Real>(
     }
     */

-    global_particle_densities
+    (global_particle_densities, global_neighbors)
 }

 pub(crate) struct SurfacePatch {
diff --git a/splashsurf_lib/src/density_map.rs b/splashsurf_lib/src/density_map.rs
index b136b07..9aebac8 100644
--- a/splashsurf_lib/src/density_map.rs
+++ b/splashsurf_lib/src/density_map.rs
@@ -13,7 +13,7 @@
 //! "flat point indices". These are computed from the background grid point coordinates `(i,j,k)`
 //! analogous to multidimensional array index flattening. That means for a grid with dimensions
 //! `[n_x, n_y, n_z]`, the flat point index is given by the expression `i*n_x + j*n_y + k*n_z`.
-//! For these point index operations, the [`UniformGrid`](crate::UniformGrid) is used.
+//! For these point index operations, the [`UniformGrid`] is used.
 //!
 //! Note that all density mapping functions always use the global background grid for flat point
 //! indices, even if the density map is only generated for a smaller subdomain.
diff --git a/splashsurf_lib/src/halfedge_mesh.rs b/splashsurf_lib/src/halfedge_mesh.rs
new file mode 100644
index 0000000..74bcde4
--- /dev/null
+++ b/splashsurf_lib/src/halfedge_mesh.rs
@@ -0,0 +1,586 @@
+//! Basic implementation of a half-edge based triangle mesh
+//!
+//! See [`HalfEdgeTriMesh`] for more information.
+
+use crate::mesh::{Mesh3d, TriMesh3d, TriMesh3dExt};
+use crate::{profile, Real, SetType};
+use nalgebra::Vector3;
+use rayon::prelude::*;
+use thiserror::Error as ThisError;
+
+impl<R: Real> TriMesh3dExt<R> for HalfEdgeTriMesh<R> {
+    fn tri_vertices(&self) -> &[Vector3<R>] {
+        &self.vertices
+    }
+}
+
+/// A half-edge in a [`HalfEdgeTriMesh`]
+#[derive(Copy, Clone, Debug, Default)]
+pub struct HalfEdge {
+    /// Unique global index of this half-edge in the mesh
+    pub idx: usize,
+    /// Vertex this half-edge points to
+    pub to: usize,
+    /// Enclosed face of this half-edge loop (or `None` if boundary)
+    pub face: Option<usize>,
+    /// The next half-edge along the half-edge loop (or `None` if boundary)
+    pub next: Option<usize>,
+    /// Index of the half-edge going into the opposite direction
+    pub opposite: usize,
+}
+
+impl HalfEdge {
+    /// Returns whether the given half-edge is a boundary edge
+    pub fn is_boundary(&self) -> bool {
+        self.face.is_none()
+    }
+}
+
+/// A half-edge based triangle mesh data structure
+///
+/// The main purpose of this data structure is to provide methods to perform consistent collapses of
+/// half-edges for decimation procedures.
+///
+/// As [`splashsurf_lib`](crate) is focused on closed meshes, handling of holes is not specifically tested.
+/// In particular, it is not directly possible to walk along a mesh boundary using the half-edges of
+/// this implementation.
+///
+/// A [`HalfEdgeTriMesh`] can be easily constructed from a [`TriMesh3d`] using a [`From`] implementation.
+///
+/// Note that affected vertex/face/half-edge indices become "invalid" after a half-edge collapse is performed.
+/// The corresponding data still exists (i.e. it can be retrieved from the mesh) but following these
+/// indices amounts to following outdated connectivity.
+/// Therefore, it should be checked whether an index was marked as removed after a collapse using the
+/// [`is_valid_vertex`](HalfEdgeTriMesh::is_valid_vertex)/[`is_valid_triangle`](HalfEdgeTriMesh::is_valid_triangle)/[`is_valid_half_edge`](HalfEdgeTriMesh::is_valid_half_edge)
+/// methods.
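+///
+/// # Example
+///
+/// A minimal sketch of the collapse API on the two-triangle strip from the unit test at the
+/// bottom of this file. Collapsing the interior edge is rejected here, because the remaining
+/// edges of both incident triangles are boundary edges:
+///
+/// ```
+/// use nalgebra::Vector3;
+/// use splashsurf_lib::mesh::TriMesh3d;
+/// use splashsurf_lib::halfedge_mesh::{HalfEdgeTriMesh, IllegalHalfEdgeCollapse};
+///
+/// let tri_mesh = TriMesh3d::<f64> {
+///     vertices: vec![
+///         Vector3::new(0.0, 1.0, 0.0),
+///         Vector3::new(1.0, 0.0, 0.0),
+///         Vector3::new(1.0, 2.0, 0.0),
+///         Vector3::new(2.0, 1.0, 0.0),
+///     ],
+///     triangles: vec![[0, 1, 2], [1, 3, 2]],
+/// };
+///
+/// let he_mesh = HalfEdgeTriMesh::from(tri_mesh);
+/// // The edge from vertex 1 to vertex 2 is shared by both triangles...
+/// let he = he_mesh.half_edge(1, 2).unwrap();
+/// // ...but both of its endpoints are boundary vertices, so collapsing it is illegal:
+/// assert!(he_mesh.is_boundary_vertex(1) && he_mesh.is_boundary_vertex(2));
+/// assert_eq!(
+///     he_mesh.is_collapse_ok(he),
+///     Err(IllegalHalfEdgeCollapse::BoundaryCollapse)
+/// );
+/// ```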
+#[derive(Clone, Debug, Default)]
+pub struct HalfEdgeTriMesh<R: Real> {
+    /// All vertices in the mesh
+    pub vertices: Vec<Vector3<R>>,
+    /// All triangles in the mesh
+    pub triangles: Vec<[usize; 3]>,
+    /// All half-edges in the mesh
+    pub half_edges: Vec<HalfEdge>,
+    /// Connectivity map of all vertices to their connected neighbors
+    vertex_half_edge_map: Vec<Vec<usize>>,
+    /// Set of all vertices marked for removal
+    removed_vertices: SetType<usize>,
+    /// Set of all triangles marked for removal
+    removed_triangles: SetType<usize>,
+    /// Set of all half-edges marked for removal
+    removed_half_edges: SetType<usize>,
+}
+
+/// Error indicating why a specific half-edge collapse is illegal
+#[derive(Copy, Clone, Debug, Eq, PartialEq, ThisError)]
+pub enum IllegalHalfEdgeCollapse {
+    /// Trying to collapse an edge with boundary vertices at both ends
+    #[error("trying to collapse an edge with boundary vertices at both ends")]
+    BoundaryCollapse,
+    /// Trying to collapse an edge with vertices that share incident vertices other than the vertices directly opposite to the edge
+    #[error("trying to collapse an edge with vertices that share incident vertices other than the vertices directly opposite to the edge")]
+    IntersectionOfOneRing,
+    /// Trying to collapse an edge without faces
+    #[error("trying to collapse an edge without faces")]
+    FacelessEdge,
+}
+
+impl<R: Real> HalfEdgeTriMesh<R> {
+    /// Converts this mesh into a simple triangle mesh and a vertex-vertex connectivity map
+    pub fn into_parts(mut self, keep_vertices: bool) -> (TriMesh3d<R>, Vec<Vec<usize>>) {
+        Self::compute_vertex_vertex_connectivity(&mut self.vertex_half_edge_map, &self.half_edges);
+        self.garbage_collection_for_trimesh(keep_vertices);
+        let mesh = TriMesh3d {
+            vertices: self.vertices,
+            triangles: self.triangles,
+        };
+        (mesh, self.vertex_half_edge_map)
+    }
+
+    /// Returns the valence of a vertex (size of its one-ring)
+    pub fn vertex_one_ring_len(&self, vertex: usize) -> usize {
+        self.vertex_half_edge_map[vertex].len()
+    }
+
+    /// Returns the index of the `i`-th vertex from the one-ring of the given vertex
+    pub fn vertex_one_ring_ith(&self, vertex: usize, i: usize) -> usize {
+        self.half_edges[self.vertex_half_edge_map[vertex][i]].to
+    }
+
+    /// Iterator over the one-ring vertex neighbors of the given vertex
+    pub fn vertex_one_ring<'a>(&'a self, vertex: usize) -> impl Iterator<Item = usize> + 'a {
+        self.vertex_half_edge_map[vertex]
+            .iter()
+            .copied()
+            .map(|he_i| self.half_edges[he_i].to)
+    }
+
+    /// Iterator over the outgoing half-edges of the given vertex
+    pub fn outgoing_half_edges<'a>(&'a self, vertex: usize) -> impl Iterator<Item = HalfEdge> + 'a {
+        self.vertex_half_edge_map[vertex]
+            .iter()
+            .copied()
+            .map(|he_i| self.half_edges[he_i].clone())
+    }
+
+    /// Iterator over all incident faces of the given vertex
+    pub fn incident_faces<'a>(&'a self, vertex: usize) -> impl Iterator<Item = usize> + 'a {
+        self.outgoing_half_edges(vertex).filter_map(|he| he.face)
+    }
+
+    /// Returns the half-edge between the "from" and "to" vertex if it exists in the mesh
+    pub fn half_edge(&self, from: usize, to: usize) -> Option<HalfEdge> {
+        let from_edges = self
+            .vertex_half_edge_map
+            .get(from)
+            .expect("vertex must be part of the mesh");
+        for &he_idx in from_edges {
+            let he = &self.half_edges[he_idx];
+            if he.to == to {
+                return Some(he.clone());
+            }
+        }
+
+        None
+    }
+
+    /// Returns whether the given half-edge or its opposite half-edge is a boundary edge
+    pub fn is_boundary_edge(&self, half_edge: HalfEdge) -> bool {
+        half_edge.is_boundary() || self.opposite(half_edge).is_boundary()
+    }
+
+    /// Returns whether the given vertex is a boundary vertex
+    pub fn is_boundary_vertex(&self, vert_idx: usize) -> bool {
+        let hes = self
+            .vertex_half_edge_map
+            .get(vert_idx)
+            .expect("vertex must be part of the mesh");
+        hes.iter()
+            .copied()
+            .any(|he_idx| self.half_edges[he_idx].is_boundary())
+    }
+
+    /// Returns whether the given triangle is valid (i.e. not marked as removed)
+    pub fn is_valid_triangle(&self, triangle_idx: usize) -> bool {
+        !self.removed_triangles.contains(&triangle_idx)
+    }
+
+    /// Returns whether the given vertex is valid (i.e. not marked as removed)
+    pub fn is_valid_vertex(&self, vertex_idx: usize) -> bool {
+        !self.removed_vertices.contains(&vertex_idx)
+    }
+
+    /// Returns whether the given half-edge is valid (i.e. not marked as removed)
+    pub fn is_valid_half_edge(&self, half_edge_idx: usize) -> bool {
+        !self.removed_half_edges.contains(&half_edge_idx)
+    }
+
+    /// Returns the next half-edge in the loop of the given half-edge, panics if there is none
+    pub fn next(&self, half_edge: HalfEdge) -> HalfEdge {
+        self.half_edges[half_edge
+            .next
+            .expect("half edge must have a next reference")]
+    }
+
+    /// Returns the next half-edge in the loop of the given half-edge if it exists
+    pub fn try_next(&self, half_edge: HalfEdge) -> Option<HalfEdge> {
+        half_edge.next.map(|n| self.half_edges[n])
+    }
+
+    /// Returns the opposite half-edge of the given half-edge
+    pub fn opposite(&self, half_edge: HalfEdge) -> HalfEdge {
+        self.half_edges[half_edge.opposite]
+    }
+
+    /// Returns a mutable reference to the opposite half-edge of the given half-edge
+    pub fn opposite_mut(&mut self, half_edge: usize) -> &mut HalfEdge {
+        let opp_idx = self.half_edges[half_edge].opposite;
+        &mut self.half_edges[opp_idx]
+    }
+
+    /// Checks if the collapse of the given half-edge is topologically legal
+    pub fn is_collapse_ok(&self, half_edge: HalfEdge) -> Result<(), IllegalHalfEdgeCollapse> {
+        // Based on the PMP library:
+        // https://github.com/pmp-library/pmp-library/blob/86099e4e274c310d23e8c46c4829f881242814d3/src/pmp/SurfaceMesh.cpp#L755
+
+        let v0v1 = half_edge;
+        let v1v0 = self.opposite(v0v1);
+
+        let v0 = v1v0.to; // From vertex
+        let v1 = v0v1.to; // To vertex
+
+        // Checks if the edges to the vertex opposite of the half-edge are boundary edges and returns the opposite vertex
+        let check_opposite_vertex =
+            |he: HalfEdge| -> Result<Option<usize>, IllegalHalfEdgeCollapse> {
+                if !he.is_boundary() {
+                    let h1 = self.next(he);
+                    let h2 = self.next(h1);
+
+                    if self.opposite(h1).is_boundary() && self.opposite(h2).is_boundary() {
+                        return Err(IllegalHalfEdgeCollapse::BoundaryCollapse);
+                    }
+
+                    // Return the opposite vertex
+                    Ok(Some(h1.to))
+                } else {
+                    Ok(None)
+                }
+            };
+
+        // Notation:
+        //  v_pos -> vertex opposite to the half-edge to collapse (v0v1)
+        //  v_neg -> vertex opposite to the opposite half-edge to collapse (v1v0)
+        let v_pos = check_opposite_vertex(v0v1)?;
+        let v_neg = check_opposite_vertex(v1v0)?;
+
+        if v_pos.is_none() || v_neg.is_none() {
+            return Err(IllegalHalfEdgeCollapse::FacelessEdge);
+        }
+
+        // Test intersection of the one-rings of v0 and v1
+        for &he in &self.vertex_half_edge_map[v0] {
+            let he = &self.half_edges[he];
+            let vv = he.to;
+            if vv != v1 && Some(vv) != v_pos && Some(vv) != v_neg {
+                if self.half_edge(vv, v1).is_some() {
+                    return Err(IllegalHalfEdgeCollapse::IntersectionOfOneRing);
+                }
+            }
+        }
+
+        Ok(())
+    }
+
+    /// Tries to collapse the given half-edge if the collapse is topologically legal
+    pub fn try_half_edge_collapse(
+        &mut self,
+        half_edge: HalfEdge,
+    ) -> Result<(), IllegalHalfEdgeCollapse> {
+        self.is_collapse_ok(half_edge)?;
+
+        self.half_edge_collapse(half_edge);
+        Ok(())
+    }
+
+    /// Performs a collapse of the given half-edge without checking whether the collapse is legal
+    pub fn half_edge_collapse(&mut self, half_edge: HalfEdge) {
+        let he = half_edge;
+        let he_o = self.opposite(he);
+
+        let v_from = he_o.to;
+        let v_to = he.to;
+
+        // TODO: Support collapse of boundary edges
+
+        let he_n = self
+            .try_next(he)
+            .expect("encountered boundary (missing opposite vertex)");
+        let he_nn = self.next(he_n);
+
+        let he_on = self
+            .try_next(he_o)
+            .expect("encountered boundary (missing opposite vertex)");
+        let he_onn = self.next(he_on);
+
+        // Vertices opposite to the edge to collapse
+        let v_pos = he_n.to;
+        let v_neg = he_on.to;
+
+        let conn_from = self.vertex_half_edge_map[v_from].clone();
+        let mut conn_to = self.vertex_half_edge_map[v_to].clone();
+
+        // Mark faces and vertex for removal
+        {
+            he.face.map(|f| self.removed_triangles.insert(f));
+            he_o.face.map(|f| self.removed_triangles.insert(f));
+            self.removed_vertices.insert(v_from);
+        }
+
+        // Collect half-edges to delete (inside collapsed triangles)
+        self.removed_half_edges.insert(he.idx);
+        self.removed_half_edges.insert(he_n.idx);
+        self.removed_half_edges.insert(he_nn.idx);
+        self.removed_half_edges.insert(he_o.idx);
+        self.removed_half_edges.insert(he_on.idx);
+        self.removed_half_edges.insert(he_onn.idx);
+
+        // Handle case of two opposite but coincident faces
+        if v_pos == v_neg {
+            // Faces were already marked for removal above
+            // Half-edges were already marked for removal above
+
+            // Mark other vertices of triangles for removal
+            self.removed_vertices.insert(v_to);
+            self.removed_vertices.insert(v_pos);
+            // Clear all connectivity of removed vertices
+            self.vertex_half_edge_map[v_from].clear();
+            self.vertex_half_edge_map[v_to].clear();
+            self.vertex_half_edge_map[v_pos].clear();
+
+            return;
+        }
+
+        // Update the faces referencing the removed vertex
+        for &he_idx in &conn_from {
+            self.half_edges[he_idx].face.map(|f| {
+                self.triangles[f].iter_mut().for_each(|i| {
+                    if *i == v_from {
+                        *i = v_to;
+                    }
+                })
+            });
+        }
+
+        // Update the half-edges around the collapsed triangles (they become opposites)
+        {
+            let he_no = self.opposite(he_n);
+            let he_nno = self.opposite(he_nn);
+
+            self.half_edges[he_no.idx].opposite = he_nno.idx;
+            self.half_edges[he_nno.idx].opposite = he_no.idx;
+
+            let he_ono = self.opposite(he_on);
+            let he_onno = self.opposite(he_onn);
+
+            self.half_edges[he_ono.idx].opposite = he_onno.idx;
+            self.half_edges[he_onno.idx].opposite = he_ono.idx;
+        }
+
+        // Remove collapsed half-edges from connectivity of target vertex
+        conn_to.retain(|he_i| *he_i != he_n.idx && *he_i != he_o.idx);
+        // Transfer half-edge connectivity from collapsed to target vertex
+        for &he_i in &conn_from {
+            if he_i != he.idx && he_i != he_on.idx {
+                conn_to.push(he_i);
+            }
+        }
+        // Update the targets of half-edges pointing to the collapsed vertex
+        for &he_i in &conn_to {
+            let opp = self.opposite_mut(he_i);
+            if opp.to == v_from {
+                opp.to = v_to;
+            }
+        }
+        self.vertex_half_edge_map[v_to] = conn_to;
+        // Clear all connectivity of the collapsed vertex
+        self.vertex_half_edge_map[v_from].clear();
+
+        // Remove collapsed half-edges from connectivity of vertices opposite to the collapsed edge
+        self.vertex_half_edge_map[v_pos].retain(|he_i| *he_i != he_nn.idx);
+        self.vertex_half_edge_map[v_neg].retain(|he_i| *he_i != he_onn.idx);
+    }
+
+    /// Computes the largest angle (in radians) by which the face normals of triangles affected by the given half-edge collapse would rotate; assumes that the given half-edge is valid
+    pub fn half_edge_collapse_max_normal_change(&self, half_edge: HalfEdge) -> R {
+        let he = half_edge;
+        let he_o = self.opposite(he);
+
+        let v_to = he.to;
+        let v_from = he_o.to;
+
+        let mut max_normal_change_angle = R::zero();
+        for face in self.incident_faces(v_from) {
+            let tri_old = self.triangles[face];
+            let tri_new = tri_old.map(|i| if i == v_from { v_to } else { i });
+
+            // Skip faces that will be collapsed
+            if tri_new.iter().copied().filter(|i| *i == v_to).count() > 1 {
+                continue;
+            }
+
+            let new_area = self.tri_area_ijk::<R>(&tri_new);
+            if new_area > R::default_epsilon() {
+                let old_normal = self.tri_normal_ijk::<R>(&tri_old);
+                let new_normal = self.tri_normal_ijk::<R>(&tri_new);
+
+                let alpha = old_normal.dot(&new_normal).acos();
+                max_normal_change_angle = max_normal_change_angle.max(alpha);
+            }
+        }
+
+        max_normal_change_angle
+    }
+
+    /// Computes the largest ratio (`new/old`) of the aspect ratios of triangles affected by the given half-edge collapse; assumes that the given half-edge is valid
+    pub fn half_edge_collapse_max_aspect_ratio_change(&self, half_edge: HalfEdge) -> R {
+        let he = half_edge;
+        let he_o = self.opposite(he);
+
+        let v_to = he.to;
+        let v_from = he_o.to;
+
+        let mut max_aspect_ratio_change = R::one();
+        for face in self.incident_faces(v_from) {
+            let tri_old = self.triangles[face];
+            let tri_new = tri_old.map(|i| if i == v_from { v_to } else { i });
+
+            // Skip faces that will be collapsed
+            if tri_new.iter().copied().filter(|i| *i == v_to).count() > 1 {
+                continue;
+            }
+
+            let old_aspect_ratio = self.tri_aspect_ratio::<R>(&tri_old);
+            let new_aspect_ratio = self.tri_aspect_ratio::<R>(&tri_new);
+
+            max_aspect_ratio_change =
+                max_aspect_ratio_change.max(new_aspect_ratio / old_aspect_ratio)
+        }
+
+        max_aspect_ratio_change
+    }
+
+    fn compute_vertex_vertex_connectivity(
+        vertex_half_edge_map: &mut [Vec<usize>],
+        half_edges: &[HalfEdge],
+    ) {
+        vertex_half_edge_map.par_iter_mut().for_each(|hes| {
+            for he in hes {
+                *he = half_edges[*he].to;
+            }
+        });
+    }
+
+    /// Cleans the mesh of deleted elements (vertices, faces) for conversion into a triangle mesh (does not clean up half-edges and their references)
+    fn garbage_collection_for_trimesh(&mut self, keep_vertices: bool) {
+        // Filter and update triangles
+        let filtered_triangles = self
+            .triangles
+            .par_iter()
+            .copied()
+            .enumerate()
+            .filter(|(i, _)| !self.removed_triangles.contains(i))
+            .map(|(_, tri)| tri)
+            .collect();
+        self.triangles = filtered_triangles;
+
+        if !keep_vertices {
+            let mut new_vertex_indices = vec![0; self.vertices.len()];
+            let mut filtered_vertices =
+                Vec::with_capacity(self.vertices.len() - self.removed_vertices.len());
+            let mut filtered_vertex_map =
+                Vec::with_capacity(self.vertices.len() - self.removed_vertices.len());
+
+            // Filter vertices and assign new indices
+            let mut index_counter = 0;
+            for (i, new_index) in new_vertex_indices.iter_mut().enumerate() {
+                if !self.removed_vertices.contains(&i) {
+                    *new_index = index_counter;
+                    index_counter += 1;
+                    filtered_vertices.push(self.vertices[i]);
+                    filtered_vertex_map.push(std::mem::take(&mut self.vertex_half_edge_map[i]));
+                }
+            }
+
+            // Update vertex maps
+            filtered_vertex_map.iter_mut().for_each(|m| {
+                for v in m {
+                    *v = new_vertex_indices[*v];
+                }
+            });
+
+            self.vertices = filtered_vertices;
+            self.vertex_half_edge_map = filtered_vertex_map;
+
+            // Update triangles
+            self.triangles.par_iter_mut().for_each(|tri| {
+                tri[0] = new_vertex_indices[tri[0]];
+                tri[1] = new_vertex_indices[tri[1]];
+                tri[2] = new_vertex_indices[tri[2]];
+            });
+
+            self.removed_vertices.clear();
+        }
+
+        self.removed_triangles.clear();
+    }
+}
+
+impl<R: Real> From<TriMesh3d<R>> for HalfEdgeTriMesh<R> {
+    fn from(mesh: TriMesh3d<R>) -> Self {
+        profile!("construct_half_edge_mesh");
+
+        let mut he_mesh = HalfEdgeTriMesh::default();
+        he_mesh.vertex_half_edge_map = vec![Vec::with_capacity(5); mesh.vertices().len()];
+        he_mesh.vertices = mesh.vertices;
+
+        for (tri_idx, tri) in mesh.triangles.iter().copied().enumerate() {
+            // Storage for inner half-edge indices
+            let mut tri_hes = [0, 0, 0];
+
+            // Loop over inner half-edges
+            for i in 0..3 {
+                let from = tri[i];
+                let to = tri[(i + 1) % 3];
+
+                // Check if half-edge exists already
+                if let Some(he) = he_mesh.half_edge(from, to) {
+                    // Store the current half-edge for later use
+                    tri_hes[i] = he.idx;
+                    // Update the face of the half-edge
+                    he_mesh.half_edges[he.idx].face = Some(tri_idx);
+                } else {
+                    let he_idx = he_mesh.half_edges.len();
+                    // Inner (counter-clockwise) edge
+                    let he_ccw = HalfEdge {
+                        idx: he_idx,
+                        to,
+                        face: Some(tri_idx),
+                        next: None,
+                        opposite: he_idx + 1,
+                    };
+                    // Outer (clockwise) edge
+                    let he_cw = HalfEdge {
+                        idx: he_idx + 1,
+                        to: from,
+                        face: None,
+                        next: None,
+                        opposite: he_idx,
+                    };
+                    tri_hes[i] = he_idx;
+
+                    // Store half-edges
+                    he_mesh.half_edges.push(he_ccw);
+                    he_mesh.half_edges.push(he_cw);
+                    // Update vertex connectivity
+                    he_mesh.vertex_half_edge_map[from].push(he_idx);
+                    he_mesh.vertex_half_edge_map[to].push(he_idx + 1);
+                }
+            }
+
+            // Update inner half-edge next pointers
+            for i in 0..3 {
+                let j = (i + 1) % 3;
+                he_mesh.half_edges[tri_hes[i]].next = Some(tri_hes[j]);
+            }
+        }
+
+        he_mesh.triangles = mesh.triangles;
+        he_mesh
+    }
+}
+
+#[test]
+fn test_half_edge_mesh() {
+    let tri_mesh = TriMesh3d::<f64> {
+        vertices: vec![
+            Vector3::new(0.0, 1.0, 0.0),
+            Vector3::new(1.0, 0.0, 0.0),
+            Vector3::new(1.0, 2.0, 0.0),
+            Vector3::new(2.0, 1.0, 0.0),
+        ],
+        triangles: vec![[0, 1, 2], [1, 3, 2]],
+    };
+
+    let he_mesh = crate::halfedge_mesh::HalfEdgeTriMesh::from(tri_mesh.clone());
+
+    let vert_map = tri_mesh.vertex_vertex_connectivity();
+    let (_new_tri_mesh, new_vert_map) = he_mesh.into_parts(true);
+
+    let mut a = vert_map;
+    let mut b = new_vert_map;
+
+    a.par_iter_mut().for_each(|l| l.sort_unstable());
+    b.par_iter_mut().for_each(|l| l.sort_unstable());
+
+    for (la, lb) in a.iter().zip(b.iter()) {
+        assert_eq!(la, lb);
+    }
+}
diff --git a/splashsurf_lib/src/io/json_format.rs b/splashsurf_lib/src/io/json_format.rs
index 93b9a5f..6493768 100644
--- a/splashsurf_lib/src/io/json_format.rs
+++ b/splashsurf_lib/src/io/json_format.rs
@@ -1,7 +1,7 @@
 //! Helper functions for the JSON file format

 use crate::utils::IteratorExt;
-use crate::Real;
+use crate::{Real, RealConvert};
 use anyhow::{anyhow, Context};
 use nalgebra::Vector3;
 use std::fs::{File, OpenOptions};
diff --git a/splashsurf_lib/src/io/obj_format.rs b/splashsurf_lib/src/io/obj_format.rs
index 8c611cd..98420f6 100644
--- a/splashsurf_lib/src/io/obj_format.rs
+++ b/splashsurf_lib/src/io/obj_format.rs
@@ -1,11 +1,15 @@
 //! Helper functions for the OBJ file format

-use crate::mesh::{AttributeData, CellConnectivity, Mesh3d, MeshWithData};
-use crate::Real;
+use crate::mesh::{
+    AttributeData, CellConnectivity, Mesh3d, MeshAttribute, MeshWithData, TriMesh3d,
+};
+use crate::{utils, Real};
 use anyhow::Context;
+use nalgebra::Vector3;
 use std::fs;
-use std::io::{BufWriter, Write};
+use std::io::{BufRead, BufReader, BufWriter, Write};
 use std::path::Path;
+use std::str::FromStr;

 // TODO: Support for other mesh data (interpolated fields)?
@@ -48,16 +52,141 @@ pub fn mesh_to_obj<R: Real, M: Mesh3d<R>, P: AsRef<Path>>(
     if normals.is_some() {
         for f in mesh_vertices.cells() {
             write!(writer, "f")?;
-            f.try_for_each_vertex(|v| write!(writer, " {}//{}", v + 1, v + 1))?;
+            f.vertices()
+                .iter()
+                .copied()
+                .try_for_each(|v| write!(writer, " {}//{}", v + 1, v + 1))?;
             write!(writer, "\n")?;
         }
     } else {
         for f in mesh_vertices.cells() {
             write!(writer, "f")?;
-            f.try_for_each_vertex(|v| write!(writer, " {}", v + 1))?;
+            f.vertices()
+                .iter()
+                .copied()
+                .try_for_each(|v| write!(writer, " {}", v + 1))?;
             write!(writer, "\n")?;
         }
     }

     Ok(())
 }
+
+pub fn surface_mesh_from_obj<R: Real, P: AsRef<Path>>(
+    obj_path: P,
+) -> Result<MeshWithData<R, TriMesh3d<R>>, anyhow::Error> {
+    let file = fs::File::open(obj_path).context("Failed to open file for reading")?;
+    let mut reader = BufReader::with_capacity(1000000, file);
+
+    let mut vertices = Vec::new();
+    let mut triangles = Vec::new();
+    let mut normals = Vec::new();
+
+    let buffer_to_vec3 = |buffer: &[&str]| -> Result<Vector3<R>, anyhow::Error> {
+        Ok(Vector3::new(
+            R::from_f64(f64::from_str(buffer[0])?).unwrap(),
+            R::from_f64(f64::from_str(buffer[1])?).unwrap(),
+            R::from_f64(f64::from_str(buffer[2])?).unwrap(),
+        ))
+    };
+
+    let mut outer_buffer: Vec<&'static str> = Vec::new();
+    let mut buffer_string = String::new();
+
+    loop {
+        let mut buffer = utils::recycle(outer_buffer);
+
+        let read = reader.read_line(&mut buffer_string)?;
+        if read == 0 {
+            break;
+        }
+
+        let line = buffer_string.trim();
+
+        if let Some(vert_string) = line.strip_prefix("v ") {
+            buffer.extend(vert_string.split(' '));
+            assert_eq!(buffer.len(), 3, "expected three coordinates per vertex");
+            vertices.push(buffer_to_vec3(&buffer)?);
+        } else if let Some(face_string) = line.strip_prefix("f ") {
+            // TODO: Support mixed tri/quad meshes?
+            buffer.extend(
+                face_string
+                    .split(' ')
+                    // Support "v1/vt1", "v1/vt1/vn1" and "v1//vn1" formats (ignore everything after '/')
+                    .map(|f| f.split_once('/').map(|(f, _)| f).unwrap_or(f)),
+            );
+            assert_eq!(
+                buffer.len(),
+                3,
+                "expected three indices per face (only triangles supported at the moment)"
+            );
+            let tri = [
+                usize::from_str(buffer[0])? - 1,
+                usize::from_str(buffer[1])? - 1,
+                usize::from_str(buffer[2])? - 1,
+            ];
+            triangles.push(tri);
+        } else if let Some(normal_string) = line.strip_prefix("vn ") {
+            buffer.extend(normal_string.split(' '));
+            assert_eq!(
+                buffer.len(),
+                3,
+                "expected three normal components per vertex"
+            );
+            normals.push(buffer_to_vec3(&buffer)?);
+        }
+
+        outer_buffer = utils::recycle(buffer);
+        buffer_string.clear();
+    }
+
+    let mut mesh = MeshWithData::new(TriMesh3d {
+        vertices,
+        triangles,
+    });
+
+    if !normals.is_empty() {
+        assert_eq!(
+            mesh.vertices().len(),
+            normals.len(),
+            "length of vertex and vertex normal array doesn't match"
+        );
+        mesh.point_attributes.push(MeshAttribute::new(
+            "normals",
+            AttributeData::Vector3Real(normals),
+        ));
+    }
+
+    Ok(mesh)
+}
+
+#[cfg(test)]
+pub mod test {
+    use super::*;
+
+    #[test]
+    fn test_obj_read_icosphere() -> Result<(), anyhow::Error> {
+        let mesh = surface_mesh_from_obj::<f64, _>("../data/icosphere.obj")?;
+
+        assert_eq!(mesh.vertices().len(), 42);
+        assert_eq!(mesh.cells().len(), 80);
+
+        Ok(())
+    }
+
+    #[test]
+    fn test_obj_read_icosphere_with_normals() -> Result<(), anyhow::Error> {
+        let mesh = surface_mesh_from_obj::<f64, _>("../data/icosphere.obj")?;
+
+        assert_eq!(mesh.vertices().len(), 42);
+        assert_eq!(mesh.cells().len(), 80);
+        let normals = mesh.point_attributes.iter().find(|a| a.name == "normals");
+        if let Some(MeshAttribute { data, ..
}) = normals { + if let AttributeData::Vector3Real(normals) = data { + assert_eq!(normals.len(), 42) + } + } + + Ok(()) + } +} diff --git a/splashsurf_lib/src/io/ply_format.rs b/splashsurf_lib/src/io/ply_format.rs index 591b1f2..fffe4ae 100644 --- a/splashsurf_lib/src/io/ply_format.rs +++ b/splashsurf_lib/src/io/ply_format.rs @@ -202,7 +202,7 @@ pub fn mesh_to_ply, P: AsRef>( .truncate(true) .open(filename) .context("Failed to open file handle for writing PLY file")?; - let mut writer = BufWriter::with_capacity(100000, file); + let mut writer = BufWriter::with_capacity(1000000, file); write!(&mut writer, "ply\n")?; write!(&mut writer, "format binary_little_endian 1.0\n")?; @@ -257,9 +257,9 @@ pub fn mesh_to_ply, P: AsRef>( } for c in mesh.cells() { - let num_verts = M::Cell::num_vertices().to_u8().expect("failed to convert cell vertex count to u8"); + let num_verts = c.num_vertices().to_u8().expect("failed to convert cell vertex count to u8"); writer.write_all(&num_verts.to_le_bytes())?; - c.try_for_each_vertex(|v| { + c.vertices().iter().copied().try_for_each(|v| { let idx = v.to_u32().expect("failed to convert vertex index to u32"); writer.write_all(&idx.to_le_bytes()) })?; @@ -273,8 +273,8 @@ pub mod test { use super::*; #[test] - fn test_ply_read_cube_with_normals() -> Result<(), anyhow::Error> { - let input_file = Path::new("../data/cube_normals.ply"); + fn test_ply_read_cube() -> Result<(), anyhow::Error> { + let input_file = Path::new("../data/cube.ply"); let mesh: MeshWithData = surface_mesh_from_ply(input_file).with_context(|| { format!( @@ -285,19 +285,13 @@ pub mod test { assert_eq!(mesh.mesh.vertices.len(), 24); assert_eq!(mesh.mesh.triangles.len(), 12); - let normals = mesh.point_attributes.iter().find(|a| a.name == "normals"); - if let Some(MeshAttribute { data, .. }) = normals { - if let AttributeData::Vector3Real(normals) = data { - assert_eq!(normals.len(), 24) - } - } Ok(()) } #[test] - fn test_ply_read_cube() -> Result<(), anyhow::Error> { - let input_file = Path::new("../data/cube.ply"); + fn test_ply_read_cube_with_normals() -> Result<(), anyhow::Error> { + let input_file = Path::new("../data/cube_normals.ply"); let mesh: MeshWithData = surface_mesh_from_ply(input_file).with_context(|| { format!( @@ -308,6 +302,12 @@ pub mod test { assert_eq!(mesh.mesh.vertices.len(), 24); assert_eq!(mesh.mesh.triangles.len(), 12); + let normals = mesh.point_attributes.iter().find(|a| a.name == "normals"); + if let Some(MeshAttribute { data, .. }) = normals { + if let AttributeData::Vector3Real(normals) = data { + assert_eq!(normals.len(), 24) + } + } Ok(()) } diff --git a/splashsurf_lib/src/io/vtk_format.rs b/splashsurf_lib/src/io/vtk_format.rs index fdd83b3..d3acfa8 100644 --- a/splashsurf_lib/src/io/vtk_format.rs +++ b/splashsurf_lib/src/io/vtk_format.rs @@ -1,8 +1,8 @@ //! 
Helper functions for the VTK file format -use crate::mesh::{AttributeData, MeshAttribute, MeshWithData, TriMesh3d}; +use crate::mesh::{AttributeData, IntoVtkDataSet, MeshAttribute, MeshWithData, TriMesh3d}; use crate::utils::IteratorExt; -use crate::Real; +use crate::{Real, RealConvert}; use anyhow::{anyhow, Context}; use nalgebra::Vector3; use std::borrow::Cow; @@ -183,7 +183,7 @@ pub fn surface_mesh_from_vtk>( /// Tries to write `data` that is convertible to a VTK `DataSet` into a big endian VTK file pub fn write_vtk>( - data: impl Into, + data: impl IntoVtkDataSet, filename: P, title: &str, ) -> Result<(), anyhow::Error> { @@ -192,7 +192,7 @@ pub fn write_vtk>( title: title.to_string(), file_path: None, byte_order: ByteOrder::BigEndian, - data: data.into(), + data: data.into_dataset(), }; let filename = filename.as_ref(); @@ -279,6 +279,10 @@ fn surface_mesh_from_unstructured_grid( } }; + // Sometimes VTK files from paraview start with an empty cell + let has_empty_cell = cell_verts.first().map(|c| *c == 0).unwrap_or(false); + let cell_verts = &cell_verts[cell_verts.len().min(has_empty_cell as usize)..]; + if cell_verts.len() % 4 != 0 { return Err(anyhow!("Length of cell vertex array is invalid. Expected 4 values per cell (3 for each triangle vertex index + 1 for vertex count). There are {} values for {} cells.", cell_verts.len(), num_cells)); } diff --git a/splashsurf_lib/src/kernel.rs b/splashsurf_lib/src/kernel.rs index 4dae19e..42c8965 100644 --- a/splashsurf_lib/src/kernel.rs +++ b/splashsurf_lib/src/kernel.rs @@ -1,6 +1,6 @@ //! SPH kernel function implementations -use crate::Real; +use crate::{Real, RealConvert}; use nalgebra::Vector3; use numeric_literals::replace_float_literals; diff --git a/splashsurf_lib/src/lib.rs b/splashsurf_lib/src/lib.rs index 1c53368..20ac58e 100644 --- a/splashsurf_lib/src/lib.rs +++ b/splashsurf_lib/src/lib.rs @@ -9,7 +9,7 @@ //! The following features are all non-default features to reduce the amount of additional dependencies. //! //! - **`vtk_extras`**: Enables helper functions and trait implementations to export meshes using [`vtkio`](https://github.com/elrnv/vtkio). -//! In particular it adds `From` impls for the [mesh](crate::mesh) types used by this crate to convert them to +//! In particular it adds `From` impls for the [mesh] types used by this crate to convert them to //! [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) and [`vtkio::model::DataSet`](https://docs.rs/vtkio/0.6.*/vtkio/model/enum.DataSet.html) //! types. If the feature is enabled, The crate exposes its `vtkio` dependency as `splashsurflib::vtkio`. //! 
 - **`io`**: Enables the [`io`] module, containing functions to load and store particle and mesh files
@@ -37,7 +37,7 @@ pub use vtkio;
 pub use crate::aabb::{Aabb2d, Aabb3d, AxisAlignedBoundingBox};
 pub use crate::density_map::DensityMap;
 pub use crate::octree::SubdivisionCriterion;
-pub use crate::traits::{Index, Real, ThreadSafe};
+pub use crate::traits::{Index, Real, RealConvert, ThreadSafe};
 pub use crate::uniform_grid::UniformGrid;

 use crate::density_map::DensityMapError;
@@ -57,6 +57,7 @@ mod aabb;
 pub(crate) mod dense_subdomains;
 pub mod density_map;
 pub mod generic_tree;
+pub mod halfedge_mesh;
 #[cfg(feature = "io")]
 #[cfg_attr(doc_cfg, doc(cfg(feature = "io")))]
 pub mod io;
@@ -65,6 +66,7 @@ pub mod marching_cubes;
 pub mod mesh;
 pub mod neighborhood_search;
 pub mod octree;
+pub mod postprocessing;
 pub(crate) mod reconstruction;
 mod reconstruction_octree;
 pub mod sph_interpolation;
@@ -88,8 +90,10 @@ pub(crate) mod workspace;

 pub(crate) type HashState = fxhash::FxBuildHasher;
 pub(crate) type MapType<K, V> = std::collections::HashMap<K, V, HashState>;
+pub(crate) type SetType<K> = std::collections::HashSet<K, HashState>;
 pub(crate) fn new_map<K, V>() -> MapType<K, V> {
-    MapType::with_hasher(HashState::default())
+    // TODO: Remove this function
+    Default::default()
 }

 /*
@@ -265,10 +269,14 @@ pub struct Parameters<R: Real> {
     /// Parameters for the spatial decomposition of the surface reconstruction
     /// If not provided, no spatial decomposition is performed and a global approach is used instead.
     pub spatial_decomposition: Option<SpatialDecomposition<R>>,
+    /// Whether to return the global particle neighborhood list from the reconstruction.
+    /// Depending on the settings of the reconstruction, neighborhood lists are only computed locally
+    /// in subdomains. Enabling this flag joins this data over all particles which can add a small overhead.
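+    ///
+    /// A sketch of how the joined lists can be consumed afterwards (assuming `reconstruction`
+    /// and `particle_positions` from a call to [`reconstruct_surface`] with this flag enabled):
+    ///
+    /// ```ignore
+    /// if let Some(neighbor_lists) = reconstruction.particle_neighbors() {
+    ///     // One list of global particle indices per input particle
+    ///     assert_eq!(neighbor_lists.len(), particle_positions.len());
+    /// }
+    /// ```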
+    pub global_neighborhood_list: bool,
 }

 impl<R: Real> Parameters<R> {
-    /// Tries to convert the parameters from one [Real] type to another [Real] type, returns None if conversion fails
+    /// Tries to convert the parameters from one [Real] type to another [Real] type, returns `None` if conversion fails
     pub fn try_convert<T: Real>(&self) -> Option<Parameters<T>> {
         Some(Parameters {
             particle_radius: self.particle_radius.try_convert()?,
@@ -279,6 +287,7 @@ impl<R: Real> Parameters<R> {
             particle_aabb: map_option!(&self.particle_aabb, aabb => aabb.try_convert()?),
             enable_multi_threading: self.enable_multi_threading,
             spatial_decomposition: map_option!(&self.spatial_decomposition, sd => sd.try_convert()?),
+            global_neighborhood_list: self.global_neighborhood_list,
         })
     }
 }
@@ -296,6 +305,8 @@ pub struct SurfaceReconstruction<I: Index, R: Real> {
     particle_densities: Option<Vec<R>>,
     /// If an AABB was specified to restrict the reconstruction, this stores per input particle whether they were inside
     particle_inside_aabb: Option<Vec<bool>>,
+    /// Per-particle neighbor lists
+    particle_neighbors: Option<Vec<Vec<usize>>>,
     /// Surface mesh that is the result of the surface reconstruction
     mesh: TriMesh3d<R>,
     /// Workspace with allocated memory for subsequent surface reconstructions
@@ -310,6 +321,7 @@ impl<I: Index, R: Real> Default for SurfaceReconstruction<I, R> {
             octree: None,
             density_map: None,
             particle_densities: None,
+            particle_neighbors: None,
             particle_inside_aabb: None,
             mesh: TriMesh3d::default(),
             workspace: ReconstructionWorkspace::default(),
@@ -338,6 +350,11 @@ impl<I: Index, R: Real> SurfaceReconstruction<I, R> {
         self.particle_densities.as_ref()
     }

+    /// Returns a reference to the global per-particle neighborhood lists if they were computed during the reconstruction (always `None` when using octree based domain decomposition)
+    pub fn particle_neighbors(&self) -> Option<&Vec<Vec<usize>>> {
+        self.particle_neighbors.as_ref()
+    }
+
     /// Returns a reference to the virtual background grid that was used as a basis for discretization of the density map for marching cubes, can be used to convert the density map to a hex mesh (using [`density_map::sparse_density_map_to_hex_mesh`])
     pub fn grid(&self) -> &UniformGrid<I, R> {
         &self.grid
diff --git a/splashsurf_lib/src/marching_cubes.rs b/splashsurf_lib/src/marching_cubes.rs
index 0c4c8d2..81d0007 100644
--- a/splashsurf_lib/src/marching_cubes.rs
+++ b/splashsurf_lib/src/marching_cubes.rs
@@ -1,4 +1,4 @@
-//! Triangulation of [`DensityMap`](crate::density_map::DensityMap)s using marching cubes
+//! Triangulation of [`DensityMap`]s using marching cubes

 use crate::marching_cubes::narrow_band_extraction::{
     construct_mc_input, construct_mc_input_with_stitching_data,
@@ -195,21 +195,39 @@ pub(crate) fn triangulate_density_map_to_surface_patch<I: Index, R: Real>(
     })
 }

-/// Checks the consistency of the mesh (currently only checks for holes) and returns a string with debug information in case of problems
+/// Checks the consistency of the mesh (currently checks for holes, non-manifold edges and vertices) and returns a string with debug information in case of problems
 pub fn check_mesh_consistency<I: Index, R: Real>(
     grid: &UniformGrid<I, R>,
     mesh: &TriMesh3d<R>,
+    check_closed: bool,
+    check_manifold: bool,
+    debug: bool,
 ) -> Result<(), String> {
     profile!("check_mesh_consistency");

-    let boundary_edges = mesh.find_boundary_edges();
-
-    if boundary_edges.is_empty() {
+    let edge_info = mesh.compute_edge_information();
+
+    let boundary_edges = edge_info
+        .iter()
+        .filter(|ei| ei.incident_faces == 1)
+        .map(|ei| (ei.edge, ei.face, ei.local_edge_index))
+        .collect::<Vec<_>>();
+    let non_manifold_edges = edge_info
+        .iter()
+        .filter(|ei| ei.incident_faces > 2)
+        .map(|ei| (ei.edge, ei.face, ei.local_edge_index))
+        .collect::<Vec<_>>();
+
+    let non_manifold_vertices = mesh.find_non_manifold_vertices();
+
+    if (!check_closed || boundary_edges.is_empty())
+        && (!check_manifold || (non_manifold_edges.is_empty() && non_manifold_vertices.is_empty()))
+    {
         return Ok(());
     }

-    let mut error_string = String::new();
-    error_string += &format!("Mesh is not closed. It has {} boundary edges (edges that are connected to only one triangle):", boundary_edges.len());
-    for (edge, tri_idx, _) in boundary_edges {
+    let add_edge_errors = |error_string: &mut String, edge: ([usize; 2], usize, usize)| {
+        let (edge, tri_idx, _) = edge;
+
         let v0 = mesh.vertices[edge[0]];
         let v1 = mesh.vertices[edge[1]];
         let center = (v0 + v1) / (R::one() + R::one());
@@ -221,13 +239,43 @@ pub fn check_mesh_consistency<I: Index, R: Real>(
             let cell_center = grid.point_coordinates(&point_index)
                 + &Vector3::repeat(grid.cell_size().times_f64(0.5));
-            error_string += &format!("\n\tTriangle {}, boundary edge {:?} is located in cell with {:?} with center coordinates {:?} and edge length {}.", tri_idx, edge, cell_index, cell_center, grid.cell_size());
+            *error_string += &format!("\n\tTriangle {}, boundary edge {:?} is located in cell with {:?} with center coordinates {:?} and edge length {}.", tri_idx, edge, cell_index, cell_center, grid.cell_size());
         } else {
-            error_string += &format!(
-                "\n\tCannot get cell index for boundary edge {:?} of triangle {}",
+            *error_string += &format!(
+                "\n\tCannot get cell index for edge {:?} of triangle {}",
                 edge, tri_idx
             );
         }
+    };
+
+    let mut error_string = String::new();
+
+    if check_closed && !boundary_edges.is_empty() {
+        error_string += &format!("Mesh is not closed. It has {} boundary edges (edges that are connected to only one triangle).", boundary_edges.len());
+        if debug {
+            for e in boundary_edges {
+                add_edge_errors(&mut error_string, e);
+            }
+        }
+        error_string += &format!("\n");
+    }
+
+    if check_manifold && !non_manifold_edges.is_empty() {
+        error_string += &format!("Mesh is not manifold. It has {} non-manifold edges (edges that are connected to more than two triangles).", non_manifold_edges.len());
+        if debug {
+            for e in non_manifold_edges {
+                add_edge_errors(&mut error_string, e);
+            }
+        }
+        error_string += &format!("\n");
+    }
+
+    if check_manifold && !non_manifold_vertices.is_empty() {
It has {} non-manifold vertices (vertices with more than one triangle fan).", non_manifold_vertices.len()); + if debug { + error_string += &format!("\n\t{:?}", non_manifold_vertices); + } + error_string += "\n"; } Err(error_string) @@ -240,7 +288,13 @@ fn check_mesh_with_cell_data( marching_cubes_data: &MarchingCubesInput<I>, mesh: &TriMesh3d<R>, ) -> Result<(), String> { - let boundary_edges = mesh.find_boundary_edges(); + let edge_info = mesh.compute_edge_information(); + + let boundary_edges = edge_info + .iter() + .filter(|ei| ei.incident_faces == 1) + .map(|ei| (ei.edge, ei.face, ei.local_edge_index)) + .collect::<Vec<_>>(); if boundary_edges.is_empty() { return Ok(()); @@ -263,7 +317,7 @@ fn check_mesh_with_cell_data( let cell_data = marching_cubes_data .cell_data .get(&grid.flatten_cell_index(&cell_index)) - .expect("Unabel to get cell data of cell"); + .expect("Unable to get cell data of cell"); error_string += &format!("\n\tTriangle {}, boundary edge {:?} is located in cell with {:?} with center coordinates {:?} and edge length {}. {:?}", tri_idx, edge, cell_index, cell_center, grid.cell_size(), cell_data); } else { diff --git a/splashsurf_lib/src/mesh.rs b/splashsurf_lib/src/mesh.rs index af27088..97e0207 100644 --- a/splashsurf_lib/src/mesh.rs +++ b/splashsurf_lib/src/mesh.rs @@ -1,32 +1,158 @@ //! Basic mesh types used by the library and implementation of VTK export //! -//! This modules provides three basic types of meshes embedded in three dimensional spaces used +//! This module provides four basic types of meshes embedded in three dimensional space used //! by the library: -//! - [`TriMesh3d`] -//! - [`HexMesh3d`] -//! - [`PointCloud3d`] +//! - [`TriMesh3d`]: triangle surface mesh in 3D space +//! - [`MixedTriQuadMesh3d`]: surface mesh in 3D space with triangle and/or quadrilateral cells +//! - [`PointCloud3d`]: points without connectivity in 3D space +//! - [`HexMesh3d`]: mesh with volumetric hexahedral cells //! -//! Furthermore, it provides the [`MeshWithData`] type that is used when additional attributes are -//! attached to the vertices (e.g. normals) or cells (e.g. some identifiers) of the mesh. +//! Furthermore, it provides the [`MeshWithData`] type that can be used to attach additional +//! attributes to vertices (e.g. normals) or cells (e.g. areas/aspect ratios) of the mesh. //! -//! If the `vtk_extras` feature is enabled, this module also provides features for conversion of these -//! meshes to [`vtkio`](https://docs.rs/vtkio/0.6.*/vtkio/index.html) data structures. For example: -//! - [`MeshWithData::to_unstructured_grid`] to convert a mesh together with all attached attributes -//! - [`vtk_helper::mesh_to_unstructured_grid`] to convert a basic mesh without additional data -//! - `From for UnstructuredGridPiece` implementations for the basic mesh types -//! - `Into` implementations for the basic mesh types - -use crate::{new_map, Aabb3d, MapType, Real}; +//! If the `vtk_extras` feature is enabled, this module also provides traits to convert these +//! meshes to [`vtkio`] data types for serialization to `VTK` files. For example: +//! - [`IntoVtkUnstructuredGridPiece`] to convert basic meshes and meshes with attached attributes to the +//! 
- [`IntoVtkDataSet`] for all meshes implementing [`IntoVtkUnstructuredGridPiece`] to directly save a mesh as a VTK file + +use crate::{new_map, profile, Aabb3d, MapType, Real, RealConvert}; use bytemuck_derive::{Pod, Zeroable}; use nalgebra::{Unit, Vector3}; use rayon::prelude::*; use std::cell::RefCell; +use std::collections::BTreeSet; use std::fmt::Debug; use thread_local::ThreadLocal; #[cfg(feature = "vtk_extras")] -use vtkio::model::{Attribute, DataSet, UnstructuredGridPiece}; +use vtkio::model::{Attribute, UnstructuredGridPiece}; + +#[cfg(feature = "vtk_extras")] +pub use crate::mesh::vtk_helper::{IntoVtkDataSet, IntoVtkUnstructuredGridPiece}; + +/// Computes the unsigned area of the given triangle +pub fn tri_area<R: Real, RComp: Real>( + a: &Vector3<R>, + b: &Vector3<R>, + c: &Vector3<R>, +) -> RComp { + let a = a.convert::<RComp>(); + let b = b.convert::<RComp>(); + let c = c.convert::<RComp>(); + ((b - a).cross(&(c - a))) + .norm() + .unscale(RComp::one() + RComp::one()) +} + +/// Computes the face normal of the given triangle +pub fn tri_normal<R: Real, RComp: Real>( + a: &Vector3<R>, + b: &Vector3<R>, + c: &Vector3<R>, +) -> Vector3<RComp> { + let a = a.convert::<RComp>(); + let b = b.convert::<RComp>(); + let c = c.convert::<RComp>(); + ((b - a).cross(&(c - a))).normalize() +} + +/// Computes the angle at vertex `b` of the given triangle +pub fn angle_between<R: Real, RComp: Real>( + a: &Vector3<R>, + b: &Vector3<R>, + c: &Vector3<R>, +) -> RComp { + let a = a.convert::<RComp>(); + let b = b.convert::<RComp>(); + let c = c.convert::<RComp>(); + ((a - b).dot(&(c - b)) / ((a - b).norm() * (c - b).norm())).acos() +} + +/// Computes the minimum and maximum angle in the given triangle +pub fn tri_min_max_angles<R: Real, RComp: Real>( + a: &Vector3<R>, + b: &Vector3<R>, + c: &Vector3<R>, +) -> (RComp, RComp) { + let a = a.convert::<RComp>(); + let b = b.convert::<RComp>(); + let c = c.convert::<RComp>(); + let alpha1: RComp = angle_between(&a, &b, &c); + let alpha2: RComp = angle_between(&b, &c, &a); + let alpha3 = RComp::pi() - alpha1 - alpha2; + + ( + alpha1.min(alpha2.min(alpha3)), + alpha1.max(alpha2.max(alpha3)), + ) +} + +/// Computes the aspect ratio of the given triangle +/// +/// The aspect ratio is computed as the inverse of the "stretch ratio" `S` given by +/// ```txt +/// S = sqrt(12) * (r_in / l_max) +/// ``` +/// where `r_in` is the radius of the in-circle and `l_max` is the longest edge of the triangle. +/// See e.g.: . 
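+/// +/// As a worked sanity check (an example added here, not from the original docs): for an equilateral triangle with edge length `l`, the in-circle radius is `r_in = l / (2 * sqrt(3))`, so `S = sqrt(12) * (r_in / l_max) = sqrt(12) / (2 * sqrt(3)) = 1` and the returned aspect ratio is exactly 1, while degenerate sliver triangles yield values approaching infinity.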
+pub fn tri_aspect_ratio<R: Real, RComp: Real>( + a: &Vector3<R>, + b: &Vector3<R>, + c: &Vector3<R>, +) -> RComp { + let two = RComp::from_i32(2).unwrap(); + let sqrt_twelve = RComp::from_i32(12).unwrap().sqrt(); + + let a = a.convert::<RComp>(); + let b = b.convert::<RComp>(); + let c = c.convert::<RComp>(); + + let l0 = (a - b).norm(); + let l1 = (b - c).norm(); + let l2 = (c - a).norm(); + let s = (l0 + l1 + l2) / two; + + let area: RComp = tri_area(&a, &b, &c); + let r_in = area / s; + let l_max = l0.max(l1.max(l2)); + + l_max / (sqrt_twelve * r_in) +} + +/// Utility functions for triangle meshes +pub trait TriMesh3dExt<R: Real> { + /// Returns the slice of all triangle vertices of the mesh + fn tri_vertices(&self) -> &[Vector3<R>]; -// TODO: Rename/restructure VTK helper implementations + /// Computes the area of the triangle with the given vertices + fn tri_area_ijk<RComp: Real>(&self, ijk: &[usize; 3]) -> RComp { + let v = self.tri_vertices(); + tri_area(&v[ijk[0]], &v[ijk[1]], &v[ijk[2]]) + } + + /// Computes the face normal of the triangle with the given vertices + fn tri_normal_ijk<RComp: Real>(&self, ijk: &[usize; 3]) -> Vector3<RComp> { + let v = self.tri_vertices(); + tri_normal(&v[ijk[0]], &v[ijk[1]], &v[ijk[2]]) + } + + /// Computes the minimum and maximum angle in the triangle with the given vertices + fn tri_min_max_angles_ijk<RComp: Real>(&self, ijk: &[usize; 3]) -> (RComp, RComp) { + let v = self.tri_vertices(); + tri_min_max_angles(&v[ijk[0]], &v[ijk[1]], &v[ijk[2]]) + } + + /// Computes the aspect ratio of the triangle with the given vertices + fn tri_aspect_ratio<RComp: Real>(&self, ijk: &[usize; 3]) -> RComp { + let v = self.tri_vertices(); + tri_aspect_ratio(&v[ijk[0]], &v[ijk[1]], &v[ijk[2]]) + } +} + +impl<R: Real> TriMesh3dExt<R> for TriMesh3d<R> { + fn tri_vertices(&self) -> &[Vector3<R>] { + &self.vertices + } +} /// A named attribute with data that can be attached to the vertices or cells of a mesh #[derive(Clone, Debug)] @@ -56,6 +182,50 @@ pub struct TriMesh3d { pub triangles: Vec<[usize; 3]>, } +/// Cell type for [`MixedTriQuadMesh3d`] +#[derive(Clone, Debug)] +pub enum TriangleOrQuadCell { + /// Vertex indices representing a triangle + Tri([usize; 3]), + /// Vertex indices representing a quadrilateral + Quad([usize; 4]), +} + +impl TriangleOrQuadCell { + /// Returns the actual number of vertices of this cell (3 if triangle, 4 if quad) + fn num_vertices(&self) -> usize { + match self { + TriangleOrQuadCell::Tri(_) => 3, + TriangleOrQuadCell::Quad(_) => 4, + } + } + + /// Returns a reference to the vertex indices of this cell + fn vertices(&self) -> &[usize] { + match self { + TriangleOrQuadCell::Tri(v) => v, + TriangleOrQuadCell::Quad(v) => v, + } + } + + /// Returns a mutable reference to the vertex indices of this cell + fn vertices_mut(&mut self) -> &mut [usize] { + match self { + TriangleOrQuadCell::Tri(v) => v, + TriangleOrQuadCell::Quad(v) => v, + } + } +} + +/// A surface mesh in 3D containing cells representing either triangles or quadrilaterals +#[derive(Clone, Debug, Default)] +pub struct MixedTriQuadMesh3d<R: Real> { + /// Coordinates of all vertices of the mesh + pub vertices: Vec<Vector3<R>>, + /// All cells of the mesh (each cell is either a triangle or a quad) + pub cells: Vec<TriangleOrQuadCell>, +} + /// A hexahedral (volumetric) mesh in 3D #[derive(Clone, Debug, Default)] pub struct HexMesh3d { @@ -86,72 +256,269 @@ impl PointCloud3d { /// /// Meshes consist of vertices and cells. Cells identify their associated vertices using indices /// into the mesh's slice of vertices. 
-pub trait Mesh3d { +pub trait Mesh3d +where + Self: Sized, +{ /// The cell connectivity type of the mesh - type Cell: CellConnectivity; + type Cell: CellConnectivity + Clone; /// Returns a slice of all vertices of the mesh fn vertices(&self) -> &[Vector3]; + /// Returns a mutable slice of all vertices of the mesh + fn vertices_mut(&mut self) -> &mut [Vector3]; /// Returns a slice of all cells of the mesh fn cells(&self) -> &[Self::Cell]; + + /// Constructs a mesh from the given vertices and connectivity (does not check inputs for validity) + fn from_vertices_and_connectivity( + vertices: Vec>, + connectivity: Vec, + ) -> Self; + + /// Returns a mapping of all mesh vertices to the set of their connected neighbor vertices + fn vertex_vertex_connectivity(&self) -> Vec> { + profile!("vertex_vertex_connectivity"); + + let mut connectivity_map: Vec> = + vec![Vec::with_capacity(4); self.vertices().len()]; + for cell in self.cells() { + for &i in cell.vertices() { + for &j in cell.vertices() { + if i != j && !connectivity_map[i].contains(&j) { + connectivity_map[i].push(j); + } + } + } + } + + connectivity_map + } + + /// Returns a mapping of all mesh vertices to the set of the cells they belong to + fn vertex_cell_connectivity(&self) -> Vec> { + profile!("vertex_cell_connectivity"); + let mut connectivity_map: Vec> = vec![Vec::new(); self.vertices().len()]; + for (cell_idx, cell) in self.cells().iter().enumerate() { + for &v_i in cell.vertices() { + if !connectivity_map[v_i].contains(&cell_idx) { + connectivity_map[v_i].push(cell_idx); + } + } + } + + connectivity_map + } + + /// Returns a new mesh containing only the specified cells and removes all unreferenced vertices + fn keep_cells(&self, cell_indices: &[usize], keep_vertices: bool) -> Self { + if keep_vertices { + keep_cells_impl(self, cell_indices, &[]) + } else { + let vertex_keep_table = vertex_keep_table(self, cell_indices); + keep_cells_impl(self, cell_indices, &vertex_keep_table) + } + } + + /// Removes all cells from the mesh that are completely outside of the given AABB and clamps the remaining cells to the boundary + fn par_clamp_with_aabb( + &self, + aabb: &Aabb3d, + clamp_vertices: bool, + keep_vertices: bool, + ) -> Self + where + Self::Cell: Sync, + { + // Find all triangles with at least one vertex inside of AABB + let vertices = self.vertices(); + let cells_to_keep = self + .cells() + .par_iter() + .enumerate() + .filter(|(_, cell)| { + cell.vertices() + .iter() + .copied() + .any(|v| aabb.contains_point(&vertices[v])) + }) + .map(|(i, _)| i) + .collect::>(); + // Remove all other cells from mesh + let mut new_mesh = self.keep_cells(&cells_to_keep, keep_vertices); + // Clamp remaining vertices to AABB + if clamp_vertices { + new_mesh.vertices_mut().par_iter_mut().for_each(|v| { + let min = aabb.min(); + let max = aabb.max(); + v.x = v.x.clamp(min.x, max.x); + v.y = v.y.clamp(min.y, max.y); + v.z = v.z.clamp(min.z, max.z); + }); + } + + new_mesh + } +} + +/// Returns the list of vertices that should remain in the given mesh after keeping only the given cells +fn vertex_keep_table>(mesh: &MeshT, cell_indices: &[usize]) -> Vec { + let vertices = mesh.vertices(); + let cells = mesh.cells(); + + // Each entry is true if this vertex should be kept, false otherwise + let vertex_keep_table = { + let mut table = vec![false; vertices.len()]; + for cell in cell_indices.iter().copied().map(|c_i| &cells[c_i]) { + for &vertex_index in cell.vertices() { + table[vertex_index] = true; + } + } + table + }; + + vertex_keep_table +} + +/// 
Returns a new mesh keeping only the given cells and vertices in the mesh +fn keep_cells_impl>( + mesh: &MeshT, + cell_indices: &[usize], + vertex_keep_table: &[bool], +) -> MeshT { + let vertices = mesh.vertices(); + let cells = mesh.cells(); + + if vertex_keep_table.is_empty() { + MeshT::from_vertices_and_connectivity( + mesh.vertices().to_vec(), + cell_indices.iter().map(|&i| &cells[i]).cloned().collect(), + ) + } else { + assert_eq!(mesh.vertices().len(), vertex_keep_table.len()); + + let old_to_new_label_map = { + let mut label_map = MapType::default(); + let mut next_label = 0; + for (i, keep) in vertex_keep_table.iter().enumerate() { + if *keep { + label_map.insert(i, next_label); + next_label += 1; + } + } + label_map + }; + + let relabeled_cells: Vec<_> = cell_indices + .iter() + .map(|&i| &cells[i]) + .cloned() + .map(|mut cell| { + for index in cell.vertices_mut() { + *index = *old_to_new_label_map + .get(index) + .expect("Index must be in map"); + } + cell + }) + .collect(); + + let relabeled_vertices: Vec<_> = vertex_keep_table + .iter() + .copied() + .enumerate() + .filter_map(|(i, should_keep)| if should_keep { Some(i) } else { None }) + .map(|index| vertices[index].clone()) + .collect(); + + MeshT::from_vertices_and_connectivity(relabeled_vertices, relabeled_cells) + } } /// Basic interface for mesh cells consisting of a collection of vertex indices pub trait CellConnectivity { - /// Returns the number of vertices per cell - fn num_vertices() -> usize; - /// Calls the given closure with each vertex index that is part of this cell, stopping at the first error and returning that error - fn try_for_each_vertex Result<(), E>>(&self, f: F) -> Result<(), E>; - /// Calls the given closure with each vertex index that is part of this cell - fn for_each_vertex(&self, mut f: F) { - self.try_for_each_vertex::<(), _>(move |i| { - f(i); - Ok(()) - }) - .unwrap(); + /// Returns the number of vertices per cell (may vary between cells) + fn num_vertices(&self) -> usize { + Self::expected_num_vertices() } + /// Returns the expected number of vertices per cell (helpful for connectivities with a constant or upper bound on the number of vertices to reserve storage) + fn expected_num_vertices() -> usize; + /// Returns a reference to the vertex indices connected by this cell + fn vertices(&self) -> &[usize]; + /// Returns a reference to the vertex indices connected by this cell + fn vertices_mut(&mut self) -> &mut [usize]; } -/// Cell type for the [`TriMesh3d`] +/// Cell type for [`TriMesh3d`] #[derive(Copy, Clone, Pod, Zeroable)] #[repr(transparent)] pub struct TriangleCell(pub [usize; 3]); -/// Cell type for the [`HexMesh3d`] +/// Cell type for [`HexMesh3d`] #[derive(Copy, Clone, Pod, Zeroable)] #[repr(transparent)] pub struct HexCell(pub [usize; 8]); -/// Cell type for the [`PointCloud3d`] +/// Cell type for [`PointCloud3d`] #[derive(Copy, Clone, Pod, Zeroable)] #[repr(transparent)] pub struct PointCell(pub usize); impl CellConnectivity for TriangleCell { - fn num_vertices() -> usize { + fn expected_num_vertices() -> usize { 3 } - fn try_for_each_vertex Result<(), E>>(&self, f: F) -> Result<(), E> { - self.0.iter().copied().try_for_each(f) + fn vertices(&self) -> &[usize] { + &self.0[..] + } + + fn vertices_mut(&mut self) -> &mut [usize] { + &mut self.0[..] 
} } impl CellConnectivity for HexCell { - fn num_vertices() -> usize { + fn expected_num_vertices() -> usize { 8 } - fn try_for_each_vertex Result<(), E>>(&self, f: F) -> Result<(), E> { - self.0.iter().copied().try_for_each(f) + fn vertices(&self) -> &[usize] { + &self.0[..] + } + + fn vertices_mut(&mut self) -> &mut [usize] { + &mut self.0[..] + } +} + +impl CellConnectivity for TriangleOrQuadCell { + fn expected_num_vertices() -> usize { + 4 + } + + fn num_vertices(&self) -> usize { + return self.num_vertices(); + } + + fn vertices(&self) -> &[usize] { + TriangleOrQuadCell::vertices(self) + } + + fn vertices_mut(&mut self) -> &mut [usize] { + TriangleOrQuadCell::vertices_mut(self) } } impl CellConnectivity for PointCell { - fn num_vertices() -> usize { + fn expected_num_vertices() -> usize { 1 } - fn try_for_each_vertex Result<(), E>>(&self, mut f: F) -> Result<(), E> { - f(self.0) + fn vertices(&self) -> &[usize] { + std::slice::from_ref(&self.0) + } + + fn vertices_mut(&mut self) -> &mut [usize] { + std::slice::from_mut(&mut self.0) } } @@ -162,8 +529,22 @@ impl Mesh3d for TriMesh3d { self.vertices.as_slice() } + fn vertices_mut(&mut self) -> &mut [Vector3] { + self.vertices.as_mut_slice() + } + fn cells(&self) -> &[TriangleCell] { - bytemuck::cast_slice::<[usize; 3], TriangleCell>(self.triangles.as_slice()) + self.triangle_cells() + } + + fn from_vertices_and_connectivity( + vertices: Vec>, + triangles: Vec, + ) -> Self { + Self { + vertices, + triangles: bytemuck::cast_vec::(triangles), + } } } @@ -174,9 +555,43 @@ impl Mesh3d for HexMesh3d { self.vertices.as_slice() } + fn vertices_mut(&mut self) -> &mut [Vector3] { + self.vertices.as_mut_slice() + } + fn cells(&self) -> &[HexCell] { bytemuck::cast_slice::<[usize; 8], HexCell>(self.cells.as_slice()) } + + fn from_vertices_and_connectivity(vertices: Vec>, cells: Vec) -> Self { + Self { + vertices, + cells: bytemuck::cast_vec::(cells), + } + } +} + +impl Mesh3d for MixedTriQuadMesh3d { + type Cell = TriangleOrQuadCell; + + fn vertices(&self) -> &[Vector3] { + self.vertices.as_slice() + } + + fn vertices_mut(&mut self) -> &mut [Vector3] { + self.vertices.as_mut_slice() + } + + fn cells(&self) -> &[TriangleOrQuadCell] { + &self.cells + } + + fn from_vertices_and_connectivity( + vertices: Vec>, + cells: Vec, + ) -> Self { + Self { vertices, cells } + } } impl Mesh3d for PointCloud3d { @@ -186,12 +601,144 @@ impl Mesh3d for PointCloud3d { self.points.as_slice() } + fn vertices_mut(&mut self) -> &mut [Vector3] { + self.points.as_mut_slice() + } + fn cells(&self) -> &[PointCell] { bytemuck::cast_slice::(self.indices.as_slice()) } + + fn from_vertices_and_connectivity(points: Vec>, cells: Vec) -> Self { + Self { + points, + indices: bytemuck::cast_vec::(cells), + } + } +} + +impl<'a, R: Real, MeshT: Mesh3d + Clone> Mesh3d for std::borrow::Cow<'a, MeshT> { + type Cell = MeshT::Cell; + + fn vertices(&self) -> &[Vector3] { + (*self.as_ref()).vertices() + } + + fn vertices_mut(&mut self) -> &mut [Vector3] { + (self.to_mut()).vertices_mut() + } + + fn cells(&self) -> &[MeshT::Cell] { + (*self.as_ref()).cells() + } + + fn from_vertices_and_connectivity(vertices: Vec>, cells: Vec) -> Self { + std::borrow::Cow::Owned(MeshT::from_vertices_and_connectivity(vertices, cells).to_owned()) + } +} + +impl TriangleCell { + /// Returns an iterator over all edges of this triangle + pub fn edges<'a>(&'a self) -> impl Iterator + 'a { + (0..3).map(|i| (self.0[i], self.0[(i + 1) % 3])) + } +} + +pub struct MeshEdgeInformation { + /// Map from sorted edge to 
`(edge_idx, edge_count)` + edge_counts: MapType<[usize; 2], (usize, usize)>, + /// For each edge_idx: (edge, face_idx, local_edge_idx) + edge_info: Vec<([usize; 2], usize, usize)>, +} + +/// Information about a single edge of a mesh and its incident faces +pub struct EdgeInformation { + /// The vertices of the edge + pub edge: [usize; 2], + /// The vertices of the edge in ascending order + pub edge_sorted: [usize; 2], + /// Total number of faces incident to the edge + pub incident_faces: usize, + /// One representative face that contains the edge with the given index + pub face: usize, + /// Local index of this edge in the representative face + pub local_edge_index: usize, +} + +impl MeshEdgeInformation { + /// Iterator over all edge information stored in this struct + pub fn iter<'a>(&'a self) -> impl Iterator<Item = EdgeInformation> + 'a { + self.edge_counts.iter().map(|(e, (edge_idx, count))| { + let info = &self.edge_info[*edge_idx]; + EdgeInformation { + edge: info.0, + edge_sorted: *e, + incident_faces: *count, + face: info.1, + local_edge_index: info.2, + } + }) + } + + /// Returns the number of boundary edges (i.e. edges with 1 incident face) + pub fn count_boundary_edges(&self) -> usize { + self.edge_counts + .values() + .filter(|(_, count)| *count == 1) + .count() + } + + /// Returns the number of non-manifold edges (i.e. edges with more than 2 incident faces) + pub fn count_non_manifold_edges(&self) -> usize { + self.edge_counts + .values() + .filter(|(_, count)| *count > 2) + .count() + } + + /// Iterator over all boundary edges + pub fn boundary_edges<'a>(&'a self) -> impl Iterator<Item = [usize; 2]> + 'a { + self.edge_counts + .values() + .filter(|(_, count)| *count == 1) + .map(move |(edge_idx, _)| self.edge_info[*edge_idx].0) + } + + /// Iterator over all non-manifold edges + pub fn non_manifold_edges<'a>(&'a self) -> impl Iterator<Item = [usize; 2]> + 'a { + self.edge_counts + .values() + .filter(|(_, count)| *count > 2) + .map(move |(edge_idx, _)| self.edge_info[*edge_idx].0) + } +} + +/// Summary of the boundary and non-manifold entities of a mesh +pub struct MeshManifoldInformation { + /// List of all edges with only one incident face + pub boundary_edges: Vec<[usize; 2]>, + /// List of all non-manifold edges (edges with more than two incident faces) + pub non_manifold_edges: Vec<[usize; 2]>, + /// List of all non-manifold vertices (vertices with more than one fan of faces) + pub non_manifold_vertices: Vec<usize>, +} + +impl MeshManifoldInformation { + /// Returns whether the associated mesh is closed (has no boundary edges) + pub fn is_closed(&self) -> bool { + self.boundary_edges.is_empty() + } + + /// Returns whether the associated mesh is a 2-manifold (no non-manifold edges and no non-manifold vertices) + pub fn is_manifold(&self) -> bool { + self.non_manifold_edges.is_empty() && self.non_manifold_vertices.is_empty() + } } impl TriMesh3d { + /// Returns a slice of all triangles of the mesh as `TriangleCell`s + pub fn triangle_cells(&self) -> &[TriangleCell] { + bytemuck::cast_slice::<[usize; 3], TriangleCell>(self.triangles.as_slice()) + } + /// Clears the vertex and triangle storage, preserves allocated memory pub fn clear(&mut self) { self.vertices.clear(); @@ -223,89 +770,6 @@ impl TriMesh3d { } } - /// Returns a new triangle mesh containing only the specified triangles and removes all unreferenced vertices - pub fn keep_tris(&self, cell_indices: &[usize]) -> Self { - // Each entry is true if this vertex should be kept, false otherwise - let vertex_keep_table = { - let mut table = vec![false; self.vertices.len()]; - for &cell_index in cell_indices { - let cell_connectivity = &self.triangles[cell_index]; - - for &vertex_index in cell_connectivity { - 
table[vertex_index] = true; - } - } - table - }; - - let old_to_new_label_map = { - let mut label_map = MapType::default(); - let mut next_label = 0; - for (i, keep) in vertex_keep_table.iter().enumerate() { - if *keep { - label_map.insert(i, next_label); - next_label += 1; - } - } - label_map - }; - - let relabeled_cells: Vec<_> = cell_indices - .iter() - .map(|&i| self.triangles[i].clone()) - .map(|mut cell| { - for index in &mut cell { - *index = *old_to_new_label_map - .get(index) - .expect("Index must be in map"); - } - cell - }) - .collect(); - - let relabeled_vertices: Vec<_> = vertex_keep_table - .iter() - .enumerate() - .filter_map(|(i, should_keep)| if *should_keep { Some(i) } else { None }) - .map(|index| self.vertices[index].clone()) - .collect(); - - Self { - vertices: relabeled_vertices, - triangles: relabeled_cells, - } - } - - /// Removes all triangles from the mesh that are completely outside of the given AABB and clamps the remaining triangles to the boundary - pub fn clamp_with_aabb(&mut self, aabb: &Aabb3d) { - // Find all triangles with at least one vertex inside of AABB - let triangles_to_keep = self - .triangles - .par_iter() - .enumerate() - .filter(|(_, tri)| tri.iter().any(|&v| aabb.contains_point(&self.vertices[v]))) - .map(|(i, _)| i) - .collect::>(); - // Remove all other triangles from mesh - let new_mesh = self.keep_tris(&triangles_to_keep); - // Replace current mesh - let TriMesh3d { - vertices, - triangles, - } = new_mesh; - self.vertices = vertices; - self.triangles = triangles; - - // Clamp remaining vertices to AABB - self.vertices.par_iter_mut().for_each(|v| { - let min = aabb.min(); - let max = aabb.max(); - v.x = v.x.clamp(min.x, max.x); - v.y = v.y.clamp(min.y, max.y); - v.z = v.z.clamp(min.z, max.z); - }) - } - /// Same as [`Self::vertex_normal_directions_inplace`] but assumes that the output is already zeroed fn vertex_normal_directions_inplace_assume_zeroed(&self, normal_directions: &mut [Vector3]) { assert_eq!(normal_directions.len(), self.vertices.len()); @@ -478,13 +942,8 @@ impl TriMesh3d { normals } - /// Returns all boundary edges of the mesh - /// - /// Returns edges which are only connected to exactly one triangle, along with the connected triangle - /// index and the local index of the edge within that triangle. - /// - /// Note that the output order is not necessarily deterministic due to the internal use of hashmaps. - pub fn find_boundary_edges(&self) -> Vec<([usize; 2], usize, usize)> { + /// Computes a helper struct with information about all edges in the mesh (i.e. number of incident triangles etc.) 
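+ ///
+ /// A usage sketch (illustrative only, based on the types defined in this module):
+ /// ```ignore
+ /// let edge_info = mesh.compute_edge_information();
+ /// println!("boundary edges: {}", edge_info.count_boundary_edges());
+ /// println!("non-manifold edges: {}", edge_info.count_non_manifold_edges());
+ /// for ei in edge_info.iter().filter(|ei| ei.incident_faces > 2) {
+ ///     println!("edge {:?} has {} incident faces", ei.edge, ei.incident_faces);
+ /// }
+ /// ```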
+ pub fn compute_edge_information(&self) -> MeshEdgeInformation { let mut sorted_edges = Vec::new(); let mut edge_info = Vec::new(); @@ -522,39 +981,236 @@ impl TriMesh3d { .or_insert((edge_idx, 1)); } - // Take only the faces which have a count of 1, which correspond to boundary faces - edge_counts - .into_iter() - .map(|(_edge, value)| value) - .filter(|&(_, count)| count == 1) - .map(move |(edge_idx, _)| edge_info[edge_idx].clone()) - .collect() - } -} - -#[test] -fn test_find_boundary() { - // TODO: Needs a test with a real mesh - let mesh = TriMesh3d:: { - vertices: vec![ - Vector3::new_random(), - Vector3::new_random(), - Vector3::new_random(), - ], - triangles: vec![[0, 1, 2]], - }; + MeshEdgeInformation { + edge_counts, + edge_info, + } + } + + /// Returns all non-manifold vertices of this mesh + /// + /// A non-manifold vertex is generated by pinching two surface sheets together at that vertex + /// such that the vertex is incident to more than one fan of triangles. + /// + /// Note: This function assumes that all edges in the mesh are manifold edges! If there are non- + /// manifold edges, it is possible to connect two triangle fans using a third fan which is not + /// detected by this function. + pub fn find_non_manifold_vertices(&self) -> Vec { + let mut non_manifold_verts = Vec::new(); + let mut tri_fan = Vec::new(); + + let sort_edge = |edge: (usize, usize)| -> (usize, usize) { + if edge.0 > edge.1 { + (edge.1, edge.0) + } else { + edge + } + }; + + let is_fan_triangle = + |vert: usize, next_tri: &TriangleCell, current_fan: &[TriangleCell]| -> bool { + let mut is_fan_tri = false; + + // Check each edge of the tri against all triangles in the fan + 'edge_loop: for edge in next_tri.edges().map(sort_edge) { + // Only edges connected to the current vertex are relevant + if edge.0 == vert || edge.1 == vert { + // Check against all triangles of the current fan + for fan_tri in current_fan { + for fan_edge in fan_tri.edges().map(sort_edge) { + if edge == fan_edge { + // Triangle is part of the current fan + is_fan_tri = true; + break 'edge_loop; + } + } + } + } + } + + is_fan_tri + }; + + let tris = self.triangle_cells(); + let vert_face_connectivity = self.vertex_cell_connectivity(); + for vert in 0..self.vertices.len() { + let mut remaining_faces = vert_face_connectivity[vert] + .iter() + .cloned() + .collect::>(); + + if remaining_faces.len() > 1 { + // Pick an arbitrary first face of the fan + tri_fan.push(tris[remaining_faces.pop_first().unwrap()]); + // Try to match all other faces + while !remaining_faces.is_empty() { + let mut found_next = None; + + // Check all remaining faces against the current fan + for &next_candidate in &remaining_faces { + if is_fan_triangle(vert, &tris[next_candidate], &tri_fan) { + found_next = Some(next_candidate); + break; + } + } + + if let Some(next) = found_next { + // New fan triangle found + tri_fan.push(tris[next]); + remaining_faces.remove(&next); + } else { + // None of the remaining faces are part of the fan + break; + } + } + + if !remaining_faces.is_empty() { + // At least one triangle is not part of the current fan + // -> Non-manifold vertex was found + non_manifold_verts.push(vert); + } + + tri_fan.clear(); + } + } + + non_manifold_verts + } + + /// Returns a struct with lists of all boundary edges, non-manifold edges and non-manifold vertices + /// + /// Note that the output order is not necessarily deterministic due to the internal use of hashmaps. 
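+ ///
+ /// Sketch of the intended use (illustrative only):
+ /// ```ignore
+ /// let info = mesh.compute_manifold_information();
+ /// if info.is_closed() && info.is_manifold() {
+ ///     // the mesh is a closed 2-manifold surface
+ /// }
+ /// ```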
+ pub fn compute_manifold_information(&self) -> MeshManifoldInformation { + let edges = self.compute_edge_information(); + let boundary_edges = edges.boundary_edges().collect(); + let non_manifold_edges = edges.non_manifold_edges().collect(); - let mut boundary = mesh.find_boundary_edges(); - boundary.sort_unstable(); + let non_manifold_vertices = self.find_non_manifold_vertices(); - assert_eq!( - boundary, - vec![ - ([0usize, 1usize], 0, 0), - ([1usize, 2usize], 0, 1), - ([2usize, 0usize], 0, 2), - ] - ); + MeshManifoldInformation { + boundary_edges, + non_manifold_edges, + non_manifold_vertices, + } + } +} + +#[cfg(test)] +mod tri_mesh_tests { + use super::*; + + fn mesh_one_tri() -> TriMesh3d { + TriMesh3d:: { + vertices: vec![ + Vector3::new_random(), + Vector3::new_random(), + Vector3::new_random(), + ], + triangles: vec![[0, 1, 2]], + } + } + + fn mesh_non_manifold_edge() -> TriMesh3d { + TriMesh3d:: { + vertices: vec![ + Vector3::new(0.0, 0.0, 0.0), + Vector3::new(1.0, 0.0, 0.0), + Vector3::new(0.0, 1.0, 0.0), + Vector3::new(1.0, 1.0, 0.0), + Vector3::new(0.0, 0.0, 1.0), + ], + triangles: vec![[0, 1, 2], [1, 3, 2], [1, 2, 4]], + } + } + + fn mesh_non_manifold_edge_double() -> TriMesh3d { + TriMesh3d:: { + vertices: vec![ + Vector3::new(0.0, 0.0, 0.0), + Vector3::new(1.0, 0.0, 0.0), + Vector3::new(0.0, 1.0, 0.0), + Vector3::new(1.0, 1.0, 0.0), + Vector3::new(0.0, 0.0, 1.0), + ], + triangles: vec![[0, 1, 2], [1, 3, 2], [1, 2, 4], [4, 2, 1]], + } + } + + fn mesh_non_manifold_vertex() -> TriMesh3d { + TriMesh3d:: { + vertices: vec![ + Vector3::new(1.0, 0.0, 0.0), + Vector3::new(0.0, 1.0, 0.0), + Vector3::new(1.0, 1.0, 0.0), + Vector3::new(2.0, 1.0, 1.0), + Vector3::new(1.0, 2.0, 1.0), + ], + triangles: vec![[0, 2, 1], [2, 3, 4]], + } + } + + #[test] + fn test_tri_mesh_edge_info() { + let mesh = mesh_non_manifold_edge(); + let edges = mesh.compute_edge_information(); + + for ei in edges.iter() { + if ei.edge_sorted == [1, 2] { + assert_eq!(ei.incident_faces, 3); + } else { + assert_eq!(ei.incident_faces, 1); + } + } + + assert_eq!(edges.count_boundary_edges(), 6); + assert_eq!(edges.count_non_manifold_edges(), 1); + } + + #[test] + fn test_tri_mesh_non_manifold_vertex_info() { + let mesh = mesh_non_manifold_vertex(); + let non_manifold_vertes = mesh.find_non_manifold_vertices(); + assert_eq!(non_manifold_vertes, [2]); + } + + #[test] + fn test_tri_mesh_manifold_info() { + { + let mesh = mesh_one_tri(); + let info = mesh.compute_manifold_information(); + assert!(!info.is_closed()); + assert!(info.is_manifold()); + assert_eq!(info.non_manifold_edges.len(), 0); + assert_eq!(info.non_manifold_vertices.len(), 0); + } + + { + let mesh = mesh_non_manifold_edge(); + let info = mesh.compute_manifold_information(); + assert!(!info.is_closed()); + assert!(!info.is_manifold()); + assert_eq!(info.non_manifold_edges.len(), 1); + assert_eq!(info.non_manifold_vertices.len(), 0); + } + + { + let mesh = mesh_non_manifold_edge_double(); + let info = mesh.compute_manifold_information(); + assert!(!info.is_closed()); + assert!(!info.is_manifold()); + assert_eq!(info.non_manifold_edges.len(), 1); + assert_eq!(info.non_manifold_vertices.len(), 0); + } + + { + let mesh = mesh_non_manifold_vertex(); + let info = mesh.compute_manifold_information(); + assert!(!info.is_closed()); + assert!(!info.is_manifold()); + assert_eq!(info.non_manifold_edges.len(), 0); + assert_eq!(info.non_manifold_vertices.len(), 1); + } + } } /// Wrapper type for meshes with attached point or cell data @@ -570,12 +1226,67 @@ pub struct 
MeshWithData<R: Real, MeshT: Mesh3d<R>> { impl<R: Real, MeshT: Mesh3d<R>> Mesh3d<R> for MeshWithData<R, MeshT> { type Cell = MeshT::Cell; + fn vertices(&self) -> &[Vector3<R>] { self.mesh.vertices() } + + fn vertices_mut(&mut self) -> &mut [Vector3<R>] { + self.mesh.vertices_mut() + } + fn cells(&self) -> &[MeshT::Cell] { self.mesh.cells() } + + fn from_vertices_and_connectivity( + vertices: Vec<Vector3<R>>, + connectivity: Vec<MeshT::Cell>, + ) -> Self { + MeshWithData::new(MeshT::from_vertices_and_connectivity( + vertices, + connectivity, + )) + } + + /// Returns a new mesh containing only the specified cells and removes all unreferenced vertices and attributes + fn keep_cells(&self, cell_indices: &[usize], keep_all_vertices: bool) -> Self { + // Filter internal mesh + let mut new_mesh = if keep_all_vertices { + let mut new_mesh = keep_cells_impl(self, cell_indices, &[]); + new_mesh.point_attributes = self.point_attributes.clone(); + new_mesh + } else { + let vertex_keep_table = vertex_keep_table(self, cell_indices); + let mut new_mesh = keep_cells_impl(self, cell_indices, &vertex_keep_table); + + let vertex_indices = vertex_keep_table + .iter() + .copied() + .enumerate() + .filter_map(|(i, should_keep)| if should_keep { Some(i) } else { None }) + .collect::<Vec<_>>(); + + // Filter the point attributes + new_mesh.point_attributes = self + .point_attributes + .iter() + .map(|attr| attr.keep_indices(&vertex_indices)) + .collect(); + + new_mesh + }; + + // Filter the cell attributes + let cell_attributes = self + .cell_attributes + .iter() + .map(|attr| attr.keep_indices(cell_indices)) + .collect(); + new_mesh.cell_attributes = cell_attributes; + + new_mesh + } } /// Returns a mesh data wrapper with a default mesh and without attached attributes @@ -595,6 +1306,29 @@ impl<R: Real, MeshT: Mesh3d<R>> MeshWithData<R, MeshT> { } } + /// Replaces the mesh but keeps the data + pub fn with_mesh<NewMeshT: Mesh3d<R>>(self, new_mesh: NewMeshT) -> MeshWithData<R, NewMeshT> { + if !self.point_attributes.is_empty() { + assert_eq!( + self.mesh.vertices().len(), + new_mesh.vertices().len(), + "number of vertices should match if there are point attributes" + ); + } + if !self.cell_attributes.is_empty() { + assert_eq!( + self.mesh.cells().len(), + new_mesh.cells().len(), + "number of cells should match if there are cell attributes" + ) + } + MeshWithData { + mesh: new_mesh, + point_attributes: self.point_attributes, + cell_attributes: self.cell_attributes, + } + } + /// Attaches an attribute to the points of the mesh, panics if the length of the data does not match the mesh's number of points pub fn with_point_data(mut self, point_attribute: impl Into<MeshAttribute<R>>) -> Self { let point_attribute = point_attribute.into(); @@ -621,7 +1355,7 @@ impl MeshAttribute { } } - /// Creates a new named mesh attribute with scalar values implementing the [`Real`](crate::Real) trait + /// Creates a new named mesh attribute with scalar values implementing the [`Real`] trait pub fn new_real_scalar<S: Into<String>>(name: S, data: impl Into<Vec<R>>) -> Self { Self { name: name.into(), @@ -629,7 +1363,7 @@ impl MeshAttribute { } } - /// Creates a new named mesh attribute with scalar values implementing the [`Real`](crate::Real) trait + /// Creates a new named mesh attribute with scalar values implementing the [`Real`] trait pub fn new_real_vector3<S: Into<String>>(name: S, data: impl Into<Vec<Vector3<R>>>) -> Self { Self { name: name.into(), @@ -637,7 +1371,7 @@ impl MeshAttribute { } } - /// Converts the mesh attribute to a [`vtkio::model::Attribute`](https://docs.rs/vtkio/0.6.*/vtkio/model/enum.Attribute.html) + /// Converts the mesh attribute to a [`vtkio::model::Attribute`] #[cfg(feature = "vtk_extras")] #[cfg_attr(doc_cfg, doc(cfg(feature = 
"vtk_extras")))] fn to_vtk_attribute(&self) -> Attribute { @@ -652,6 +1386,26 @@ impl MeshAttribute { .with_data(vec3r_vec.iter().flatten().copied().collect::>()), } } + + /// Returns a new attribute keeping only the entries with the given index + fn keep_indices(&self, indices: &[usize]) -> Self { + let data = match &self.data { + AttributeData::ScalarU64(d) => { + AttributeData::ScalarU64(indices.iter().copied().map(|i| d[i].clone()).collect()) + } + AttributeData::ScalarReal(d) => { + AttributeData::ScalarReal(indices.iter().copied().map(|i| d[i].clone()).collect()) + } + AttributeData::Vector3Real(d) => { + AttributeData::Vector3Real(indices.iter().copied().map(|i| d[i].clone()).collect()) + } + }; + + Self { + name: self.name.clone(), + data, + } + } } impl AttributeData { @@ -676,12 +1430,12 @@ impl MeshWithData where R: Real, MeshT: Mesh3d, - for<'a> &'a MeshT: Into, + for<'a> &'a MeshT: IntoVtkUnstructuredGridPiece, { - /// Creates a [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) representing this mesh including its attached [`MeshAttribute`]s + /// Creates a [`vtkio::model::UnstructuredGridPiece`] representing this mesh including its attached [`MeshAttribute`]s #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] - pub fn to_unstructured_grid(&self) -> UnstructuredGridPiece { - let mut grid_piece: UnstructuredGridPiece = (&self.mesh).into(); + fn unstructured_grid(&self) -> UnstructuredGridPiece { + let mut grid_piece: UnstructuredGridPiece = (&self.mesh).into_unstructured_grid(); for point_attribute in &self.point_attributes { grid_piece .data @@ -695,21 +1449,74 @@ where } } -/// Creates a [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) representing this mesh with all its attributes and wraps it into a [`vtkio::model::DataSet`](https://docs.rs/vtkio/0.6.*/vtkio/model/enum.DataSet.html) -#[cfg(feature = "vtk_extras")] -#[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] -impl Into for &MeshWithData -where - R: Real, - MeshT: Mesh3d, - for<'a> &'a MeshT: Into, -{ - fn into(self) -> DataSet { - DataSet::inline(self.to_unstructured_grid()) - } +macro_rules! 
impl_into_vtk { + ($name:tt) => { + #[cfg(feature = "vtk_extras")] + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + impl IntoVtkUnstructuredGridPiece for $name { + fn into_unstructured_grid(self) -> UnstructuredGridPiece { + vtk_helper::mesh_to_unstructured_grid(&self) + } + } + + #[cfg(feature = "vtk_extras")] + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + impl IntoVtkUnstructuredGridPiece for &$name { + fn into_unstructured_grid(self) -> UnstructuredGridPiece { + vtk_helper::mesh_to_unstructured_grid(self) + } + } + + #[cfg(feature = "vtk_extras")] + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + impl<'a, R: Real> IntoVtkUnstructuredGridPiece for std::borrow::Cow<'a, $name> { + fn into_unstructured_grid(self) -> UnstructuredGridPiece { + vtk_helper::mesh_to_unstructured_grid(&self) + } + } + + #[cfg(feature = "vtk_extras")] + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + impl<'a, R: Real> IntoVtkUnstructuredGridPiece for &std::borrow::Cow<'a, $name> { + fn into_unstructured_grid(self) -> UnstructuredGridPiece { + vtk_helper::mesh_to_unstructured_grid(self) + } + } + + #[cfg(feature = "vtk_extras")] + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + impl IntoVtkUnstructuredGridPiece for &MeshWithData> { + fn into_unstructured_grid(self) -> UnstructuredGridPiece { + self.unstructured_grid() + } + } + + #[cfg(feature = "vtk_extras")] + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + impl IntoVtkUnstructuredGridPiece for MeshWithData> { + fn into_unstructured_grid(self) -> UnstructuredGridPiece { + self.unstructured_grid() + } + } + + #[cfg(feature = "vtk_extras")] + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + impl<'a, R: Real> IntoVtkUnstructuredGridPiece + for &MeshWithData>> + { + fn into_unstructured_grid(self) -> UnstructuredGridPiece { + self.unstructured_grid() + } + } + }; } -/// Trait implementations to convert meshes into types supported by [`vtkio`](https://github.com/elrnv/vtkio) +impl_into_vtk!(TriMesh3d); +impl_into_vtk!(MixedTriQuadMesh3d); +impl_into_vtk!(HexMesh3d); +impl_into_vtk!(PointCloud3d); + +/// Trait implementations to convert meshes into types supported by [`vtkio`] #[cfg(feature = "vtk_extras")] #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] pub mod vtk_helper { @@ -719,39 +1526,78 @@ pub mod vtk_helper { use vtkio::IOBuffer; use super::{ - CellConnectivity, HexCell, HexMesh3d, Mesh3d, PointCell, PointCloud3d, Real, TriMesh3d, - TriangleCell, + CellConnectivity, HexCell, Mesh3d, PointCell, Real, TriangleCell, TriangleOrQuadCell, }; - /// Trait that can be implemented by mesh cells to return the corresponding [`vtkio::model::CellType`](https://docs.rs/vtkio/0.6.*/vtkio/model/enum.CellType.html) + /// Trait that can be implemented by mesh cells to return the corresponding [`vtkio::model::CellType`] #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] pub trait HasVtkCellType { - /// Returns the corresponding [`vtkio::model::CellType`](https://docs.rs/vtkio/0.6.*/vtkio/model/enum.CellType.html) of the cell - fn vtk_cell_type() -> CellType; + /// Returns the corresponding [`vtkio::model::CellType`] of the cell + fn vtk_cell_type(&self) -> CellType; } #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] impl HasVtkCellType for TriangleCell { - fn vtk_cell_type() -> CellType { + fn vtk_cell_type(&self) -> CellType { CellType::Triangle } } + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + impl HasVtkCellType for TriangleOrQuadCell { + fn vtk_cell_type(&self) -> CellType { + 
match self { + TriangleOrQuadCell::Tri(_) => CellType::Triangle, + TriangleOrQuadCell::Quad(_) => CellType::Quad, + } + } + } + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] impl HasVtkCellType for HexCell { - fn vtk_cell_type() -> CellType { + fn vtk_cell_type(&self) -> CellType { CellType::Hexahedron } } #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] impl HasVtkCellType for PointCell { - fn vtk_cell_type() -> CellType { + fn vtk_cell_type(&self) -> CellType { CellType::Vertex } } - /// Converts any supported mesh to a [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) + /// Conversion of meshes into a [`vtkio::model::UnstructuredGridPiece`] + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + pub trait IntoVtkUnstructuredGridPiece { + fn into_unstructured_grid(self) -> UnstructuredGridPiece; + } + + /// Direct conversion of meshes into a full [`vtkio::model::DataSet`] + #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] + pub trait IntoVtkDataSet { + fn into_dataset(self) -> DataSet; + } + + impl IntoVtkUnstructuredGridPiece for UnstructuredGridPiece { + fn into_unstructured_grid(self) -> UnstructuredGridPiece { + self + } + } + + impl IntoVtkDataSet for T { + fn into_dataset(self) -> DataSet { + DataSet::inline(self.into_unstructured_grid()) + } + } + + impl IntoVtkDataSet for DataSet { + fn into_dataset(self) -> DataSet { + self + } + } + + /// Converts any supported mesh to a [`vtkio::model::UnstructuredGridPiece`] #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] pub fn mesh_to_unstructured_grid<'a, R, MeshT>(mesh: &'a MeshT) -> UnstructuredGridPiece where @@ -765,78 +1611,25 @@ pub mod vtk_helper { points }; - let vertices_per_cell = MeshT::Cell::num_vertices(); let vertices = { - let mut vertices = Vec::with_capacity(mesh.cells().len() * (vertices_per_cell + 1)); + let mut vertices = + Vec::with_capacity(mesh.cells().len() * (MeshT::Cell::expected_num_vertices() + 1)); for cell in mesh.cells().iter() { - vertices.push(vertices_per_cell as u32); - cell.for_each_vertex(|v| vertices.push(v as u32)); + vertices.push(cell.num_vertices() as u32); + cell.vertices() + .iter() + .copied() + .for_each(|v| vertices.push(v as u32)); } vertices }; - let cell_types = vec![::vtk_cell_type(); mesh.cells().len()]; + let mut cell_types = Vec::with_capacity(mesh.cells().len()); + cell_types.extend(mesh.cells().iter().map(|c| c.vtk_cell_type())); new_unstructured_grid_piece(points, vertices, cell_types) } - /// Creates a [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) representing this mesh - #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] - impl From<&TriMesh3d> for UnstructuredGridPiece - where - R: Real, - { - fn from(mesh: &TriMesh3d) -> Self { - mesh_to_unstructured_grid(mesh) - } - } - - /// Creates a [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) representing this mesh - #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] - impl<'a, R> From<&'a HexMesh3d> for UnstructuredGridPiece - where - R: Real, - { - fn from(mesh: &'a HexMesh3d) -> Self { - mesh_to_unstructured_grid(mesh) - } - } - - /// Creates a [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) representing this point cloud - #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] - impl<'a, R> From<&'a PointCloud3d> for 
UnstructuredGridPiece - where - R: Real, - { - fn from(mesh: &'a PointCloud3d) -> Self { - mesh_to_unstructured_grid(mesh) - } - } - - /// Creates a [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) representing this mesh and wraps it into a [`vtkio::model::DataSet`](https://docs.rs/vtkio/0.6.*/vtkio/model/enum.DataSet.html) - #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] - impl Into for &TriMesh3d { - fn into(self) -> DataSet { - DataSet::inline(UnstructuredGridPiece::from(self)) - } - } - - /// Creates a [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) representing this mesh and wraps it into a [`vtkio::model::DataSet`](https://docs.rs/vtkio/0.6.*/vtkio/model/enum.DataSet.html) - #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] - impl Into for &HexMesh3d { - fn into(self) -> DataSet { - DataSet::inline(UnstructuredGridPiece::from(self)) - } - } - - /// Creates a [`vtkio::model::UnstructuredGridPiece`](https://docs.rs/vtkio/0.6.*/vtkio/model/struct.UnstructuredGridPiece.html) representing this point cloud and wraps it into a [`vtkio::model::DataSet`](https://docs.rs/vtkio/0.6.*/vtkio/model/enum.DataSet.html) - #[cfg_attr(doc_cfg, doc(cfg(feature = "vtk_extras")))] - impl Into for &PointCloud3d { - fn into(self) -> DataSet { - DataSet::inline(UnstructuredGridPiece::from(self)) - } - } - fn new_unstructured_grid_piece>( points: B, vertices: Vec, diff --git a/splashsurf_lib/src/neighborhood_search.rs b/splashsurf_lib/src/neighborhood_search.rs index f26a717..c064b56 100644 --- a/splashsurf_lib/src/neighborhood_search.rs +++ b/splashsurf_lib/src/neighborhood_search.rs @@ -252,6 +252,12 @@ impl FlatNeighborhoodList { self.neighbor_ptr.len() - 1 } + pub fn iter<'a>(&'a self) -> impl Iterator + 'a { + (0..self.neighbor_ptr.len()) + .into_iter() + .flat_map(|i| self.get_neighbors(i)) + } + /// Returns a slice containing the neighborhood list of the given particle pub fn get_neighbors(&self, particle_i: usize) -> Option<&[usize]> { let range = self diff --git a/splashsurf_lib/src/octree.rs b/splashsurf_lib/src/octree.rs index 1b81748..b8679cf 100644 --- a/splashsurf_lib/src/octree.rs +++ b/splashsurf_lib/src/octree.rs @@ -397,7 +397,7 @@ impl OctreeNode { &self.aabb } - /// Constructs a [`UniformGrid`](crate::UniformGrid) that represents the domain of this octree node + /// Constructs a [`UniformGrid`] that represents the domain of this octree node pub fn grid( &self, min: &Vector3, diff --git a/splashsurf_lib/src/postprocessing.rs b/splashsurf_lib/src/postprocessing.rs new file mode 100644 index 0000000..2665d75 --- /dev/null +++ b/splashsurf_lib/src/postprocessing.rs @@ -0,0 +1,885 @@ +//! Functions for post-processing of surface meshes (decimation, smoothing, etc.) + +use crate::halfedge_mesh::{HalfEdgeTriMesh, IllegalHalfEdgeCollapse}; +use crate::mesh::{Mesh3d, MixedTriQuadMesh3d, TriMesh3d, TriMesh3dExt, TriangleOrQuadCell}; +use crate::topology::{Axis, DirectedAxis, Direction}; +use crate::uniform_grid::UniformCartesianCubeGrid3d; +use crate::{profile, Index, MapType, Real, SetType}; +use log::{info, warn}; +use nalgebra::Vector3; +use rayon::prelude::*; + +/// Laplacian Smoothing with feature weights +/// +/// Move each vertex towards the mean position of its neighbors. +/// Factor beta in \[0;1] proportional to amount of smoothing (for beta=1 each vertex is placed at the mean position). 
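+/// Restated as an update rule (matching the implementation below): `v_i <- (1 - beta * w_i) * v_i + beta * w_i * mean_{j in N(i)}(v_j)`, where `w_i` is the per-vertex feature weight described below.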
+/// Additionally, feature weights can be specified to apply a varying amount of smoothing over the mesh. +pub fn par_laplacian_smoothing_inplace<R: Real>( + mesh: &mut TriMesh3d<R>, + vertex_connectivity: &[Vec<usize>], + iterations: usize, + beta: R, + weights: &[R], +) { + profile!("laplacian_smoothing"); + + let mut vertex_buffer = mesh.vertices.clone(); + + for _ in 0..iterations { + profile!("laplacian_smoothing iter"); + + std::mem::swap(&mut vertex_buffer, &mut mesh.vertices); + + mesh.vertices + .par_iter_mut() + .enumerate() + .for_each(|(i, vertex_i)| { + let beta_eff = beta * weights[i]; + + // Compute mean position of neighboring vertices + let mut vertex_sum = Vector3::zeros(); + for j in vertex_connectivity[i].iter() { + vertex_sum += vertex_buffer[*j]; + } + if !vertex_connectivity[i].is_empty() { + let n = R::from_usize(vertex_connectivity[i].len()).unwrap(); + vertex_sum /= n; + } + + *vertex_i = vertex_i.scale(R::one() - beta_eff) + vertex_sum.scale(beta_eff); + }); + } +} + +/// Laplacian smoothing of a normal field +pub fn par_laplacian_smoothing_normals_inplace<R: Real>( + normals: &mut Vec<Vector3<R>>, + vertex_connectivity: &[Vec<usize>], + iterations: usize, +) { + profile!("par_laplacian_smoothing_normals_inplace"); + + let mut normal_buffer = normals.clone(); + + for _ in 0..iterations { + profile!("smoothing iteration"); + + std::mem::swap(&mut normal_buffer, normals); + + normals + .par_iter_mut() + .enumerate() + .for_each(|(i, normal_i)| { + *normal_i = Vector3::zeros(); + for j in vertex_connectivity[i].iter().copied() { + let normal_j = normal_buffer[j]; + *normal_i += normal_j; + } + normal_i.normalize_mut(); + }); + } +} + +/// Mesh simplification designed for marching cubes surface meshes inspired by the "Compact Contouring"/"Mesh displacement" approach by Doug Moore and Joe Warren +/// +/// See Moore and Warren: ["Mesh Displacement: An Improved Contouring Method for Trivariate Data"](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.5214&rep=rep1&type=pdf) (1991) +/// or Moore and Warren: "Compact Isocontours from Sampled Data" in "Graphics Gems III" (1992). 
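+///
+/// Call sketch (illustrative only; `mesh` and `grid` are assumed to come from a prior
+/// marching cubes reconstruction and the iteration count of 5 is an arbitrary choice):
+/// ```ignore
+/// let vertex_map = marching_cubes_cleanup(&mut mesh, &grid, 5, false);
+/// ```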
+pub fn marching_cubes_cleanup( + mesh: &mut TriMesh3d, + grid: &UniformCartesianCubeGrid3d, + max_iter: usize, + keep_vertices: bool, +) -> Vec> { + profile!("marching_cubes_cleanup"); + + let half_dx = grid.cell_size() / (R::one() + R::one()); + + let nearest_grid_point = { + profile!("determine nearest grid points"); + mesh.vertices + .par_iter() + .enumerate() + .map(|(_, v)| { + // TODO: Move this to uniform grid + let cell_ijk = grid.enclosing_cell(v); + let min_point = grid.get_point(cell_ijk).unwrap(); + let min_coord = grid.point_coordinates(&min_point); + + let mut nearest_point = min_point; + if (v.x - min_coord.x) > half_dx { + nearest_point = grid + .get_point_neighbor( + &nearest_point, + DirectedAxis::new(Axis::X, Direction::Positive), + ) + .unwrap() + } + if (v.y - min_coord.y) > half_dx { + nearest_point = grid + .get_point_neighbor( + &nearest_point, + DirectedAxis::new(Axis::Y, Direction::Positive), + ) + .unwrap() + } + if (v.z - min_coord.z) > half_dx { + nearest_point = grid + .get_point_neighbor( + &nearest_point, + DirectedAxis::new(Axis::Z, Direction::Positive), + ) + .unwrap() + } + + grid.flatten_point_index(&nearest_point) + }) + .collect::>() + }; + + let (tri_mesh, vertex_map) = { + profile!("mesh displacement"); + let mut mesh = HalfEdgeTriMesh::from(std::mem::take(mesh)); + + // Tracks per vertex how many collapsed vertices contributed to its position + let mut vertex_sum_count = vec![1_usize; mesh.vertices.len()]; + // Buffer for vertices that should get collapsed + let mut vertex_buffer = Vec::new(); + + for _ in 0..max_iter { + profile!("mesh displacement iteration"); + + let mut collapse_count = 0; + for v0 in 0..mesh.vertices.len() { + if !mesh.is_valid_vertex(v0) { + continue; + } + + for he in mesh.outgoing_half_edges(v0) { + let v1 = he.to; + if nearest_grid_point[v0] == nearest_grid_point[v1] { + vertex_buffer.push(v1); + } + } + + for &v1 in vertex_buffer.iter() { + if mesh.is_valid_vertex(v1) { + if let Some(he) = mesh.half_edge(v1, v0) { + if let Ok(_) = mesh.try_half_edge_collapse(he) { + collapse_count += 1; + + // Move to averaged position + let pos_v0 = mesh.vertices[v0]; + let pos_v1 = mesh.vertices[v1]; + + let n0 = vertex_sum_count[v0]; + let n1 = vertex_sum_count[v1]; + + let n_new = n0 + n1; + let pos_new = (pos_v0.scale(R::from_usize(n0).unwrap()) + + pos_v1.scale(R::from_usize(n1).unwrap())) + .unscale(R::from_usize(n_new).unwrap()); + + vertex_sum_count[v0] = n_new; + mesh.vertices[v0] = pos_new; + } + } + } + } + + vertex_buffer.clear(); + } + + if collapse_count == 0 { + break; + } + } + + mesh.into_parts(keep_vertices) + }; + + *mesh = tri_mesh; + vertex_map +} + +pub fn decimation(mesh: &mut TriMesh3d, keep_vertices: bool) -> Vec> { + profile!("decimation"); + + let mut half_edge_mesh = HalfEdgeTriMesh::from(std::mem::take(mesh)); + merge_barnacle_configurations(&mut half_edge_mesh); + + { + profile!("convert mesh back"); + let (new_mesh, vertex_map) = half_edge_mesh.into_parts(keep_vertices); + *mesh = new_mesh; + return vertex_map; + } +} + +pub fn merge_barnacle_configurations(mesh: &mut HalfEdgeTriMesh) { + profile!("merge_barnacle_configurations"); + + merge_single_barnacle_configurations_he(mesh); + merge_double_barnacle_configurations_he(mesh); +} + +#[allow(unused)] +fn find_small_triangles(mesh: &HalfEdgeTriMesh, area_limit: R) -> Vec { + profile!("find_small_triangles"); + let zero_sized_triangles = mesh + .triangles + .par_iter() + .enumerate() + .filter(|(i, _)| mesh.is_valid_triangle(*i)) + .filter(|(_, tri)| 
mesh.tri_area_ijk::(tri) <= area_limit) + .map(|(i, _)| i) + .collect::>(); + info!( + "Found {} small sized triangles (area <= {})", + zero_sized_triangles.len(), + area_limit + ); + zero_sized_triangles +} + +#[allow(unused)] +fn find_bad_triangles(mesh: &HalfEdgeTriMesh, max_aspect_ratio: R) -> Vec { + profile!("find_bad_triangles"); + let mut ars = mesh + .triangles + .par_iter() + .enumerate() + .filter(|(i, _)| mesh.is_valid_triangle(*i)) + .filter(|(_, tri)| mesh.tri_area_ijk::(tri) > R::default_epsilon()) + .map(|(i, tri)| { + let aspect_ratio = mesh.tri_aspect_ratio::(tri); + (i, aspect_ratio) + }) + .filter(|(_, ar)| *ar >= max_aspect_ratio) + .collect::>(); + ars.par_sort_unstable_by(|a, b| { + let a: R = a.1; + let b: R = b.1; + a.to_f32() + .unwrap() + .total_cmp(&b.to_f32().unwrap()) + .reverse() + }); + dbg!(&ars[0..10.min(ars.len())]); + println!( + "Found {} triangles with bad aspect ratio (>= {})", + ars.len(), + max_aspect_ratio + ); + ars.into_iter().map(|(i, _)| i).collect() +} + +#[allow(unused)] +fn process_triangle_collapse_queue( + mesh: &mut HalfEdgeTriMesh, + triangles: impl Iterator, +) -> (Vec, usize) { + let mut processed = 0; + let remaining = triangles + .flat_map(|tri_idx| { + if !mesh.is_valid_triangle(tri_idx) { + return None; + } + + let tri = mesh.triangles[tri_idx]; + + let mut last_res = None; + let mut from = 0; + let mut to = 0; + // Try to find an edge of the triangle that can be collapsed + for i in 0..3 { + from = tri[i]; + to = tri[(i + 1) % 3]; + + if let Some(he) = mesh.half_edge(from, to) { + last_res = Some(mesh.try_half_edge_collapse(he)); + match last_res { + Some(Ok(_)) => { + processed += 1; + return None; + } + _ => {} + } + } else { + warn!( + "Invalid collapse: Half-edge missing (from {} to {})", + from, to + ); + return None; + } + } + + match last_res { + Some(Err(IllegalHalfEdgeCollapse::IntersectionOfOneRing)) => Some(tri_idx), + Some(Err(e)) => { + warn!("Invalid collapse: {:?} (from {} to {})", e, from, to); + None + } + _ => return None, + } + }) + .collect(); + + (remaining, processed) +} + +#[allow(unused)] +fn process_triangle_collapse_queue_iterative( + mesh: &mut HalfEdgeTriMesh, + triangles: impl Iterator, +) -> usize { + profile!("process_triangle_collapse_queue_iterative"); + let (mut remaining, mut processed) = process_triangle_collapse_queue(mesh, triangles); + let mut iter = 1; + info!( + "{} collapse operations remaining after pass {}", + remaining.len(), + iter + ); + while !remaining.is_empty() && iter < 5 { + iter += 1; + let (remaining_new, processed_new) = + process_triangle_collapse_queue(mesh, remaining.into_iter()); + remaining = remaining_new; + processed += processed_new; + info!( + "{} collapse operations remaining after pass {}", + remaining.len(), + iter + ); + } + + processed +} + +fn process_collapse_queue( + mesh: &mut HalfEdgeTriMesh, + collapses: impl Iterator, +) -> SetType<(usize, usize)> { + collapses + .flat_map(|(from, to)| { + if let Some(he) = mesh.half_edge(from, to) { + match mesh.try_half_edge_collapse(he) { + Ok(_) => None, + Err(IllegalHalfEdgeCollapse::IntersectionOfOneRing) => Some((from, to)), + Err(e) => { + warn!("Invalid collapse: {:?} (from {} to {})", e, from, to); + None + } + } + } else { + warn!( + "Invalid collapse: Half-edge missing (from {} to {})", + from, to + ); + None + } + }) + .collect() +} + +fn process_collapse_queue_iterative( + mesh: &mut HalfEdgeTriMesh, + collapses: impl Iterator, +) { + profile!("process_collapse_queue_iterative"); + let mut remaining = 
+fn process_collapse_queue_iterative<R: Real>(
+    mesh: &mut HalfEdgeTriMesh<R>,
+    collapses: impl Iterator<Item = (usize, usize)>,
+) {
+    profile!("process_collapse_queue_iterative");
+    let mut remaining = process_collapse_queue(mesh, collapses);
+    let mut iter = 1;
+    info!(
+        "{} collapse operations remaining after pass {}",
+        remaining.len(),
+        iter
+    );
+    while !remaining.is_empty() && iter < 5 {
+        iter += 1;
+        remaining = process_collapse_queue(mesh, remaining.into_iter());
+        info!(
+            "{} collapse operations remaining after pass {}",
+            remaining.len(),
+            iter
+        );
+    }
+}
+
+pub fn merge_single_barnacle_configurations_he<R: Real>(mesh: &mut HalfEdgeTriMesh<R>) {
+    profile!("merge_single_barnacle_configurations");
+
+    //let vertex_map = &mesh.vertex_map;
+
+    let half_edge_collapses = {
+        profile!("find candidates");
+
+        let mut candidates = mesh
+            .vertices
+            .par_iter()
+            .enumerate()
+            .filter_map(|(i, _v_i)| {
+                (mesh.vertex_one_ring_len(i) == 4
+                    && mesh
+                        .vertex_one_ring(i)
+                        .map(|j| mesh.vertex_one_ring_len(j))
+                        .all(|len| len >= 4 && len <= 6)
+                    && mesh
+                        .vertex_one_ring(i)
+                        .map(|j| mesh.vertex_one_ring_len(j))
+                        .sum::<usize>()
+                        == 20)
+                    .then_some(i)
+            })
+            .collect::<SetType<_>>();
+
+        info!("Found {} single barnacle candidates", candidates.len());
+
+        let invalid_candidates = candidates
+            .par_iter()
+            .copied()
+            .filter_map(|c| {
+                mesh.vertex_one_ring(c)
+                    .any(|i| candidates.contains(&i))
+                    .then_some(c)
+            })
+            .collect::<Vec<_>>();
+        info!(
+            "Filtered out {} adjacent candidates",
+            invalid_candidates.len()
+        );
+        invalid_candidates.into_iter().for_each(|c| {
+            candidates.remove(&c);
+        });
+
+        /*
+        let mut max_angles = candidates
+            .iter()
+            .copied()
+            .map(|c| {
+                let mut max_angle = R::zero();
+                for j in mesh.vertex_one_ring(c) {
+                    if let Some(he) = mesh.half_edge(j, c) {
+                        if let Ok(_) = mesh.is_collapse_ok(he) {
+                            max_angle =
+                                max_angle.max(mesh.half_edge_collapse_max_normal_change(he));
+                            println!("max_angle: {}", max_angle);
+                        }
+                    }
+                }
+                println!(
+                    "max angle change for config {}: {}\n\n",
+                    c,
+                    max_angle.to_f32().unwrap().to_degrees()
+                );
+                (c, max_angle.to_f32().unwrap().to_degrees())
+            })
+            .collect::<Vec<_>>();
+        max_angles.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
+        dbg!(max_angles);
+        */
+
+        let half_edge_collapses = candidates
+            .iter()
+            .copied()
+            .flat_map(|c| mesh.vertex_one_ring(c).map(move |i| (i, c)))
+            .collect::<Vec<_>>();
+
+        info!("Enqueued {} collapse operations", half_edge_collapses.len());
+
+        half_edge_collapses
+    };
+
+    process_collapse_queue_iterative(mesh, half_edge_collapses.iter().map(|(i, j)| (*i, *j)));
+}
+
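+/// Detects and merges "double barnacle" configurations: two adjacent valence-5 center vertices
+/// whose one-ring valences (sorted) are [5, 5, 5, 6, 6]; the surrounding one-ring vertices are
+/// collapsed into the two center vertices (choosing the nearer center if adjacent to both)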
+pub fn merge_double_barnacle_configurations_he<R: Real>(mesh: &mut HalfEdgeTriMesh<R>) {
+    profile!("merge_double_barnacle_configurations");
+
+    let is_center_candidate = |i: usize| -> bool {
+        if mesh.vertex_one_ring_len(i) == 5 {
+            let mut neighbor_count: [usize; 5] =
+                std::array::from_fn(|j| mesh.vertex_one_ring_len(mesh.vertex_one_ring_ith(i, j)));
+            neighbor_count.sort_unstable();
+            neighbor_count == [5, 5, 5, 6, 6]
+        } else {
+            false
+        }
+    };
+
+    let half_edge_collapses = {
+        profile!("find candidates");
+
+        let sorted_pair = |i: usize, j: usize| -> (usize, usize) {
+            let a = i.min(j);
+            let b = j.max(i);
+            (a, b)
+        };
+
+        let mut candidate_pairs = mesh
+            .vertices
+            .par_iter()
+            .enumerate()
+            .filter_map(|(i, _v_i)| {
+                if is_center_candidate(i) {
+                    let mut count = 0;
+                    let mut other = 0;
+
+                    // Identify the second center vertex of the double config
+                    for j in mesh.vertex_one_ring(i) {
+                        if is_center_candidate(j) {
+                            count += 1;
+                            other = j;
+                        }
+                    }
+
+                    // Sort the candidate pair to avoid duplicates
+                    (count == 1).then_some(sorted_pair(i, other))
+                } else {
+                    None
+                }
+            })
+            .collect::<SetType<_>>();
+
+        info!("Found {} double barnacle candidates", candidate_pairs.len());
+
+        // First filter out any candidate pair with one vertex of the pair being part of another pair.
+        // This allows making a unique vertex -> pair mapping in the next step.
+        {
+            let is_double_candidate_overlapping = |i: usize, j: usize| {
+                let pair = sorted_pair(i, j);
+                mesh.vertex_one_ring(i).any(|k| {
+                    let other_pair = sorted_pair(i, k);
+                    // Only filter out one side of the adjacency
+                    let is_smaller = other_pair < pair;
+                    k != j && is_smaller && candidate_pairs.contains(&other_pair)
+                })
+            };
+
+            let invalid_candidates = candidate_pairs
+                .par_iter()
+                .copied()
+                .filter_map(|(i, j)| {
+                    (is_double_candidate_overlapping(i, j) || is_double_candidate_overlapping(j, i))
+                        .then_some((i, j))
+                })
+                .collect::<Vec<_>>();
+            info!(
+                "Filtered out {} overlapping candidates",
+                invalid_candidates.len()
+            );
+
+            invalid_candidates.into_iter().for_each(|c| {
+                candidate_pairs.remove(&c);
+            });
+        }
+
+        // Filter out any candidate pairs whose neighbors overlap with another candidate pair
+        {
+            // Collect unique mapping of all center vertices to their candidate pair
+            let mut candidate_to_pair_map = MapType::default();
+            for (i, j) in candidate_pairs.iter().copied() {
+                candidate_to_pair_map.insert(i, sorted_pair(i, j));
+                candidate_to_pair_map.insert(j, sorted_pair(i, j));
+            }
+
+            // Checks if any neighbor of `i` has a neighbor that belongs to a different candidate pair than `(i,j)`
+            let is_double_candidate_adjacent = |i: usize, j: usize| -> bool {
+                let pair = sorted_pair(i, j);
+                mesh.vertex_one_ring(i)
+                    .filter(|k| *k != j)
+                    .flat_map(|k| mesh.vertex_one_ring(k).filter(move |&l| l != i && l != j))
+                    .any(|l| {
+                        if let Some(other_pair) = candidate_to_pair_map.get(&l) {
+                            *other_pair < pair
+                        } else {
+                            false
+                        }
+                    })
+            };
+
+            let invalid_candidates = candidate_pairs
+                .par_iter()
+                .copied()
+                .filter_map(|(i, j)| {
+                    (is_double_candidate_adjacent(i, j) || is_double_candidate_adjacent(j, i))
+                        .then_some((i, j))
+                })
+                .collect::<Vec<_>>();
+            info!(
+                "Filtered out {} adjacent neighbor candidates",
+                invalid_candidates.len()
+            );
+
+            invalid_candidates.into_iter().for_each(|c| {
+                candidate_pairs.remove(&c);
+            });
+        }
+
+        // Collect the actual half-edge collapses to perform
+        let mut half_edge_collapses = MapType::default();
+        for (i, j) in candidate_pairs {
+            let mut insert_replacement = |i: usize, j: usize, k: usize| {
+                if k != j {
+                    if mesh.vertex_one_ring(k).all(|l| l != j) {
+                        half_edge_collapses.insert(k, i);
+                    } else if (mesh.vertices[k] - mesh.vertices[i]).norm()
+                        <= (mesh.vertices[k] - mesh.vertices[j]).norm()
+                    {
+                        half_edge_collapses.insert(k, i);
+                    } else {
+                        half_edge_collapses.insert(k, j);
+                    }
+                }
+            };
+
+            for k in mesh.vertex_one_ring(i) {
+                insert_replacement(i, j, k);
+            }
+
+            for k in mesh.vertex_one_ring(j) {
+                insert_replacement(j, i, k);
+            }
+        }
+
+        info!("Enqueued {} collapse operations", half_edge_collapses.len());
+
+        half_edge_collapses
+    };
+
+    process_collapse_queue_iterative(mesh, half_edge_collapses.iter().map(|(i, j)| (*i, *j)));
+}
+
+/// Merges triangles sharing an edge to quads if they fulfill the given criteria
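+///
+/// A pair of triangles sharing an edge is merged if (as checked below): all four edge lengths of
+/// the resulting quad stay within a factor of `non_squareness_limit` of the ideal square edge
+/// length `diagonal / sqrt(2)`, all interior angles of the quad are at most `max_interior_angle`
+/// and the angle between the two triangle normals is at most `normal_angle_limit_rad`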
+pub fn convert_tris_to_quads<R: Real>(
+    mesh: &TriMesh3d<R>,
+    non_squareness_limit: R,
+    normal_angle_limit_rad: R,
+    max_interior_angle: R,
+) -> MixedTriQuadMesh3d<R> {
+    profile!("tri_to_quad");
+
+    let vert_tri_map = mesh.vertex_cell_connectivity();
+    let tri_normals = mesh
+        .triangles
+        .par_iter()
+        .map(|tri| {
+            let v0 = &mesh.vertices[tri[0]];
+            let v1 = &mesh.vertices[tri[1]];
+            let v2 = &mesh.vertices[tri[2]];
+            (v1 - v0).cross(&(v2 - v1)).normalize()
+        })
+        .collect::<Vec<_>>();
+
+    let min_dot = normal_angle_limit_rad.cos();
+    let max_non_squareness = non_squareness_limit;
+    let sqrt_two = R::from_f64(2.0_f64.sqrt()).unwrap();
+
+    let tris_to_quad = |tri_i: &[usize; 3], tri_j: &[usize; 3]| -> [usize; 4] {
+        let mut quad = [0, 0, 0, 0];
+        let missing_vertex: usize = tri_j.iter().copied().find(|v| !tri_i.contains(v)).unwrap();
+
+        quad[0] = tri_i[0];
+        if tri_j.contains(&tri_i[0]) {
+            if tri_j.contains(&tri_i[1]) {
+                quad[1] = missing_vertex;
+                quad[2] = tri_i[1];
+                quad[3] = tri_i[2];
+            } else {
+                quad[1] = tri_i[1];
+                quad[2] = tri_i[2];
+                quad[3] = missing_vertex;
+            }
+        } else if tri_j.contains(&tri_i[1]) {
+            quad[1] = tri_i[1];
+            quad[2] = missing_vertex;
+            quad[3] = tri_i[2];
+        } else {
+            panic!("this should not happen");
+        }
+
+        quad
+    };
+
+    /// Computes the interior angle of a quad at vertex `v_center` with the quad given by [v_prev, v_center, v_next, v_opp]
+    fn quad_interior_angle<R: Real>(
+        v_center: &Vector3<R>,
+        v_prev: &Vector3<R>,
+        v_next: &Vector3<R>,
+        v_opp: &Vector3<R>,
+    ) -> R {
+        let d_prev = v_prev - v_center;
+        let d_middle = v_opp - v_center;
+        let d_next = v_next - v_center;
+
+        let l_prev = d_prev.norm();
+        let l_middle = d_middle.norm();
+        let l_next = d_next.norm();
+
+        let angle_prev = (d_prev.dot(&d_middle)).unscale(l_prev * l_middle).acos();
+        let angle_next = (d_middle.dot(&d_next)).unscale(l_middle * l_next).acos();
+
+        angle_prev + angle_next
+    }
+
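+    // Candidate search: for every triangle, look at all triangles sharing at least one vertex;
+    // a pair sharing exactly two vertices (i.e. an edge) is evaluated against the merge criteria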
+    let quad_candidates = mesh
+        .triangles
+        .par_iter()
+        .enumerate()
+        .filter_map(|(i, tri_i)| {
+            for &vert in tri_i {
+                // Loop over all triangles of all vertices of the triangle from iterator chain
+                for &j in &vert_tri_map[vert] {
+                    // Skip triangle from iterator chain and make sure that we process every triangle pair only once
+                    if j <= i {
+                        continue;
+                    }
+
+                    let tri_j = &mesh.triangles[j];
+                    let mut recurring_verts = 0;
+                    let mut other_vert = 0;
+                    for &v in tri_j {
+                        if tri_i.contains(&v) {
+                            recurring_verts += 1;
+                            if v != vert {
+                                other_vert = v;
+                            }
+                        }
+                    }
+
+                    // Found triangle pair with shared edge
+                    if recurring_verts == 2 {
+                        let mut convert = false;
+                        let mut quality = R::one();
+
+                        // Check "squareness" of the two triangles
+                        {
+                            let quad = tris_to_quad(tri_i, tri_j);
+
+                            // Compute diagonal edge length
+                            let diag = (mesh.vertices[vert] - mesh.vertices[other_vert]).norm();
+                            let max_length = (diag / sqrt_two) * max_non_squareness;
+                            let min_length = (diag / sqrt_two) * max_non_squareness.recip();
+
+                            let v0 = mesh.vertices[quad[0]];
+                            let v1 = mesh.vertices[quad[1]];
+                            let v2 = mesh.vertices[quad[2]];
+                            let v3 = mesh.vertices[quad[3]];
+
+                            let d0 = v1 - v0;
+                            let d1 = v2 - v1;
+                            let d2 = v3 - v2;
+                            let d3 = v0 - v3;
+
+                            let edge_ls = [d0.norm(), d1.norm(), d2.norm(), d3.norm()];
+
+                            let angles = [
+                                quad_interior_angle(&v0, &v3, &v1, &v2),
+                                quad_interior_angle(&v1, &v0, &v2, &v3),
+                                quad_interior_angle(&v2, &v3, &v1, &v0),
+                                quad_interior_angle(&v3, &v2, &v0, &v1),
+                            ];
+
+                            if edge_ls
+                                .iter()
+                                .all(|&edge_l| edge_l <= max_length && edge_l >= min_length)
+                                && angles.iter().all(|&angle| angle <= max_interior_angle)
+                            {
+                                convert = true;
+
+                                let mut longest = edge_ls[0];
+                                let mut shortest = edge_ls[0];
+
+                                for l in edge_ls {
+                                    longest = longest.max(l);
+                                    shortest = shortest.min(l);
+                                }
+
+                                quality = shortest / longest;
+                            }
+                        }
+
+                        // Check normal deviation of triangles
+                        if convert {
+                            let dot_i_j = tri_normals[i].dot(&tri_normals[j]);
+                            convert = dot_i_j >= min_dot;
+                        }
+
+                        if convert {
+                            // Return the two triangle indices that should be merged
+                            return Some(((i, j), quality));
+                        }
+                    }
+                }
+            }
+
+            None
+        })
+        .collect::<Vec<_>>();
+
+    info!(
+        "Number of quad conversion candidates: {}",
+        quad_candidates.len()
+    );
+
+    let mut triangles_to_remove = SetType::default();
+    let mut filtered_candidates = SetType::default();
+    for ((i, j), _q) in quad_candidates {
+        // TODO: If triangle already exists in list, compare quality
+
+        if !triangles_to_remove.contains(&i) && !triangles_to_remove.contains(&j) {
+            triangles_to_remove.insert(i);
+            triangles_to_remove.insert(j);
+
+            filtered_candidates.insert((i, j));
+        }
+    }
+
+    let quads: Vec<[usize; 4]> = filtered_candidates
+        .par_iter()
+        .copied()
+        .map(|(i, j)| {
+            let tri_i = &mesh.triangles[i];
+            let tri_j = &mesh.triangles[j];
+            tris_to_quad(tri_i, tri_j)
+        })
+        .collect();
+
+    let filtered_triangles = mesh
+        .triangles
+        .par_iter()
+        .copied()
+        .enumerate()
+        .filter_map(|(i, tri)| (!triangles_to_remove.contains(&i)).then_some(tri))
+        .collect::<Vec<_>>();
+
+    info!("Before conversion: {} triangles", mesh.triangles.len());
+    info!(
+        "After conversion: {} triangles, {} quads, {} total cells ({:.2}% fewer)",
+        filtered_triangles.len(),
+        quads.len(),
+        filtered_triangles.len() + quads.len(),
+        (((mesh.triangles.len() - (filtered_triangles.len() + quads.len())) as f64)
+            / (mesh.triangles.len() as f64))
+            * 100.0
+    );
+
+    let mut cells = Vec::with_capacity(filtered_triangles.len() + quads.len());
+    cells.extend(filtered_triangles.into_iter().map(TriangleOrQuadCell::Tri));
+    cells.extend(quads.into_iter().map(TriangleOrQuadCell::Quad));
+
+    MixedTriQuadMesh3d {
+        vertices: mesh.vertices.clone(),
+        cells,
+    }
+}
diff --git a/splashsurf_lib/src/reconstruction.rs b/splashsurf_lib/src/reconstruction.rs
index a3b6d2e..56b5764 100644
--- a/splashsurf_lib/src/reconstruction.rs
+++ b/splashsurf_lib/src/reconstruction.rs
@@ -1,9 +1,10 @@
+use anyhow::Context;
 use log::info;
 use nalgebra::Vector3;
 
 use crate::dense_subdomains::{
-    compute_global_density_vector, decomposition, initialize_parameters, reconstruction, stitching,
-    subdomain_classification::GhostMarginClassifier,
+    compute_global_densities_and_neighbors, decomposition, initialize_parameters, reconstruction,
+    stitching, subdomain_classification::GhostMarginClassifier,
 };
 use crate::{profile, Index, Parameters, Real, SurfaceReconstruction};
 
@@ -13,51 +14,58 @@ pub(crate) fn reconstruct_surface_subdomain_grid<'a, I: Index, R: Real>(
     parameters: &Parameters<R>,
     output_surface: &'a mut SurfaceReconstruction<I, R>,
 ) -> Result<(), anyhow::Error> {
-    let mesh = {
-        profile!("surface reconstruction subdomain-grid");
-
-        let parameters = initialize_parameters(parameters, &particle_positions, output_surface)?;
-
-        // Filter "narrow band"
-        /*
-        let narrow_band_particles = extract_narrow_band(&parameters, &particles);
-        let particles = narrow_band_particles;
-        */
-
-        let subdomains =
-            decomposition::<I, R, GhostMarginClassifier<I>>(&parameters, &particle_positions)?;
-
-        /*
-        {
-            use super::dense_subdomains::debug::*;
-            subdomain_stats(&parameters, &particle_positions, &subdomains);
-            info!(
-                "Number of subdomains with only ghost particles: {}",
-                count_no_owned_particles_subdomains(&parameters, &particle_positions, &subdomains)
-            );
-        }
-        */
-
-        let particle_densities =
-            compute_global_density_vector(&parameters, &particle_positions, &subdomains);
-
-        let surface_patches = reconstruction(
-            &parameters,
-            &particle_positions,
-            &particle_densities,
-            &subdomains,
-        );
+    profile!("surface reconstruction subdomain-grid");
+
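+    // Initialize the internal parameters and store the global marching cubes grid in the output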
+    let internal_parameters =
+        initialize_parameters(parameters, &particle_positions, output_surface)?;
+    output_surface.grid = internal_parameters
+        .global_marching_cubes_grid()
+        .context("failed to convert global marching cubes grid")?;
+
+    // Filter "narrow band"
+    /*
+    let narrow_band_particles = extract_narrow_band(&parameters, &particles);
+    let particles = narrow_band_particles;
+    */
 
-    let global_mesh = stitching(surface_patches);
+    let subdomains =
+        decomposition::<I, R, GhostMarginClassifier<I>>(&internal_parameters, &particle_positions)?;
+
+    /*
+    {
+        use super::dense_subdomains::debug::*;
+        subdomain_stats(&parameters, &particle_positions, &subdomains);
         info!(
-            "Global mesh has {} vertices and {} triangles.",
-            global_mesh.vertices.len(),
-            global_mesh.triangles.len()
+            "Number of subdomains with only ghost particles: {}",
+            count_no_owned_particles_subdomains(&parameters, &particle_positions, &subdomains)
         );
+    }
+    */
+
+    let (particle_densities, particle_neighbors) = compute_global_densities_and_neighbors(
+        &internal_parameters,
+        &particle_positions,
+        &subdomains,
+    );
+
+    let surface_patches = reconstruction(
+        &internal_parameters,
+        &particle_positions,
+        &particle_densities,
+        &subdomains,
+    );
 
-        global_mesh
-    };
+    let global_mesh = stitching(surface_patches);
+    info!(
+        "Global mesh has {} vertices and {} triangles.",
+        global_mesh.vertices.len(),
+        global_mesh.triangles.len()
+    );
 
-    let _ = std::mem::replace(&mut output_surface.mesh, mesh);
+    output_surface.mesh = global_mesh;
+    output_surface.particle_densities = Some(particle_densities);
+    if parameters.global_neighborhood_list {
+        output_surface.particle_neighbors = Some(particle_neighbors);
+    }
 
     Ok(())
 }
diff --git a/splashsurf_lib/src/traits.rs b/splashsurf_lib/src/traits.rs
index 8bdb185..7c7a145 100644
--- a/splashsurf_lib/src/traits.rs
+++ b/splashsurf_lib/src/traits.rs
@@ -3,7 +3,7 @@ use std::hash::Hash;
 use std::ops::{AddAssign, MulAssign, SubAssign};
 
 use bytemuck::Pod;
-use nalgebra::{RealField, SVector};
+use nalgebra::{RealField, SMatrix};
 use num_integer::Integer;
 use num_traits::{
     Bounded, CheckedAdd, CheckedMul, CheckedSub, FromPrimitive, NumCast, SaturatingSub, ToPrimitive,
@@ -116,23 +116,6 @@ RealField + Pod + ThreadSafe
 {
-    /// Tries to convert this value to another [`Real`] type `T` by converting first to `f64` followed by `T::from_f64`. If the value cannot be represented by the target type, `None` is returned.
-    fn try_convert<T: Real>(self) -> Option<T> {
-        Some(T::from_f64(self.to_f64()?)?)
-    }
-
-    /// Tries to convert the values of a statically sized `nalgebra::SVector` to another type, same behavior as [`Real::try_convert`]
-    fn try_convert_vec_from<R, const D: usize>(vec: &SVector<R, D>) -> Option<SVector<Self, D>>
-    where
-        R: Real,
-    {
-        let mut converted = SVector::<Self, D>::zeros();
-        for i in 0..D {
-            converted[i] = vec[i].try_convert()?
-        }
-        Some(converted)
-    }
-
     /// Converts this value to the specified [`Index`] type. If the value cannot be represented by the target type, `None` is returned.
     fn to_index<I: Index>(self) -> Option<I> {
         I::from_f64(self.to_f64()?)
@@ -193,3 +176,41 @@ impl<
     > Real for T
 {
 }
+
+/// Trait for converting values, matrices, etc. from one `Real` type to another.
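+///
+/// A minimal usage sketch (the concrete `f64` -> `f32` conversion is just for illustration):
+/// ```ignore
+/// let x: f64 = 1.5;
+/// let y: Option<f32> = (&x).try_convert();
+/// ```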
+pub trait RealConvert: Sized {
+    type Out<To>
+    where
+        To: Real;
+
+    /// Tries to convert this value to the target type, returns `None` if the value cannot be represented by the target type
+    fn try_convert<To: Real>(self) -> Option<Self::Out<To>>;
+    /// Converts this value to the target type, panics if the value cannot be represented by the target type
+    fn convert<To: Real>(self) -> Self::Out<To> {
+        self.try_convert().expect("failed to convert")
+    }
+}
+
+impl<From: Real> RealConvert for &From {
+    type Out<To> = To where To: Real;
+
+    fn try_convert<To: Real>(self) -> Option<To> {
+        <To as NumCast>::from(*self)
+    }
+}
+
+impl<From: Real, const D1: usize, const D2: usize> RealConvert for SMatrix<From, D1, D2> {
+    type Out<To> = SMatrix<To, D1, D2> where To: Real;
+
+    fn try_convert<To: Real>(self) -> Option<SMatrix<To, D1, D2>> {
+        let mut m_out: SMatrix<To, D1, D2> = SMatrix::zeros();
+        m_out
+            .iter_mut()
+            .zip(self.iter())
+            .try_for_each(|(x_out, x_in)| {
+                *x_out = <To as NumCast>::from(*x_in)?;
+                Some(())
+            })?;
+        Some(m_out)
+    }
+}
diff --git a/splashsurf_lib/src/utils.rs b/splashsurf_lib/src/utils.rs
index 48d4cc3..fdcd96a 100644
--- a/splashsurf_lib/src/utils.rs
+++ b/splashsurf_lib/src/utils.rs
@@ -4,6 +4,15 @@ use log::info;
 use rayon::prelude::*;
 use std::cell::UnsafeCell;
 
+/// "Convert" an empty vector to preserve allocated memory if size and alignment match
+/// See https://users.rust-lang.org/t/pattern-how-to-reuse-a-vec-str-across-loop-iterations/61657/5
+/// See https://github.com/rust-lang/rfcs/pull/2802
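+///
+/// Usage sketch: with `v: Vec<f32>`, `let u: Vec<u32> = recycle(v);` may hand the allocation of
+/// `v` over to `u`, since `f32` and `u32` match in size and alignment.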
+#[allow(unused)]
+pub(crate) fn recycle<S, T>(mut v: Vec<S>) -> Vec<T> {
+    v.clear();
+    v.into_iter().map(|_| unreachable!()).collect()
+}
+
 /// Macro version of Option::map that allows e.g. using the ?-operator in the map expression
 ///
 /// For example:
diff --git a/splashsurf_lib/tests/integration_tests/mod.rs b/splashsurf_lib/tests/integration_tests/mod.rs
index 2319000..9b76c4c 100644
--- a/splashsurf_lib/tests/integration_tests/mod.rs
+++ b/splashsurf_lib/tests/integration_tests/mod.rs
@@ -1,5 +1,7 @@
 #[cfg(feature = "io")]
 pub mod test_full;
+#[cfg(feature = "io")]
+pub mod test_mesh;
 pub mod test_neighborhood_search;
 #[cfg(feature = "io")]
 pub mod test_octree;
diff --git a/splashsurf_lib/tests/integration_tests/test_full.rs b/splashsurf_lib/tests/integration_tests/test_full.rs
index 4133f67..c46d9b1 100644
--- a/splashsurf_lib/tests/integration_tests/test_full.rs
+++ b/splashsurf_lib/tests/integration_tests/test_full.rs
@@ -39,6 +39,7 @@ fn params_with_aabb(
         particle_aabb: domain_aabb,
         enable_multi_threading: false,
         spatial_decomposition: None,
+        global_neighborhood_list: false,
     };
 
     match strategy {
@@ -132,10 +133,11 @@ macro_rules! generate_test {
         let reconstruction =
             reconstruct_surface::<i64, _>(particle_positions.as_slice(), &parameters).unwrap();
 
-        write_vtk(reconstruction.mesh(), output_file, "mesh").unwrap();
+        write_vtk(reconstruction.mesh(), &output_file, "mesh").unwrap();
 
         println!(
-            "Reconstructed mesh from particle file '{}' with {} particles has {} triangles.",
+            "Reconstructed mesh '{}' from particle file '{}' with {} particles has {} triangles.",
+            output_file.display(),
             $input_file,
             particle_positions.len(),
             reconstruction.mesh().triangles.len()
@@ -157,10 +159,10 @@ macro_rules! generate_test {
 
         if test_for_boundary(&parameters) {
             // Ensure that the mesh does not have a boundary
-            if let Err(e) = check_mesh_consistency(reconstruction.grid(), reconstruction.mesh())
+            if let Err(e) = check_mesh_consistency(reconstruction.grid(), reconstruction.mesh(), true, true, true)
             {
                 eprintln!("{}", e);
-                panic!("Mesh contains boundary edges");
+                panic!("Mesh contains topological/manifold errors");
             }
         }
     }
diff --git a/splashsurf_lib/tests/integration_tests/test_mesh.rs b/splashsurf_lib/tests/integration_tests/test_mesh.rs
new file mode 100644
index 0000000..fcfba89
--- /dev/null
+++ b/splashsurf_lib/tests/integration_tests/test_mesh.rs
@@ -0,0 +1,66 @@
+use splashsurf_lib::halfedge_mesh::HalfEdgeTriMesh;
+use splashsurf_lib::io;
+use splashsurf_lib::mesh::{Mesh3d, MeshWithData};
+
+#[test]
+fn test_halfedge_ico() -> Result<(), anyhow::Error> {
+    let mesh = io::obj_format::surface_mesh_from_obj::<f64, _>("../data/icosphere.obj")?.mesh;
+    let mut he_mesh = HalfEdgeTriMesh::from(mesh);
+
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(12, 0).unwrap())?;
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(18, 2).unwrap())?;
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(2, 17).unwrap())?;
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(17, 0).unwrap())?;
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(27, 7).unwrap())?;
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(34, 7).unwrap())?;
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(39, 7).unwrap())?;
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(26, 7).unwrap())?;
+
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(0, 16).unwrap())?;
+
+    let (tri_mesh, _vertex_map) = he_mesh.into_parts(true);
+    let _tri_vertex_map = tri_mesh.vertex_vertex_connectivity();
+
+    io::obj_format::mesh_to_obj(&MeshWithData::new(tri_mesh), "../icosphere_new.obj")?;
+    Ok(())
+}
+
+#[test]
+fn test_halfedge_plane() -> Result<(), anyhow::Error> {
+    let mut mesh = io::obj_format::surface_mesh_from_obj::<f64, _>("../data/plane.obj")?.mesh;
+
+    // Make mesh curved
+    for v in mesh.vertices.iter_mut() {
+        v.y = -0.1 * (v.x * v.x + v.z * v.z);
+    }
+
+    io::obj_format::mesh_to_obj(&MeshWithData::new(mesh.clone()), "../plane_new_0.obj")?;
+
+    let mut he_mesh = HalfEdgeTriMesh::from(mesh);
+
+    //he_mesh.vertices[246].y = 0.01;
+    //he_mesh.vertices[267].y = 0.02;
+
+    dbg!(he_mesh.half_edge_collapse_max_normal_change(he_mesh.half_edge(224, 223).unwrap()));
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(224, 223).unwrap())?;
+
+    dbg!(he_mesh.half_edge_collapse_max_normal_change(he_mesh.half_edge(223, 225).unwrap()));
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(223, 225).unwrap())?;
+
+    dbg!(he_mesh.half_edge_collapse_max_normal_change(he_mesh.half_edge(225, 246).unwrap()));
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(225, 246).unwrap())?;
+
+    dbg!(he_mesh.half_edge_collapse_max_normal_change(he_mesh.half_edge(246, 267).unwrap()));
+    he_mesh.try_half_edge_collapse(he_mesh.half_edge(246, 267).unwrap())?;
+
+    //dbg!(he_mesh.half_edge_collapse_max_normal_change(he_mesh.half_edge(223, 202).unwrap()));
+    //he_mesh.try_half_edge_collapse(he_mesh.half_edge(223, 202).unwrap())?;
+    //dbg!(he_mesh.half_edge_collapse_max_normal_change(he_mesh.half_edge(202, 181).unwrap()));
+    //he_mesh.try_half_edge_collapse(he_mesh.half_edge(202, 181).unwrap())?;
+
+    let (tri_mesh, _vertex_map) = he_mesh.into_parts(true);
+    let _tri_vertex_map = tri_mesh.vertex_vertex_connectivity();
+
+    io::obj_format::mesh_to_obj(&MeshWithData::new(tri_mesh), "../plane_new_1.obj")?;
+    Ok(())
+}
diff --git a/splashsurf_lib/tests/integration_tests/test_octree.rs b/splashsurf_lib/tests/integration_tests/test_octree.rs
index 27ef429..3773964 100644
--- a/splashsurf_lib/tests/integration_tests/test_octree.rs
+++ b/splashsurf_lib/tests/integration_tests/test_octree.rs
@@ -25,7 +25,7 @@ fn octree_to_file<P: AsRef<Path>, I: Index, R: Real>(
     path: P,
 ) {
     let mesh = octree.hexmesh(&grid, false);
-    io::vtk_format::write_vtk(mesh.to_unstructured_grid(), path.as_ref(), "octree").unwrap();
+    io::vtk_format::write_vtk(mesh, path.as_ref(), "octree").unwrap();
 }
 
 #[test]