
PuzzleNet: Boundary-Aware Feature Matching for Non-Overlapping 3D Point Clouds Assembly

  • Regular Paper
Journal of Computer Science and Technology

Abstract

We address the assembly of 3D shapes from multiple non-overlapping geometric pieces, a scenario often encountered in 3D shape design, field archeology, and robotics. Existing methods rely on strong assumptions about the number of shape pieces or about coherent geometry or semantics across pieces. Although 3D registration with complex or low-overlap patterns has attracted growing attention, few methods consider shape assembly in which pieces rarely overlap at all. To address this problem, we present PuzzleNet, a novel framework inspired by puzzle solving, which performs multi-task learning by leveraging both 3D alignment and boundary information. Specifically, we design an end-to-end neural network based on a point cloud transformer, with two branches that estimate the rigid transformation and predict boundaries simultaneously. The framework extends naturally to reassembling multiple pieces into a full shape via an iterative greedy approach based on the distance between each pair of candidate-matched pieces. To train and evaluate PuzzleNet, we construct two datasets, DublinPuzzle and ModelPuzzle, based on a real-world urban scan dataset (DublinCity) and a synthetic CAD dataset (ModelNet40), respectively. Experiments demonstrate the effectiveness of our method in assembling multiple pieces with arbitrary geometry and inconsistent semantics. It surpasses state-of-the-art algorithms by more than 10 times on rotation metrics and 4 times on translation metrics.
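The multi-piece extension described in the abstract can be read as an iterative greedy loop: repeatedly pick the pair of pieces with the smallest inter-piece distance, align one to the other, and merge, until a single shape remains. The sketch below is illustrative only, not the authors' implementation: `align_pair` is a hypothetical stand-in for the learned pairwise alignment network, and a chamfer-style nearest-neighbor distance is assumed as the pair-ranking measure.

```python
import numpy as np

def pairwise_distance(a, b):
    # Symmetric nearest-neighbor (chamfer-style) distance between two
    # point sets a (N, 3) and b (M, 3); used only to rank candidate pairs.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def greedy_assemble(pieces, align_pair):
    # Iteratively merge the closest pair of pieces until one remains.
    # `align_pair(src, tgt)` is assumed to return `src` rigidly
    # transformed into `tgt`'s frame (e.g., by a learned pairwise model).
    pieces = [np.asarray(p, dtype=float) for p in pieces]
    while len(pieces) > 1:
        # Pick the candidate pair with minimal inter-piece distance.
        i, j = min(
            ((i, j) for i in range(len(pieces)) for j in range(i + 1, len(pieces))),
            key=lambda ij: pairwise_distance(pieces[ij[0]], pieces[ij[1]]),
        )
        moved = align_pair(pieces[i], pieces[j])
        merged = np.vstack([moved, pieces[j]])
        # Replace the two matched pieces with their merged result.
        pieces = [p for k, p in enumerate(pieces) if k not in (i, j)] + [merged]
    return pieces[0]
```

With three pieces, the loop runs twice: the two nearest pieces are merged first, then the result is merged with the remaining piece.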



Author information


Corresponding author

Correspondence to Jian-Wei Guo.

Supplementary Information

ESM 1 (PDF 670 kb)


About this article


Cite this article

Liu, HY., Guo, JW., Jiang, HY. et al. PuzzleNet: Boundary-Aware Feature Matching for Non-Overlapping 3D Point Clouds Assembly. J. Comput. Sci. Technol. 38, 492–509 (2023). https://doi.org/10.1007/s11390-023-3127-8

