
Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap

Paper | Data | [Supplementary Materials]

This repository contains an implementation for the ECCV 2022 paper Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap.

We propose Quasi-balanced Self-training on Speckle-projected Synthesis (QS3), an integrated scheme to cope with the shape and density shift between synthetic and real point clouds. Given identical CAD models, we generate the SpeckleNet dataset, whose point clouds simulate realistic noise in stereo imaging and matching, whereas point clouds in the existing ModelNet dataset are sampled from the object surfaces of those models. Moreover, we design a novel quasi-balanced self-training (QBST) strategy to further boost UDA performance. Combined with two representative UDA methods (DefRec+PCM, PointDAN) and two representative point cloud classification networks (PointNet++, DGCNN), our integrated QS3 performs consistently better than the alternatives when evaluated on real-world data, an adapted DepthScanNet.
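
For intuition, the sketch below illustrates the general idea behind class-balanced pseudo-label selection in self-training. It is only an illustrative reading of what "quasi-balanced" selection could look like, not the exact QBST procedure from the paper; the function name, per-class quota rule, and parameters are assumptions made for this example.

    # Illustrative sketch only: confidence-based, class-balanced pseudo-label
    # selection for self-training. Not the exact QBST algorithm from the paper.
    import numpy as np

    def select_pseudo_labels(probs, per_class_quota=50):
        """probs: (N, C) softmax outputs of the current model on unlabeled target data.
        Returns indices of selected samples and their pseudo-labels."""
        confidences = probs.max(axis=1)       # per-sample prediction confidence
        pseudo_labels = probs.argmax(axis=1)  # predicted class per sample
        selected = []
        for c in range(probs.shape[1]):
            idx = np.where(pseudo_labels == c)[0]
            # Keep only the most confident samples of each class, capped by a quota,
            # so that frequent classes cannot dominate the pseudo-labeled set.
            top = idx[np.argsort(-confidences[idx])][:per_class_quota]
            selected.extend(top.tolist())
        selected = np.array(selected, dtype=np.int64)
        return selected, pseudo_labels[selected]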

Installation Requirements

The code for the Mesh2Point pipeline, which generates noisy point clouds, is compatible with Blender 2.93, which ships with its own bundled Python environment. We use Mesh2Point to scan the ModelNet dataset and obtain a noisy point cloud dataset named SpeckleNet; the generated datasets are available here (SpeckleNet10 for 10 categories and SpeckleNet40 for 40 categories).

If you want to scan your own 3D models, please download Blender 2.93 and install the required Python libraries into Blender's bundled Python environment by running:

path_to_blender/blender-2.93.0-linux-x64/2.93/python/bin/pip install -r Mesh2Point_environment.yml

We also release the code for Quasi-Balanced Self-Training (QBST), which is compatible with Python 3.7.11 and PyTorch 1.7.1.

You can create an environment called QBST with the required dependencies by running:

pip install -r QBST_environment.yml

Usage

Data

We use our Mesh2Point pipeline to scan ModelNet and generate a new dataset, SpeckleNet. Note that Blender cannot import ModelNet's original file format, so we convert the Object File Format (.off) files to Wavefront OBJ (.obj). The converted version, ModelNet40_OBJ, is available here.
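
For reference, a minimal OFF-to-OBJ conversion can be written in a few lines of Python. The sketch below is only an example (the repository's own conversion script may differ); it also handles the common ModelNet quirk where the OFF header is fused with the element counts on one line.

    # Minimal example of converting an OFF mesh to OBJ (illustrative only).
    def off_to_obj(off_path, obj_path):
        with open(off_path) as f:
            tokens = f.read().split()
        # Some ModelNet files fuse the header with the counts, e.g. "OFF490 518 0".
        if tokens[0].startswith("OFF"):
            rest = tokens[0][3:]
            tokens = ([rest] if rest else []) + tokens[1:]
        n_verts, n_faces = int(tokens[0]), int(tokens[1])
        values = tokens[3:]  # skip the (unused) edge count
        with open(obj_path, "w") as out:
            for i in range(n_verts):
                x, y, z = values[3 * i : 3 * i + 3]
                out.write(f"v {x} {y} {z}\n")
            offset = 3 * n_verts
            for _ in range(n_faces):
                k = int(values[offset])  # number of vertices in this face
                face = [int(v) + 1 for v in values[offset + 1 : offset + 1 + k]]  # OBJ indices are 1-based
                out.write("f " + " ".join(map(str, face)) + "\n")
                offset += k + 1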

You can also scan your own 3D model dataset using:

CUDA_VISIBLE_DEVICES=0 path_to_blender/blender-2.93.0-linux-x64/blender ./blend_file/spot.blend -b --python scan_models.py -- --view=5 --modelnet_dir=path_to_model_dataset --category_list=bed

Note that you need to organize your own data in the same directory structure as ModelNet (see the example layout below).
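
For example, a minimal layout would look like the following (category and file names are only placeholders):

    path_to_model_dataset/
        bed/
            train/
                bed_0001.obj
                bed_0002.obj
            test/
                bed_0516.obj
        chair/
            train/
            test/
        ...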

Ordinary Experiments

ScanNet10 (S*) is a realistic dataset generated by PointDAN. It is extracted from smooth meshes reconstructed from noisy depth frames. DepthScanNet10 (D) is extracted directly from the noisy depth frame sequences, so it keeps more noisy points and is therefore more realistic than ScanNet10. Both datasets use depth frame sequences from ScanNet.

We train four ordinary models, specifically PointNet++, DGCNN, RSCNN, and SimpleView, on ModelNet10 (M) and SpeckleNet10 (S) respectively, and test classification accuracy on both realistic datasets, DepthScanNet10 (D) and ScanNet10 (S*). The results are as follows (detailed in our paper):

| Method | M → D | S → D (ours) | M → S* | S → S* (ours) |
| --- | --- | --- | --- | --- |
| PointNet++ | 48.4 ± 1.3 | 60.9 ± 0.8 | 46.1 ± 2.0 | 57.9 ± 0.8 |
| DGCNN | 46.7 ± 1.4 | 64.0 ± 1.0 | 48.7 ± 1.0 | 51.1 ± 1.2 |
| RSCNN | 49.7 ± 1.1 | 53.9 ± 0.2 | 47.7 ± 1.0 | 55.9 ± 0.6 |
| SimpleView | 54.6 ± 0.7 | 62.3 ± 1.3 | 45.0 ± 0.8 | 47.8 ± 0.8 |

The code we use for training the ordinary models comes from SimpleView; please follow the instructions in their GitHub repository to reproduce the results.

UDA Experiments

We also evaluate our QS3 and other unsupervised domain adaptation methods on the realistic datasets ScanNet10 (S*) and DepthScanNet10 (D). The results are as follows (detailed in our paper):

| Method | M → D | S → D | S → S* |
| --- | --- | --- | --- |
| PointDAN | 58.9 ± 0.9 | 62.9 ± 1.6 | 53.5 ± 0.8 |
| DefRec | 57.8 ± 1.1 | 60.8 ± 0.6 | 50.9 ± 0.1 |
| DefRec+PCM | 62.1 ± 0.8 | 64.4 ± 0.7 | 56.1 ± 0.2 |
| GAST w/o SPST | 62.4 ± 1.1 | 61.8 ± 1.0 | 49.3 ± 1.1 |
| GAST | 64.8 ± 1.4 | 64.4 ± 0.2 | 51.9 ± 0.9 |
| QBST (ours) | 66.4 ± 1.1 | - | - |
| QS3 (ours) | - | 72.4 ± 0.8 | 57.4 ± 0.2 |

To train QS3 from scratch, please run:

python train_QBST_sim2real.py

Acknowledgment

This work builds on several excellent prior works; we thank their authors for their efforts. If you find those works helpful, please consider citing them as well.

Citation

If you find our work useful in your research, please consider citing:

    @inproceedings{chen2022quasi,
      title={Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap},
      author={Chen, Yongwei and Wang, Zihao and Zou, Longkun and Chen, Ke and Jia, Kui},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2022}
    }
