- Unity 2021+
- Python 3.x
- Python packages: `opencv-contrib-python`, `numpy`, `Pillow`
- Unity package: `com.unity.nuget.newtonsoft-json`
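The Python dependencies above can be captured in a `requirements.txt`; a minimal sketch (unpinned — the repo does not specify exact versions):

```text
opencv-contrib-python
numpy
Pillow
```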
https://drive.google.com/drive/folders/1-UUY4vqwnzN8cYLk0dYU7C2SdA2VdurU?usp=sharing
- unity/Visual_V0/Assets/ScreenShots
- unity/Visual_V0/Assets/ModelsDatasetOutput
- unity/Visual_V0/Assets/DatasetTypeModelNet
- unity/Visual_V0/Assets/DatasetTypeModelNetMultiView
- unity/Visual_V0/Assets/DatasetTypeModelNetStereovision
- unity/Visual_V0/Assets/DatasetTypeModelNetCanny
- unity/Visual_V0/Assets/Resources
```
cd src/builders/utils
python extract_modelnet10_test_dataset.py
```

```
cd src/builders/utils
python conversion_off_obj.py
```

From Unity: Tools > conversion obj/prefabs
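The OFF-to-OBJ step is conceptually simple; here is an illustrative sketch of what a script like `conversion_off_obj.py` might do (this is an assumption, not the repo's actual implementation — real ModelNet files have quirks, such as counts fused onto the `OFF` header line, that a production converter must also handle):

```python
def off_to_obj(off_text: str) -> str:
    """Convert a minimal, well-formed OFF mesh to OBJ text (sketch only)."""
    lines = [l for l in off_text.splitlines() if l.strip()]
    assert lines[0].strip() == "OFF", "expected an OFF header"
    n_vertices, n_faces, _ = (int(x) for x in lines[1].split())
    out = []
    for line in lines[2:2 + n_vertices]:
        out.append("v " + line.strip())               # OBJ vertex line
    for line in lines[2 + n_vertices:2 + n_vertices + n_faces]:
        fields = line.split()
        count = int(fields[0])                        # leading per-face vertex count
        # OFF indices are 0-based, OBJ indices are 1-based
        out.append("f " + " ".join(str(int(i) + 1) for i in fields[1:1 + count]))
    return "\n".join(out) + "\n"

# One triangle: three vertices, one face
demo_off = "OFF\n3 1 0\n0 0 0\n1 0 0\n0 1 0\n3 0 1 2\n"
demo_obj = off_to_obj(demo_off)
```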
- Script (2 cameras already placed in Unity):
  `unity/Visual_V0/Assets/Scripts/ScreenshotCapture.cs`
- Script (2 fixed cameras placed automatically):
  `unity/Visual_V0/Assets/Scripts/ScreenshotCapturePlaceCamera.cs`
- 1 photo per camera
- Output: `left.png`, `right.png`, `cameras.json` per object
```
cd src/vision/sampling
python reconstruct_stereovision.py
```

- Input: `left.png` + `right.png` + `cameras.json`
- Output: `.ply` (MeshLab) + `.off` (ModelNet10)
Partial results: disparity is poorly calibrated for synthetic objects without texture.
- Script: `unity/Visual_V0/Assets/Scripts/CameraOrbitCapture.cs`
- 1 moving camera, helical trajectory (36 positions)
- Perlin noise texture added to the objects
- Output: `frame_XXXX.png` + `cameras.json` per object
```
cd src/vision/sampling
python reconstruct.py
```

- Input: multi-view RGB images + `cameras.json`
- Output: `.ply` (MeshLab) + `.off` (ModelNet10)
Not working: synthetic surfaces without texture do not provide enough keypoints for epipolar matching.
- Script: `unity/Visual_V0/Assets/Scripts/CameraOrbitCaptureDepthMap.cs`
- 1 moving camera, helical trajectory
- Custom grayscale depth shader
- Output: `frame_XXXX.png` + `depth_XXXX.png` + `cameras.json` per object
```
cd src/vision/sampling
python reconstruct_from_shaders.py  # without break
```

- Input: `depth_XXXX.png` (36 files) + `cameras.json`
- Output: `.ply` (MeshLab) + `.off` (ModelNet10)
Partial results: the views do not align correctly due to a coordinate-frame conversion error between Unity and OpenCV.
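A plausible fix (an assumption about the bug, not the repo's actual code): Unity uses a left-handed frame with +Y up, while OpenCV's camera convention is right-handed with +Y down, so points and extrinsics must be conjugated by a Y-flip before fusing views:

```python
import numpy as np

# Change of basis between Unity's frame (+Y up, left-handed) and
# OpenCV's camera frame (+Y down). S is its own inverse.
S = np.diag([1.0, -1.0, 1.0])

def unity_point_to_cv(p):
    return S @ np.asarray(p, dtype=float)

def unity_pose_to_cv(R, t):
    """If x_cam = R @ x_world + t in Unity coordinates, the same pose
    expressed in OpenCV coordinates is (S @ R @ S, S @ t)."""
    return S @ np.asarray(R, dtype=float) @ S, S @ np.asarray(t, dtype=float)

# Example: 90-degree rotation about Z plus a translation, Unity coordinates
R_unity = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
R_cv, t_cv = unity_pose_to_cv(R_unity, [1.0, 2.0, 3.0])
```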
- Script: `unity/Visual_V0/Assets/Scripts/CameraOrbitCaptureDepthMap.cs`
- Same capture as Method C
- Output: `depth_0000.png` + `cameras.json` per object
```
cd src/vision/sampling
python reconstruct_from_shaders.py  # with break
```

- Input: `depth_0000.png` (first view only) + `cameras.json`
- Output: `.ply` (MeshLab) + `.off` (ModelNet10)
Best results: the shape is recognisable, but the reconstruction is partial (only one face visible).
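Reconstructing from a single depth view amounts to back-projecting each pixel through the pinhole model. A minimal sketch (the intrinsics and the "0 = background" depth encoding are assumptions; the layout of the repo's `cameras.json` is not shown here):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map (depth per pixel, 0 = background) to an N x 3
    point cloud in the camera frame via the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop background pixels

# Flat plane 2 units in front of the camera: every pixel lands at z = 2.
cloud = backproject(np.full((4, 4), 2.0), fx=100.0, fy=100.0, cx=1.5, cy=1.5)
```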
```
Assets/ModelsDatasetOutput/
└── object_name/
    ├── cameras.json     # camera parameters
    ├── frame_XXXX.png   # RGB images
    └── depth_XXXX.png   # depth maps
```
```
Assets/DatasetTypeModelNet/
├── object_name.off   # point cloud for ModelNet10
└── object_name.ply   # point cloud for MeshLab
```
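For reference, the vertex-only `.off` layout used above is trivial to emit; a sketch of a writer (file naming is illustrative):

```python
import os
import tempfile

def write_off(path, points):
    """Write an N x 3 point cloud as an OFF file with zero faces,
    the vertex-only layout consumed by the ModelNet10 tooling."""
    with open(path, "w") as f:
        f.write("OFF\n")
        f.write(f"{len(points)} 0 0\n")   # vertex, face, edge counts
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

demo_path = os.path.join(tempfile.gettempdir(), "demo_cloud.off")
write_off(demo_path, [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
demo_lines = open(demo_path).read().splitlines()
```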
`.ply` files can be opened in MeshLab to visualise the reconstructed point clouds.
The generated .off files are in ModelNet10 format and can be used
directly as input for PointNet++ in the 3D classification pipeline.
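To feed the generated `.off` files into a PointNet++-style classifier, the vertex block just needs to be read and resampled to a fixed size. A hedged sketch (the 1024-point default and the vertex-only format are assumptions, not requirements stated by this repo):

```python
import os
import tempfile
import numpy as np

def load_off_points(path, n_points=1024):
    """Read the vertex block of a vertex-only OFF file and resample it
    to a fixed size, as PointNet++-style inputs usually require."""
    with open(path) as f:
        assert f.readline().strip() == "OFF"
        n_vertices, _, _ = (int(x) for x in f.readline().split())
        pts = np.array([[float(c) for c in f.readline().split()[:3]]
                        for _ in range(n_vertices)])
    # sample with replacement when the cloud is smaller than n_points
    idx = np.random.default_rng(0).choice(len(pts), n_points,
                                          replace=len(pts) < n_points)
    return pts[idx]

demo = os.path.join(tempfile.gettempdir(), "demo_points.off")
with open(demo, "w") as f:
    f.write("OFF\n4 0 0\n0 0 0\n1 0 0\n0 1 0\n0 0 1\n")
cloud = load_off_points(demo, n_points=8)
```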