Thank you for your awesome work.
I tried running your run.py with the Bonn Dataset and Wild-SLAM Mocap Dataset.
However, it fails with a CUDA out-of-memory error while the tracker is running:
```
Traceback (most recent call last):
  File ".../multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/WildGS-SLAM/src/slam.py", line 125, in mapping
    self.mapper.run()
  File "/root/WildGS-SLAM/src/mapper.py", line 180, in run
    self.initialize_mapper(video_idx)
  File "/root/WildGS-SLAM/src/mapper.py", line 779, in initialize_mapper
    self.gaussians.extend_from_pcd_seq(
  File "/root/WildGS-SLAM/thirdparty/gaussian_splatting/scene/gaussian_model.py", line 265, in extend_from_pcd_seq
    self.create_pcd_from_image(cam_info, init, scale=scale, depthmap=depthmap)
  File "/root/WildGS-SLAM/thirdparty/gaussian_splatting/scene/gaussian_model.py", line 136, in create_pcd_from_image
    return self.create_pcd_from_image_and_depth(cam, rgb, depth, init)
  File "/root/WildGS-SLAM/thirdparty/gaussian_splatting/scene/gaussian_model.py", line 203, in create_pcd_from_image_and_depth
    distCUDA2(torch.from_numpy(np.asarray(pcd.points)).float().cuda()),
MemoryError: std::bad_alloc: cudaErrorMemoryAllocation: out of memory
```
My setup is:
- GPU: Tesla V100 32GB
- OS: Ubuntu 22.04 (Docker)
- Python: 3.10.19 (using conda)
- PyTorch: 2.1.0+cu118
- MMCV: 1.7.2
I also lowered the image output resolution as suggested in the README, but the error persists.
Could you provide any recommended solutions or debugging approaches?
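In case it helps, one mitigation I am considering is capping the number of points handed to `distCUDA2`, since the crash happens when the full point cloud is moved to the GPU. This is only a sketch of that idea; the helper name `subsample_points` and the `max_points` threshold are my own, not from the repo:

```python
import numpy as np

def subsample_points(points: np.ndarray, max_points: int = 100_000) -> np.ndarray:
    """Randomly subsample a point cloud before it is sent to the GPU.

    Hypothetical workaround: cap the point count so distCUDA2's
    allocation stays within GPU memory. `max_points` is a guess and
    would need tuning per GPU.
    """
    if len(points) <= max_points:
        return points
    # Fixed seed so repeated runs pick the same subset (easier to debug).
    idx = np.random.default_rng(0).choice(len(points), size=max_points, replace=False)
    return points[idx]
```

If this kind of capping is a reasonable approach for `create_pcd_from_image_and_depth`, I could try patching it locally, but I would appreciate knowing whether there is a supported configuration option instead.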