Hi! Thanks for open-sourcing DreamCraft3D. I am studying your paper and code. When I tried to find the code corresponding to the diffusion timestep annealing described in the paper, I found that the implementation actually samples timesteps randomly, which confused me. Have I missed something important? I noticed the time-prior branch, but `time_prior` is set to `None` in the config, and I am not entirely clear about this part of the code:
```python
if self.cfg.time_prior is not None:
    time_index = torch.where(
        (self.time_prior_acc_weights - current_step_ratio) > 0
    )[0][0]
    if time_index == 0 or torch.abs(
        self.time_prior_acc_weights[time_index] - current_step_ratio
    ) < torch.abs(
        self.time_prior_acc_weights[time_index - 1] - current_step_ratio
    ):
        t = self.num_train_timesteps - time_index
    else:
        t = self.num_train_timesteps - time_index + 1
    t = torch.clip(t, self.min_step, self.max_step + 1)
    t = torch.full((batch_size,), t, dtype=torch.long, device=self.device)
```
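To check my own understanding, here is a minimal pure-Python sketch of what I think this lookup does: `time_prior_acc_weights` seems to be a cumulative (monotonically increasing) weight distribution over timesteps ordered from large `t` to small `t`, and the code maps the training progress ratio to the first index whose cumulative weight exceeds it, so `t` anneals downward over training. The Gaussian weight construction, `NUM_TRAIN_TIMESTEPS`, and the clipping bounds below are purely my guesses for illustration, not the repo's actual values:

```python
import math

# Hypothetical stand-ins for the repo's config values (my assumption).
NUM_TRAIN_TIMESTEPS = 1000
MIN_STEP, MAX_STEP = 20, 980

def make_acc_weights(m=800.0, s=300.0):
    # Guessed weight shape: a Gaussian over timesteps, favoring large t.
    # Index 0 corresponds to the LARGEST timestep, matching
    # t = num_train_timesteps - time_index in the snippet above.
    w = [
        math.exp(-((t - m) ** 2) / (2 * s ** 2))
        for t in range(NUM_TRAIN_TIMESTEPS, 0, -1)
    ]
    total = sum(w)
    acc, run = [], 0.0
    for x in w:
        run += x / total
        acc.append(run)
    return acc  # cumulative weights, increasing toward ~1.0

def sample_t(acc_weights, current_step_ratio):
    # First index whose cumulative weight exceeds the training progress,
    # then round to whichever neighbor is closer (as in the snippet).
    time_index = next(
        i for i, a in enumerate(acc_weights) if a - current_step_ratio > 0
    )
    if time_index == 0 or abs(
        acc_weights[time_index] - current_step_ratio
    ) < abs(acc_weights[time_index - 1] - current_step_ratio):
        t = NUM_TRAIN_TIMESTEPS - time_index
    else:
        t = NUM_TRAIN_TIMESTEPS - time_index + 1
    return max(MIN_STEP, min(t, MAX_STEP))

acc = make_acc_weights()
early = sample_t(acc, 0.05)  # early in training: large timestep
late = sample_t(acc, 0.95)   # late in training: small timestep
```

If this reading is right, the branch implements a deterministic progress-to-timestep schedule (annealing), and the random sampling path is only the fallback when `time_prior` is `None`.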
Is this the implementation of DreamTime? Could you share some insights with me? Many thanks!