Hi,
I am currently trying to reproduce the DI-Bench results.
The dataset and repository instances work fine, but I am running into issues during the dependency inference step due to OpenAI API rate limits, which prevent me from generating the prediction files.
I wanted to ask:
Do you provide any precomputed inference results (prediction files) for the dataset (especially the large Python set), so that I can directly run the evaluation step?
This would help verify that my setup is correct before running the full inference.
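(For context, as a stopgap I have been wrapping the API calls in a simple exponential-backoff retry. A minimal sketch is below; it takes a generic callable and treats any exception whose type name mentions "RateLimit" as retryable, so it does not assume a particular client library. The function and parameter names are illustrative, not part of DI-Bench.)

```python
import random
import time


def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() with exponential backoff plus jitter.

    Illustrative heuristic: any exception whose class name contains
    'RateLimit' is considered retryable. Adapt the check to the
    exception types of your actual client.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as exc:
            retryable = "RateLimit" in type(exc).__name__
            if not retryable or attempt == max_retries - 1:
                raise
            # Exponential backoff: 1s, 2s, 4s, ... plus small random jitter.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

This slows inference down considerably on the large Python set, which is why precomputed prediction files would be so helpful.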
Thanks a lot for your work!