Lately I've been seeing more and more of these errors when running parallel evaluation of many test cases. I am using multi-turn metrics (here, the conversation completeness metric).
```
    self.score = self._calculate_score()
                 ^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/deepeval/metrics/conversation_completeness/conversation_completeness.py", line 304, in _calculate_score
    if verdict.verdict.strip().lower() != "no":
       ^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'verdict'
```
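For context, here is a minimal sketch of the failing pattern and a None-safe variant. The `Verdict` model and `calculate_score` function below are simplified stand-ins I wrote for illustration, not deepeval's actual classes; the point is that when an LLM response fails to parse (more likely under parallel load), an element in the verdicts list can end up as `None`, and the unguarded `verdict.verdict` access then raises exactly this `AttributeError`:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Verdict:
    # LLM-extracted "yes"/"no"; may be None if the model's
    # response could not be parsed into a verdict.
    verdict: Optional[str]

def calculate_score(verdicts: List[Optional[Verdict]]) -> float:
    """None-safe stand-in for the score calculation.

    The crashing pattern is equivalent to:
        if verdict.verdict.strip().lower() != "no": ...
    which raises AttributeError when `verdict` is None.
    Here we skip unparsed entries instead of crashing.
    """
    valid = [v for v in verdicts if v is not None and v.verdict is not None]
    if not valid:
        return 0.0
    completed = sum(1 for v in valid if v.verdict.strip().lower() != "no")
    return completed / len(valid)
```

A workaround along these lines (guarding for `None` before the `.verdict` access, or retrying the failed LLM call) would avoid the crash, but the underlying issue is presumably that the verdict-generation step silently returns `None` under concurrency.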