
fix (commit e92e234)
Merged

Custom Eval - Upload #45678

Azure Pipelines / python - pullrequest failed Mar 13, 2026 in 25m 15s

Build #20260313.7 had test failures

Details

Tests

  • Failed: 36 (0.39%)
  • Passed: 7,614 (81.66%)
  • Other: 1,674 (17.95%)
  • Total: 9,324

Annotations

Check failure on line 316 in Build log

azure-pipelines / python - pullrequest

Build log #L316

The process '/mnt/vss/_work/1/s/venv/bin/python' failed with exit code 1

Check failure on line 67754 in Build log

Build log #L67754

There are one or more test failures detected in result files. Detailed summary of published test results can be viewed in the Tests tab.

Check failure on line 72940 in Build log

Build log #L72940

There are one or more test failures detected in result files. Detailed summary of published test results can be viewed in the Tests tab.

Check failure on line 3291 in Build log

Build log #L3291

PowerShell exited with code '1'.

Check failure on line 1 in test_evaluation_samples[sample_eval_upload_custom_evaluator]

test_evaluation_samples[sample_eval_upload_custom_evaluator]

azure.core.exceptions.HttpResponseError: b'{"Message":"Recording file path /mnt/vss/_work/1/s/.assets_distributed/5052/qUddAOLeug/python/sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_custom_evaluator].json does not exist.","Status":"NotFound"}'
Raw output
test_class = <test_samples_evaluations.TestSamplesEvaluations object at 0x7efe4288ead0>
sample_path = '/mnt/vss/_work/1/s/sdk/ai/azure-ai-projects/samples/evaluations/sample_eval_upload_custom_evaluator.py'
kwargs = {'__aggregate_cache_key': ('EnvironmentVariableLoader',), 'foundry_agent_name': 'sanitized-agent-name', 'foundry_model...'foundry_project_endpoint': 'https://sanitized-account-name.services.ai.azure.com/api/projects/sanitized-project-name'}

    def _wrapper_sync(test_class, sample_path, **kwargs):
>       return fn(test_class, sample_path, **kwargs)

tests/samples/sample_executor.py:596: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../../.venv/azure-ai-projects/.venv_mindependency/lib/python3.13/site-packages/devtools_testutils/proxy_testcase.py:309: in record_wrap
    recording_id, variables = start_record_or_playback(test_id)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_id = 'sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_custom_evaluator]'

    def start_record_or_playback(test_id: str) -> "Tuple[str, Dict[str, str]]":
        """Sends a request to begin recording or playing back the provided test.
    
        This returns a tuple, (a, b), where a is the recording ID of the test and b is the `variables` dictionary that maps
        test variables to values. If no variable dictionary was stored when the test was recorded, b is an empty dictionary.
        """
        variables = {}  # this stores a dictionary of test variable values that could have been stored with a recording
    
        json_payload = {"x-recording-file": test_id}
        assets_json = get_recording_assets(test_id)
        if assets_json:
            json_payload["x-recording-assets-file"] = assets_json
    
        encoded_payload = json.dumps(json_payload).encode("utf-8")
        http_client = get_http_client()
    
        if is_live():
            result = http_client.request(
                method="POST",
                url=RECORDING_START_URL,
                body=encoded_payload,
            )
            if result.status != 200:
                raise HttpResponseError(message=result.data)
            recording_id = result.headers["x-recording-id"]
    
        else:
            result = http_client.request(
                method="POST",
                url=PLAYBACK_START_URL,
                body=encoded_payload,
            )
            if result.status != 200:
>               raise HttpResponseError(message=result.data)
E               azure.core.exceptions.HttpResponseError: b'{"Message":"Recording file path /mnt/vss/_work/1/s/.assets_distributed/5052/qUddAOLeug/python/sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_custom_evaluator].json does not exist.","Status":"NotFound"}'

../../../.venv/azure-ai-projects/.venv_mindependency/lib/python3.13/site-packages/devtools_testutils/proxy_testcase.py:116: HttpResponseError
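All four test failures share the same root cause visible in this traceback: the test proxy's playback-start request returns 404 because the recording `.json` file named by the test ID is missing from the restored assets directory. As a hedged sketch (mirroring the `start_record_or_playback` source shown above; `build_playback_payload` is a hypothetical helper name, not part of `devtools_testutils`), this is the payload the proxy receives:

```python
import json

def build_playback_payload(test_id, assets_json=None):
    """Build the JSON body POSTed to the test proxy's playback-start URL.

    The proxy resolves "x-recording-file" to a recording .json file; a
    NotFound response like the one above means that file was never pushed
    to, or restored from, the external assets repository.
    """
    payload = {"x-recording-file": test_id}
    if assets_json:
        # assets.json tells the proxy which tag of the azure-sdk-assets
        # repository holds the recordings to restore.
        payload["x-recording-assets-file"] = assets_json
    return json.dumps(payload).encode("utf-8")

body = build_playback_payload(
    "sdk/ai/azure-ai-projects/tests/recordings/samples/example_test_id",
    "sdk/ai/azure-ai-projects/assets.json",
)
```

If the named recording has never been committed and pushed to the assets repository, the playback-start call will fail exactly as in these tracebacks.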

Check failure on line 1 in test_evaluation_samples[sample_eval_upload_friendly_evaluator]

test_evaluation_samples[sample_eval_upload_friendly_evaluator]

azure.core.exceptions.HttpResponseError: b'{"Message":"Recording file path /mnt/vss/_work/1/s/.assets_distributed/5052/qUddAOLeug/python/sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_friendly_evaluator].json does not exist.","Status":"NotFound"}'
Raw output
test_class = <test_samples_evaluations.TestSamplesEvaluations object at 0x7efe4288e9c0>
sample_path = '/mnt/vss/_work/1/s/sdk/ai/azure-ai-projects/samples/evaluations/sample_eval_upload_friendly_evaluator.py'
kwargs = {'__aggregate_cache_key': ('EnvironmentVariableLoader',), 'foundry_agent_name': 'sanitized-agent-name', 'foundry_model...'foundry_project_endpoint': 'https://sanitized-account-name.services.ai.azure.com/api/projects/sanitized-project-name'}

    def _wrapper_sync(test_class, sample_path, **kwargs):
>       return fn(test_class, sample_path, **kwargs)

tests/samples/sample_executor.py:596: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../../.venv/azure-ai-projects/.venv_mindependency/lib/python3.13/site-packages/devtools_testutils/proxy_testcase.py:309: in record_wrap
    recording_id, variables = start_record_or_playback(test_id)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_id = 'sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_friendly_evaluator]'

    (start_record_or_playback source identical to the listing in the first traceback above)
>               raise HttpResponseError(message=result.data)
E               azure.core.exceptions.HttpResponseError: b'{"Message":"Recording file path /mnt/vss/_work/1/s/.assets_distributed/5052/qUddAOLeug/python/sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_friendly_evaluator].json does not exist.","Status":"NotFound"}'

../../../.venv/azure-ai-projects/.venv_mindependency/lib/python3.13/site-packages/devtools_testutils/proxy_testcase.py:116: HttpResponseError

Check failure on line 1 in test_evaluation_samples[sample_eval_upload_custom_evaluator]

test_evaluation_samples[sample_eval_upload_custom_evaluator]

azure.core.exceptions.HttpResponseError: b'{"Message":"Recording file path /mnt/vss/_work/1/s/.assets_distributed/5050/qUddAOLeug/python/sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_custom_evaluator].json does not exist.","Status":"NotFound"}'
Raw output
test_class = <test_samples_evaluations.TestSamplesEvaluations object at 0x7f51e9a7ead0>
sample_path = '/mnt/vss/_work/1/s/sdk/ai/azure-ai-projects/samples/evaluations/sample_eval_upload_custom_evaluator.py'
kwargs = {'__aggregate_cache_key': ('EnvironmentVariableLoader',), 'foundry_agent_name': 'sanitized-agent-name', 'foundry_model...'foundry_project_endpoint': 'https://sanitized-account-name.services.ai.azure.com/api/projects/sanitized-project-name'}

    def _wrapper_sync(test_class, sample_path, **kwargs):
>       return fn(test_class, sample_path, **kwargs)

tests/samples/sample_executor.py:596: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../../.venv/azure-ai-projects/.venv_whl/lib/python3.13/site-packages/devtools_testutils/proxy_testcase.py:309: in record_wrap
    recording_id, variables = start_record_or_playback(test_id)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_id = 'sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_custom_evaluator]'

    (start_record_or_playback source identical to the listing in the first traceback above)
>               raise HttpResponseError(message=result.data)
E               azure.core.exceptions.HttpResponseError: b'{"Message":"Recording file path /mnt/vss/_work/1/s/.assets_distributed/5050/qUddAOLeug/python/sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_custom_evaluator].json does not exist.","Status":"NotFound"}'

../../../.venv/azure-ai-projects/.venv_whl/lib/python3.13/site-packages/devtools_testutils/proxy_testcase.py:116: HttpResponseError

Check failure on line 1 in test_evaluation_samples[sample_eval_upload_friendly_evaluator]

test_evaluation_samples[sample_eval_upload_friendly_evaluator]

azure.core.exceptions.HttpResponseError: b'{"Message":"Recording file path /mnt/vss/_work/1/s/.assets_distributed/5050/qUddAOLeug/python/sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_friendly_evaluator].json does not exist.","Status":"NotFound"}'
Raw output
test_class = <test_samples_evaluations.TestSamplesEvaluations object at 0x7f51e9a7e9c0>
sample_path = '/mnt/vss/_work/1/s/sdk/ai/azure-ai-projects/samples/evaluations/sample_eval_upload_friendly_evaluator.py'
kwargs = {'__aggregate_cache_key': ('EnvironmentVariableLoader',), 'foundry_agent_name': 'sanitized-agent-name', 'foundry_model...'foundry_project_endpoint': 'https://sanitized-account-name.services.ai.azure.com/api/projects/sanitized-project-name'}

    def _wrapper_sync(test_class, sample_path, **kwargs):
>       return fn(test_class, sample_path, **kwargs)

tests/samples/sample_executor.py:596: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../../.venv/azure-ai-projects/.venv_whl/lib/python3.13/site-packages/devtools_testutils/proxy_testcase.py:309: in record_wrap
    recording_id, variables = start_record_or_playback(test_id)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test_id = 'sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_friendly_evaluator]'

    (start_record_or_playback source identical to the listing in the first traceback above)
>               raise HttpResponseError(message=result.data)
E               azure.core.exceptions.HttpResponseError: b'{"Message":"Recording file path /mnt/vss/_work/1/s/.assets_distributed/5050/qUddAOLeug/python/sdk/ai/azure-ai-projects/tests/recordings/samples/test_samples_evaluations.pyTestSamplesEvaluationstest_evaluation_samples[sample_eval_upload_friendly_evaluator].json does not exist.","Status":"NotFound"}'

../../../.venv/azure-ai-projects/.venv_whl/lib/python3.13/site-packages/devtools_testutils/proxy_testcase.py:116: HttpResponseError