Replies: 1 comment
Hey @SeanModo! I'm here to help you with any bugs, questions, or contributions you have regarding ADetailer. Let's work together to solve this issue.

The error you're encountering, "ValueError: Expected a xpu device, but got: cuda:0", means the code is calling into PyTorch's XPU (Intel GPU) backend while the device is set to CUDA: the `device` argument passed to the model is `cuda:0`, but the torch build in your environment routes the call to `torch.xpu`, which only accepts XPU devices. To resolve this, make sure the device ADetailer passes to the model matches the backend your torch build actually supports — for example, use `xpu` with an Intel/IPEX build of torch, or install a CUDA build of torch if you intend to use `cuda:0`.

If these steps do not resolve the issue, you may need to further investigate the specific configuration and setup of your environment to ensure compatibility with the desired device type.
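As a rough illustration of the mismatch described above (the helper name and structure below are my own sketch, not ADetailer or Ultralytics code): the device string handed to the model should be chosen from the backend torch actually exposes, rather than hard-coded.

```python
# Hypothetical sketch: pick a torch device string that matches the backend
# available in the current torch build. Hard-coding "cuda:0" on a build whose
# timing/synchronize paths are routed to torch.xpu is what produces
# "ValueError: Expected a xpu device, but got: cuda:0".
def pick_device(cuda_available: bool, xpu_available: bool) -> str:
    """Return a device string consistent with the available backend."""
    if cuda_available:
        return "cuda:0"
    if xpu_available:
        return "xpu"
    return "cpu"

# In a real session you would feed it the actual probes, e.g.:
#   import torch
#   device = pick_device(
#       torch.cuda.is_available(),
#       hasattr(torch, "xpu") and torch.xpu.is_available(),
#   )
#   pred = model(image, conf=confidence, device=device)
```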
When I'm using ADetailer, I get this error. Is there something else I need to install?
│ ┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓ │
│ ┃ ┃ Value ┃ │
│ ┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩ │
│ │ version │ 24.5.1 │ │
│ │ ad_model │ face_yolov8n.pt │ │
│ │ ad_prompt │ │ │
│ │ ad_negative_prompt │ │ │
│ │ ad_controlnet_model │ None │ │
│ │ is_api │ False │ │
│ └─────────────────────┴─────────────────┘ │
│ ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │
│ │ /home/seanmodo/automatic/extensions/adetailer/aaaaaa/traceback.py:147 in wrapper │ │
│ │ │ │
│ │ 146 │ │ try: │ │
│ │ ❱ 147 │ │ │ return func(*args, **kwargs) │ │
│ │ 148 │ │ except Exception as e: │ │
│ │ │ │
│ │ /home/seanmodo/automatic/extensions/adetailer/scripts/!adetailer.py:821 in postprocess_image │ │
│ │ │ │
│ │ 820 │ │ │ │ │ continue │ │
│ │ ❱ 821 │ │ │ │ is_processed |= self._postprocess_image_inner(p, pp, args, n=n) │ │
│ │ 822 │ │
│ │ │ │
│ │ /home/seanmodo/automatic/extensions/adetailer/scripts/!adetailer.py:742 in │ │
│ │ _postprocess_image_inner │ │
│ │ │ │
│ │ 741 │ │ with change_torch_load(): │ │
│ │ ❱ 742 │ │ │ pred = predictor(ad_model, pp.image, args.ad_confidence, **kwargs) │ │
│ │ 743 │ │
│ │ │ │
│ │ /home/seanmodo/automatic/extensions/adetailer/adetailer/ultralytics.py:29 in ultralytics_predict │ │
│ │ │ │
│ │ 28 │ apply_classes(model, model_path, classes) │ │
│ │ ❱ 29 │ pred = model(image, conf=confidence, device=device) │ │
│ │ 30 │ │
│ │ │ │
│ │ /home/seanmodo/automatic/venv/lib/python3.10/site-packages/ultralytics/engine/model.py:179 in │ │
│ │ __call__ │ │
│ │ │ │
│ │ 178 │ │ """ │ │
│ │ ❱ 179 │ │ return self.predict(source, stream, **kwargs) │ │
│ │ 180 │ │
│ │ │ │
│ │ /home/seanmodo/automatic/venv/lib/python3.10/site-packages/ultralytics/engine/model.py:557 in │ │
│ │ predict │ │
│ │ │ │
│ │ 556 │ │ │ self.predictor.set_prompts(prompts) │ │
│ │ ❱ 557 │ │ return self.predictor.predict_cli(source=source) if is_cli else self.predictor(s │ │
│ │ 558 │ │
│ │ │ │
│ │ /home/seanmodo/automatic/venv/lib/python3.10/site-packages/ultralytics/engine/predictor.py:173 │ │
│ │ in __call__ │ │
│ │ │ │
│ │ 172 │ │ else: │ │
│ │ ❱ 173 │ │ │ return list(self.stream_inference(source, model, *args, **kwargs)) # merge │ │
│ │ 174 │ │
│ │ │ │
│ │ /home/seanmodo/automatic/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py:35 in │ │
│ │ generator_context │ │
│ │ │ │
│ │ 34 │ │ │ with ctx_factory(): │ │
│ │ ❱ 35 │ │ │ │ response = gen.send(None) │ │
│ │ 36 │ │
│ │ │ │
│ │ /home/seanmodo/automatic/venv/lib/python3.10/site-packages/ultralytics/engine/predictor.py:254 │ │
│ │ in stream_inference │ │
│ │ │ │
│ │ 253 │ │ │ │ # Preprocess │ │
│ │ ❱ 254 │ │ │ │ with profilers[0]: │ │
│ │ 255 │ │ │ │ │ im = self.preprocess(im0s) │ │
│ │ │ │
│ │ /home/seanmodo/automatic/venv/lib/python3.10/site-packages/ultralytics/utils/ops.py:46 in │ │
│ │ __enter__ │ │
│ │ │ │
│ │ 45 │ │ """Start timing.""" │ │
│ │ ❱ 46 │ │ self.start = self.time() │ │
│ │ 47 │ │ return self │ │
│ │ │ │
│ │ /home/seanmodo/automatic/venv/lib/python3.10/site-packages/ultralytics/utils/ops.py:61 in time │ │
│ │ │ │
│ │ 60 │ │ if self.cuda: │ │
│ │ ❱ 61 │ │ │ torch.cuda.synchronize(self.device) │ │
│ │ 62 │ │ return time.time() │ │
│ │ │ │
│ │ /home/seanmodo/automatic/venv/lib/python3.10/site-packages/torch/xpu/__init__.py:384 in │ │
│ │ synchronize │ │
│ │ │ │
│ │ 383 │ _lazy_init() │ │
│ │ ❱ 384 │ device = _get_device_index(device, optional=True) │ │
│ │ 385 │ return torch._C._xpu_synchronize(device) │ │
│ │ │ │
│ │ /home/seanmodo/automatic/venv/lib/python3.10/site-packages/torch/xpu/_utils.py:35 in │ │
│ │ _get_device_index │ │
│ │ │ │
│ │ 34 │ │ elif device.type != "xpu": │ │
│ │ ❱ 35 │ │ │ raise ValueError(f"Expected a xpu device, but got: {device}") │ │
│ │ 36 │ if not torch.jit.is_scripting(): │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ ValueError: Expected a xpu device, but got: cuda:0 │