Apple Silicon Macs can run Avatarify locally using PyTorch's MPS (Metal Performance Shaders) backend for GPU acceleration. This guide walks through the full setup.

## Requirements

- A Mac with Apple Silicon (M1, M2, M3, M4, or their Pro/Max/Ultra variants)
- macOS 12.3 (Monterey) or later
- Homebrew installed
- A working webcam (built-in or external)
## Step 1: Install Miniconda (ARM64)

Install the ARM64 version of Miniconda so that all packages are native to Apple Silicon:

```bash
brew install --cask miniconda
```

Or download the installer manually from the Miniconda site and choose the macOS Apple M1 64-bit (arm64) variant.

After installation, initialize conda for your shell:

```bash
conda init zsh   # default shell on macOS
# or: conda init bash
```

**Important:** You must close and reopen your terminal after running `conda init`. If you skip this step, you will see `CondaError: Run 'conda init' before 'conda activate'` when trying to activate environments. Alternatively, run `source ~/.zshrc` (or `source ~/.bashrc`) to reload the shell configuration without restarting.
## Step 2: Clone Avatarify

```bash
git clone https://github.com/alievk/avatarify-python.git
cd avatarify-python
```

## Step 3: Create the Conda Environment

Python 3.7 does not have ARM64 builds. Use Python 3.8 (or 3.10, for best compatibility):

```bash
conda create -y -n avatarify python=3.10
conda activate avatarify
```

## Step 4: Install PyTorch with MPS Support

PyTorch 1.12+ supports MPS acceleration on Apple Silicon. Install the latest stable version:

```bash
conda install -y pytorch torchvision -c pytorch
```

Verify that MPS is available:

```bash
python -c "import torch; print('MPS available:', torch.backends.mps.is_available())"
```

You should see `MPS available: True`. If not, make sure you are on macOS 12.3+ and using the ARM64 conda environment.
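Once MPS is confirmed, your own scripts can select the device defensively. The snippet below is not part of Avatarify; it is a minimal sketch of the standard device-selection pattern, with a CPU fallback for machines where MPS is unavailable:

```python
import torch

# Prefer the Metal (MPS) backend on Apple Silicon; fall back to CPU otherwise.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Sanity check: run a small matrix multiply on the chosen device.
x = torch.randn(64, 64, device=device)
y = x @ x
print(f"Computed a {tuple(y.shape)} matmul on {device}")
```

Because `is_available()` returns `False` on Intel Macs and on macOS versions before 12.3, the same script runs unchanged everywhere.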
## Step 5: Install Python Dependencies

Some pinned dependency versions in the original requirements.txt do not have ARM64 wheels. Install compatible versions instead:

```bash
conda install -y numpy scikit-image -c conda-forge
pip install opencv-python
pip install face-alignment
pip install pyzmq msgpack-numpy pyyaml requests
```

Note: `pyfakewebcam` is Linux-only (it depends on v4l2). On macOS, you will use OBS Studio for virtual camera output (see Step 8).
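After installing, a quick import check catches missing or broken wheels early. The module list below is an assumption derived from the packages installed above (import names often differ from pip package names), not something shipped with Avatarify:

```python
import importlib

# Import names corresponding to the packages installed in Step 5.
required = ["numpy", "skimage", "cv2", "face_alignment",
            "zmq", "msgpack_numpy", "yaml", "requests"]

missing = []
for name in required:
    try:
        importlib.import_module(name)
    except ImportError:
        missing.append(name)

if missing:
    print("Missing or broken:", ", ".join(missing))
else:
    print("All dependencies import cleanly")
```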
## Step 6: Clone the First Order Model

```bash
git clone https://github.com/alievk/first-order-model.git fomm
```

## Step 7: Download the Model Checkpoint

Download vox-adv-cpk.pth.tar (228 MB) from one of these mirrors:

Place the file in the avatarify-python root directory (do not unpack it):

```bash
# If you downloaded it to ~/Downloads:
mv ~/Downloads/vox-adv-cpk.pth.tar .
```

Verify the checksum (optional):

```bash
md5 vox-adv-cpk.pth.tar
# Expected: 8a45a24037871c045fbb8a6a8aa95ebc
```

## Step 8: Set Up the Virtual Camera (OBS Studio)

Since CamTwist may not work reliably on Apple Silicon, use OBS Studio's built-in virtual camera:
- Download and install OBS Studio (the Apple Silicon native build).
- Open OBS Studio.
- In the Sources section, click +, select Window Capture, and choose the `avatarify` window.
- Go to Edit > Transform > Fit to screen.
- Click Start Virtual Camera in the bottom-right of OBS.
- The OBS Virtual Camera will now be available as a camera input in Zoom, Teams, Slack, etc.
## Step 9: Run Avatarify

The existing run_mac.sh script enforces remote-only mode for macOS. To run locally on Apple Silicon, launch it directly:
```bash
conda activate avatarify
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/fomm

python afy/cam_fomm.py \
    --config fomm/config/vox-adv-256.yaml \
    --checkpoint vox-adv-cpk.pth.tar \
    --relative \
    --adapt_scale \
    --no-pad \
    --is-client
```

**Important:** The current codebase requires `--is-client` on macOS. If you want to run fully locally without a remote server, you will need to comment out the Darwin platform check in afy/cam_fomm.py (lines 22-26):

```python
# if _platform == 'darwin':
#     if not opt.is_client:
#         info('\nOnly remote GPU mode is supported for Mac ...')
#         exit()
```

Then run without `--is-client`:

```bash
python afy/cam_fomm.py \
    --config fomm/config/vox-adv-256.yaml \
    --checkpoint vox-adv-cpk.pth.tar \
    --relative \
    --adapt_scale \
    --no-pad
```
Two windows will appear:
- `cam` — shows your face position for calibration
- `avatarify` — shows the animated avatar preview
See the main README controls section for keyboard shortcuts.
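The export-and-run steps above can be wrapped in a small launcher. The helper below is a hypothetical convenience script, not part of the repo; it builds the same `PYTHONPATH` and command line as the shell commands in Step 9:

```python
import os
import subprocess
import sys
from pathlib import Path

def build_launch(repo_root):
    """Build the command and environment for a local cam_fomm.py run."""
    root = Path(repo_root).resolve()
    env = os.environ.copy()
    # Equivalent of: export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/fomm
    env["PYTHONPATH"] = os.pathsep.join(
        p for p in [env.get("PYTHONPATH", ""), str(root), str(root / "fomm")] if p
    )
    cmd = [
        sys.executable, "afy/cam_fomm.py",
        "--config", "fomm/config/vox-adv-256.yaml",
        "--checkpoint", "vox-adv-cpk.pth.tar",
        "--relative", "--adapt_scale", "--no-pad",
    ]
    return cmd, env

def launch(repo_root):
    """Run Avatarify from the repo root with the environment set up."""
    cmd, env = build_launch(repo_root)
    return subprocess.run(cmd, cwd=repo_root, env=env)
```

Run it as `launch("/path/to/avatarify-python")` from the activated conda environment; keeping `build_launch` separate makes the command easy to inspect before running.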
## Troubleshooting

### CondaError: Run 'conda init' before 'conda activate'

Run `conda init zsh` (or `conda init bash`), then close and reopen your terminal. Alternatively, run `source ~/.zshrc` to reload without restarting. See Step 1.
### MPS is not available

- Make sure you are on macOS 12.3 or later: run `sw_vers`.
- Make sure you installed the ARM64 (not x86) conda: `python -c "import platform; print(platform.machine())"` should print `arm64`.
- Make sure PyTorch is version 1.12 or later: `python -c "import torch; print(torch.__version__)"`.
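The checks above can be combined into a single script. This is a hypothetical helper (not shipped with Avatarify) that reports architecture, macOS version, and PyTorch/MPS status in one pass:

```python
import platform

def check_environment():
    """Collect the facts that determine whether MPS can work on this machine."""
    report = {
        "machine": platform.machine(),       # want: arm64 (not x86_64)
        "macos": platform.mac_ver()[0],      # want: 12.3 or later
        "python": platform.python_version(),
    }
    try:
        import torch
        report["torch"] = torch.__version__  # want: 1.12 or later
        mps = getattr(torch.backends, "mps", None)
        report["mps"] = bool(mps and mps.is_available())
    except ImportError:
        report["torch"] = "not installed"
        report["mps"] = False
    return report

for key, value in check_environment().items():
    print(f"{key}: {value}")
```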
### Import errors

Make sure you set PYTHONPATH before running:

```bash
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/fomm
```

If opencv-python fails to import, try:

```bash
pip uninstall opencv-python
pip install opencv-python-headless
```

### Low performance

- MPS acceleration is available but may not match CUDA performance. Expect approximately 5-15 FPS depending on your chip.
- Close other GPU-intensive applications.
- M1 Pro/Max/Ultra and newer chips will perform better than the base M1.
### face-alignment or dlib fails to install

```bash
pip install face-alignment --no-deps
pip install scipy dlib
```

If dlib fails to build, install CMake first:

```bash
brew install cmake
pip install dlib
```

## Performance Expectations

| Chip | Approximate FPS |
|---|---|
| M1 | 5-10 |
| M1 Pro/Max | 10-15 |
| M2 / M2 Pro | 10-18 |
| M3 / M3 Pro | 12-20 |
| M4 / M4 Pro | 15-25 |
These are rough estimates. Actual performance depends on resolution, background processes, and PyTorch/MPS optimizations at the time of use.
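To see where your machine actually lands in this table, you can time the per-frame step directly. The helper below is a generic sketch, not part of Avatarify; pass it any callable that processes one frame:

```python
import time

def measure_fps(step, warmup=3, iters=30):
    """Time a per-frame callable and return average frames per second."""
    for _ in range(warmup):   # let caches and lazy initialization settle
        step()
    start = time.perf_counter()
    for _ in range(iters):
        step()
    elapsed = time.perf_counter() - start
    return iters / elapsed

# Example with a dummy 20 ms "frame" (roughly 50 FPS):
print(f"{measure_fps(lambda: time.sleep(0.02)):.1f} FPS")
```

Wrapping the model's single-frame inference call in such a timer gives a more honest number than eyeballing the preview window, since warmup iterations exclude one-time startup cost.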