AdjointDiffusion is a physics-guided, fabrication-aware structural optimization method that augments diffusion models with adjoint gradients. By combining powerful generative models with adjoint sensitivity analysis, it discovers complex, high-performance designs more efficiently than traditional methods.
This code accompanies the paper "Physics-Guided and Fabrication-Aware Inverse Design of Photonic Devices Using Diffusion Models" (ACS Photonics).
- TL;DR
- Intuitive Explanation of Diffusion Models
- Installation
- Quick Start
- Customize Your Simulation
- Experiment Logging with Weights & Biases
- Results
- Code Organization
- Citation
✨ Integrating adjoint sensitivity analysis with diffusion models can generate high-performance and interesting structures!
Key features:
- Adjoint Sensitivity Integration: Seamlessly incorporates adjoint gradients into the diffusion process.
- Fabrication Constraints: Accounts for manufacturability, ensuring real-world feasibility.
- Extensibility: Plug in your own datasets or simulations.
- Experiment Tracking & Visualization: Integrates with Weights & Biases.
Imagine an ink drop falling into water — it slowly spreads and dissolves. Diffusion models mimic this process in reverse: they start from noise and slowly form meaningful structures. By guiding this "reverse diffusion" with gradients from an adjoint method, we ensure the final designs are optimized and fabrication-ready.
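As a schematic illustration (not the repository's actual sampler), the guided reverse process alternates a denoising step with a small gradient nudge from the adjoint method. Everything below is a hypothetical stand-in: `denoise_step`, `adjoint_grad`, the target, and the constants are toy choices for intuition only.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x):
    # Hypothetical stand-in for one learned reverse-diffusion step:
    # shrink the sample toward the data manifold (here, simply zero).
    return 0.9 * x

def adjoint_grad(x, target):
    # Toy "adjoint" gradient of the figure of merit FoM = -||x - target||^2.
    return -2.0 * (x - target)

target = np.ones(8)             # toy "optimal design"
x = rng.standard_normal(8)      # start from pure noise
x0 = x.copy()
eta = 0.05                      # guidance strength
for _ in range(50):
    x = denoise_step(x)                    # reverse-diffusion denoising
    x = x + eta * adjoint_grad(x, target)  # physics-guided nudge
```

Each iteration removes a bit of noise while the adjoint gradient steers the sample toward designs with a higher figure of merit.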
This setup ensures compatibility between Meep and PyTorch. If you find any alternatives, feel free to contribute improvements via pull requests!
```
git clone https://github.com/dongjin-seo2020/AdjointDiffusion.git
cd AdjointDiffusion
```

To create and activate the recommended environment with the necessary dependencies:

```
conda create -n adjoint_diffusion -c conda-forge pymeep pymeep-extras python=3.9
conda activate adjoint_diffusion
```

Install torch with the following command (recommended):

```
pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 --index-url https://download.pytorch.org/whl/cu117
```

Note: It has been observed that newer NVIDIA GPUs (e.g., RTX 5090) may not be compatible with this specific version of PyTorch. If you encounter issues, please refer to the official PyTorch installation guide to find a version compatible with your hardware: https://pytorch.org/get-started/locally/

Install the required packages listed in `requirements.txt`:

```
pip install -r requirements.txt
```

If you encounter permission-related issues when trying to run the training script, make sure it is executable by running:

```
chmod +x 01-train.sh
```

Then, you can execute it with:

```
./01-train.sh
```

If you encounter errors while installing mpi4py, try the following steps (make sure you have root access when using apt):

```
apt --fix-broken install
apt install mpich
pip install mpi4py
```
- Generate a dataset:
```
python dataset_generation.py
```

- The data will be saved at `datasets/<n>/sigma<k>/struct/`, where `n` is the structure dimension (e.g., `n=64` generates 64×64 binary structures) and `k` is the variance of the Gaussian filter (a larger `k` increases the minimum feature size).
- Note: To reproduce the conditions in the paper, run the code for `k=2`, `k=5`, and `k=8` (three times). Or you can download our pretrained network from https://zenodo.org/records/15399997
- Note: You can also use your own dataset here! Provide your fabrication-satisfying image dataset and train the diffusion model with it!
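For intuition, here is a minimal sketch of how a fabrication-constrained binary structure could be produced (an illustration, not the actual `dataset_generation.py` logic): smooth random noise with a Gaussian filter, then binarize. The filter width `sigma` plays the role of `k` above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_binary_structure(n=64, sigma=2.0, seed=0):
    """Hypothetical sketch: Gaussian-smoothed noise, thresholded to {0, 1}.

    A larger sigma yields a smoother field and hence a larger
    minimum feature size in the binarized structure.
    """
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal((n, n)), sigma=sigma)
    return (field > 0).astype(np.uint8)

structure = random_binary_structure(n=64, sigma=2.0)
```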
- Update the training and sampling scripts to specify the appropriate output directories.
- For example, for `train.sh`, you should specify the variables as:

```
DATA_DIR=/path/to/datasets
LOG_DIR=/path/to/experiments
GPU_ID=0
```

- Or, you can set environment variables (Linux/macOS):

```
export DATA_DIR=/path/to/datasets
export LOG_DIR=/path/to/experiments
export GPU_ID=0
```

- For detailed usage examples, including training and sampling with actual settings, see the provided scripts and notebooks (e.g., `01-train.sh`, `02-train.ipynb`).
- Train a diffusion model:
```
./01-train.sh
```

- Alternatively: run `02-train.ipynb`
- Note: Set `--class_cond` to `False` if your dataset contains only a single structural condition (i.e., no class conditioning needed). If you have multiple structural conditions (e.g., different fabrication constraints), set it to `True` to enable class-conditional training.
- Note: The training process will continue indefinitely unless manually stopped. In our setup, training for around 25,000 steps produced satisfactory results, though fewer steps may also be sufficient. If you are using a customized dataset, the optimal number of steps may vary.
- Sample and optimize structures:
```
./01-sample.sh
```

- Alternatively: run `02-sample.ipynb`
- Note: Set `--class_cond` to `False` if your dataset contains only a single structural condition (i.e., no class conditioning needed). If you have multiple structural conditions (e.g., different fabrication constraints), set it to `True` to enable class-conditional sampling.
- Note: We recommend using the checkpoint named `ema_0.9999_*.pt`, where the number in `*` is the training step and `ema` stands for Exponential Moving Average.
- View outputs
- Every output (performance, structure) is logged in wandb.
- Logs and generated structures are saved in `./logs/<run_name>`
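A long run can leave several `ema_0.9999_*.pt` checkpoints behind. A small helper like the following (hypothetical, not part of the repository) can pick the one from the latest training step:

```python
import re
from pathlib import Path

def latest_ema_checkpoint(log_dir):
    """Return the ema_0.9999_*.pt file with the highest step number."""
    candidates = []
    for path in Path(log_dir).glob("ema_0.9999_*.pt"):
        match = re.search(r"ema_0\.9999_(\d+)\.pt$", path.name)
        if match:
            candidates.append((int(match.group(1)), path))
    if not candidates:
        raise FileNotFoundError(f"no EMA checkpoints found in {log_dir}")
    return max(candidates)[1]
```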
- Baseline Algorithms
We provide baseline algorithms in the `./baseline_algorithms` directory. These include NLopt methods such as MMA for comparison.
If you'd like to integrate a custom physical simulation into the reverse diffusion process, follow these steps:
- Implement your simulation class in `guided_diffusion/simulation.py`.

  Create a class that defines how to compute the figure of merit (FoM) and its corresponding adjoint gradient. For example:

  ```python
  class YourSimClass:
      def __init__(self, **kwargs):          # your simulation parameters
          ...

      def compute_fom(self, structure):      # return the figure of merit
          ...

      def compute_adjoint(self, structure):  # return dFoM/dstructure
          ...
  ```
- Update the import in `guided_diffusion/gaussian_diffusion.py`:

  Replace the existing simulation import with your custom class:

  ```python
  from guided_diffusion.simulation import YourSimClass
  ```
- Plug your simulation into the sampling loop in `guided_diffusion/gaussian_diffusion.py`.

  In `guided_diffusion/gaussian_diffusion.py`, locate where `simulation_()` is called (typically inside the `p_sample()` function), and replace it with your custom simulation logic. Make sure your class is initialized properly and passed via `my_kwargs`. For example, for `my_kwargs`:

  ```python
  my_kwargs = {
      "sim_guided": True,
      "simulation_": YourSimClass(...),
      "eta": 0.05,
      "inter_rate": 25,
      "stoptime": 0.1,
      "guidance_type": "dps",  # or "dds"
      "exp_name": "experiment1",
      ...
  }
  ```

  Now, your custom simulation will be used during the reverse diffusion process.
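As a concrete toy illustration of this interface (not a real Meep simulation), here is a hypothetical class whose FoM is a simple quadratic, so its adjoint gradient is known analytically:

```python
import numpy as np

class ToySimulation:
    """Hypothetical drop-in for YourSimClass with FoM = -||x - target||^2."""

    def __init__(self, target):
        self.target = np.asarray(target, dtype=float)

    def compute_fom(self, structure):
        # Figure of merit: higher is better, maximized at structure == target.
        return -float(np.sum((structure - self.target) ** 2))

    def compute_adjoint(self, structure):
        # Analytic gradient of the FoM with respect to the structure.
        return -2.0 * (structure - self.target)

sim = ToySimulation(target=np.ones((4, 4)))
```

A finite-difference check against `compute_fom` is a cheap way to validate `compute_adjoint` before plugging a class into the sampling loop.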
We use wandb for logging and visualization.
- Sign up at wandb.ai
- Log in:
```
wandb login
```

- Run any training/sampling script and it will automatically log data to wandb.
We visualize the performance of AdjointDiffusion across different tasks and configurations.
```
AdjointDiffusion/
├── dataset_generation.py    # Dataset generation script
├── image_train.py           # Main training script
├── image_sample.py          # Main sampling script
├── requirements.txt         # Python dependencies
├── guided_diffusion/        # Backend of diffusion models
└── baseline_algorithms/     # Baseline algorithms (nlopt, Gradient Ascent)
```
If you use this code, please cite the following paper:
```bibtex
@article{Seo2026DiffusionPhotonics,
  title   = {Physics-Guided and Fabrication-Aware Inverse Design of Photonic Devices Using Diffusion Models},
  author  = {Seo, Dongjin and Um, Soobin and Lee, Sangbin and Ye, Jong Chul and Chung, Haejun},
  journal = {ACS Photonics},
  year    = {2026},
  doi     = {10.1021/acsphotonics.5c00993},
  url     = {https://doi.org/10.1021/acsphotonics.5c00993}
}
```

Parts of this repository are adapted from OpenAI's guided-diffusion, which is licensed under the MIT License.
We thank the OpenAI team for their contribution. Significant modifications have been made to enable adjoint sensitivity integration and fabrication-aware optimization.
Happy Diffusing & Optimizing!







