|
150 | 150 | 2, |
151 | 151 | None, |
152 | 152 | 'diffusion-models-for-hierarchical-vae-from-url-https-arxiv-org-abs-2208-11970'), |
153 | | - ('Equation for the Markovian hierarchical VAE', |
154 | | - 2, |
155 | | - None, |
156 | | - 'equation-for-the-markovian-hierarchical-vae'), |
157 | 153 | ('Variational Diffusion Models', |
158 | 154 | 2, |
159 | 155 | None, |
|
166 | 162 | ('Encoder transitions', 2, None, 'encoder-transitions'), |
167 | 163 | ('Third assumption', 2, None, 'third-assumption'), |
168 | 164 | ('Noisification', 2, None, 'noisification'), |
169 | | - ('Diffusion models, from URL:"https://arxiv.org/abs/2208.11970"', |
170 | | - 2, |
171 | | - None, |
172 | | - 'diffusion-models-from-url-https-arxiv-org-abs-2208-11970'), |
173 | 165 | ('Gaussian modeling', 2, None, 'gaussian-modeling'), |
174 | 166 | ('Optimizing the variational diffusion model', |
175 | 167 | 2, |
|
270 | 262 | <!-- navigation toc: --> <li><a href="#mathematical-representation" style="font-size: 80%;">Mathematical representation</a></li> |
271 | 263 | <!-- navigation toc: --> <li><a href="#back-to-the-marginal-probability" style="font-size: 80%;">Back to the marginal probability</a></li> |
272 | 264 | <!-- navigation toc: --> <li><a href="#diffusion-models-for-hierarchical-vae-from-url-https-arxiv-org-abs-2208-11970" style="font-size: 80%;">Diffusion models for hierarchical VAE, from URL:"https://arxiv.org/abs/2208.11970"</a></li> |
273 | | - <!-- navigation toc: --> <li><a href="#equation-for-the-markovian-hierarchical-vae" style="font-size: 80%;">Equation for the Markovian hierarchical VAE</a></li> |
274 | 265 | <!-- navigation toc: --> <li><a href="#variational-diffusion-models" style="font-size: 80%;">Variational Diffusion Models</a></li> |
275 | 266 | <!-- navigation toc: --> <li><a href="#second-assumption" style="font-size: 80%;">Second assumption</a></li> |
276 | 267 | <!-- navigation toc: --> <li><a href="#parameterizing-gaussian-encoder" style="font-size: 80%;">Parameterizing Gaussian encoder</a></li> |
277 | 268 | <!-- navigation toc: --> <li><a href="#encoder-transitions" style="font-size: 80%;">Encoder transitions</a></li> |
278 | 269 | <!-- navigation toc: --> <li><a href="#third-assumption" style="font-size: 80%;">Third assumption</a></li> |
279 | 270 | <!-- navigation toc: --> <li><a href="#noisification" style="font-size: 80%;">Noisification</a></li> |
280 | | - <!-- navigation toc: --> <li><a href="#diffusion-models-from-url-https-arxiv-org-abs-2208-11970" style="font-size: 80%;">Diffusion models, from URL:"https://arxiv.org/abs/2208.11970"</a></li> |
281 | 271 | <!-- navigation toc: --> <li><a href="#gaussian-modeling" style="font-size: 80%;">Gaussian modeling</a></li> |
282 | 272 | <!-- navigation toc: --> <li><a href="#optimizing-the-variational-diffusion-model" style="font-size: 80%;">Optimizing the variational diffusion model</a></li> |
283 | 273 | <!-- navigation toc: --> <li><a href="#continues" style="font-size: 80%;">Continues</a></li> |
@@ -1373,9 +1363,10 @@ <h2 id="diffusion-learning" class="anchor">Diffusion learning </h2> |
1373 | 1363 | <h2 id="mathematics-of-diffusion-models" class="anchor">Mathematics of diffusion models </h2> |
1374 | 1364 |
|
1375 | 1365 | <p>Let us go back to our discussion of the variational autoencoders from
1376 | | -last week, see |
1377 | | -<a href="https://github.com/CompPhysics/AdvancedMachineLearning/blob/main/doc/pub/week12/ipynb/week12.ipynb" target="_self"><tt>https://github.com/CompPhysics/AdvancedMachineLearning/blob/main/doc/pub/week12/ipynb/week12.ipynb</tt></a>. As |
1378 | | -a first attempt at understanding diffusion models, we can think of |
| 1366 | +last week and above. |
| 1367 | +</p> |
| 1368 | + |
| 1369 | +<p>As a first attempt at understanding diffusion models, we can think of |
1379 | 1370 | these as stacked VAEs, or better, recursive VAEs. |
1380 | 1371 | </p> |
1381 | 1372 |
|
@@ -1438,15 +1429,6 @@ <h2 id="diffusion-models-for-hierarchical-vae-from-url-https-arxiv-org-abs-2208- |
1438 | 1429 | \( \boldsymbol{h}_{t+1} \). Here \( \boldsymbol{z} \) is our latent variable \( \boldsymbol{h} \). |
1439 | 1430 | </p> |
1440 | 1431 |
|
1441 | | -<br/><br/> |
1442 | | -<center> |
1443 | | -<p><img src="figures/figure1.png" width="800" align="bottom"></p> |
1444 | | -</center> |
1445 | | -<br/><br/> |
1446 | | - |
1447 | | -<!-- !split --> |
1448 | | -<h2 id="equation-for-the-markovian-hierarchical-vae" class="anchor">Equation for the Markovian hierarchical VAE </h2> |
1449 | | - |
1450 | 1432 | <p>We obtain then</p> |
1451 | 1433 | $$ |
1452 | 1434 | \begin{align*} |
@@ -1553,15 +1535,6 @@ <h2 id="noisification" class="anchor">Noisification </h2> |
1553 | 1535 | identical to pure Gaussian noise. See figure on next slide. |
1554 | 1536 | </p> |
1555 | 1537 |
|
1556 | | -<!-- !split --> |
1557 | | -<h2 id="diffusion-models-from-url-https-arxiv-org-abs-2208-11970" class="anchor">Diffusion models, from <a href="https://arxiv.org/abs/2208.11970" target="_self"><tt>https://arxiv.org/abs/2208.11970</tt></a> </h2> |
1558 | | - |
1559 | | -<br/><br/> |
1560 | | -<center> |
1561 | | -<p><img src="figures/figure2.png" width="800" align="bottom"></p> |
1562 | | -</center> |
1563 | | -<br/><br/> |
1564 | | - |
1565 | 1538 | <!-- !split --> |
1566 | 1539 | <h2 id="gaussian-modeling" class="anchor">Gaussian modeling </h2> |
1567 | 1540 |
|
@@ -1655,8 +1628,13 @@ <h2 id="more-details" class="anchor">More details </h2> |
1655 | 1628 |
|
1656 | 1629 | <p>For more details and implementations, see Calvin Luo at <a href="https://arxiv.org/abs/2208.11970" target="_self"><tt>https://arxiv.org/abs/2208.11970</tt></a></p>
1657 | 1630 |
|
1658 | | -<!-- FIGURE: [figures/figure4.png, width=800 frac=1.0] --> |
1659 | | - |
| 1631 | +<p>The plans for the rest of the semester are</p> |
| 1632 | +<ol> |
| 1633 | +<li> May 1: public holiday</li> |
| 1634 | +<li> May 8: More on diffusion models and introduction to GANs. Project work</li> |
| 1635 | <li> May 15: GANs and a summary of the course. Project work. This could also be the last lecture</li>
| 1636 | <li> May 22: Alternatively, we could briefly discuss reinforcement learning and summarize the course. Project work.</li>
| 1637 | +</ol> |
1660 | 1638 | <!-- ------------------- end of main content --------------- --> |
1661 | 1639 | </div> <!-- end container --> |
1662 | 1640 | <!-- include javascript, jQuery *first* --> |
|