add: notebook workflow tests to execute code (#76)
## Changes
- Add a CI workflow that runs every notebook end-to-end in a Modal GPU function.
- Standard model-loading workflows run on small GPUs; training scripts run on larger GPUs in a modified form with `0.01` epochs, just enough to verify that training actually executes.
A README.md in the util folder explains how the commands are used. We introduced skip commands that can exclude an entire notebook, or specific cells within a notebook, when required. The util folder also contains `modal_runner.py`, which holds the actual Modal function deployed to Modal; a minimal sketch of such a runner is shown below.
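
To make the structure concrete, here is a minimal sketch of what a notebook runner on Modal can look like. Everything specific in it is an assumption for illustration, not the actual contents of `util/modal_runner.py`: the app name, the `T4` GPU, the image packages, the `skip-ci` tag convention, and the use of `jupyter nbconvert` for execution.

```python
# Hypothetical sketch only: app name, GPU type, image contents, skip tag, and
# the nbconvert execution strategy are illustrative assumptions, not the
# actual contents of util/modal_runner.py.
import pathlib
import subprocess

import modal

app = modal.App("notebook-ci-sketch")

# Image with the Jupyter tooling needed to execute a notebook headlessly.
image = modal.Image.debian_slim().pip_install(
    "jupyter", "nbconvert", "nbformat", "ipykernel"
)


@app.function(gpu="T4", image=image, timeout=3600)
def run_notebook(notebook_source: str, name: str) -> None:
    """Write the notebook to disk, drop skipped cells, and execute it.

    A failing cell makes nbconvert exit non-zero, which raises
    CalledProcessError and fails the Modal call (and hence the CI job).
    """
    import nbformat  # imported inside the function so it resolves in the image

    path = pathlib.Path("/tmp") / name
    path.write_text(notebook_source)

    # Remove cells carrying a (hypothetical) "skip-ci" tag before execution.
    nb = nbformat.read(str(path), as_version=4)
    nb.cells = [
        c for c in nb.cells if "skip-ci" not in c.metadata.get("tags", [])
    ]
    nbformat.write(nb, str(path))

    subprocess.run(
        ["jupyter", "nbconvert", "--to", "notebook", "--execute",
         "--output", f"executed_{name}", str(path)],
        check=True,
    )


@app.local_entrypoint()
def main(notebook: str) -> None:
    # Read the notebook locally and run it remotely on the GPU.
    p = pathlib.Path(notebook)
    run_notebook.remote(notebook_source=p.read_text(), name=p.name)
```

Invoked as `modal run <sketch>.py --notebook notebooks/Some_Notebook.ipynb`, a failing cell fails the remote call, which is what lets CI gate on notebook health.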
**notebooks/LFM2_Inference_with_Ollama.ipynb** (+10 −2)
```diff
@@ -3,7 +3,13 @@
   {
    "cell_type": "markdown",
    "metadata": {},
-   "source": "# 💧 LFM2 Inference with Ollama\n\nThis notebook demonstrates how to use the [Ollama](https://ollama.com) API to run [LFM2](https://huggingface.co/collections/LiquidAI/lfm2-67d775f3b4b6fe79fbb21bda) and [LFM2.5](https://huggingface.co/collections/LiquidAI/lfm25-6839e3e26b2a9fdbde95b341) models.\n\n> ⚠️ **Note:** Ollama is intended to run locally on your machine. This notebook shows the Python and curl API usage to get Ollama running in Colab. Install Ollama from [ollama.com/download](https://ollama.com/download) and follow the [Liquid Docs](https://docs.liquid.ai/docs/inference/ollama) to get started. Also, right now LFM VL models are currently not working with ollama, we have an [open PR](https://github.com/ollama/ollama/pull/14069) to resolve this quickly."
+   "source": [
+    "# 💧 LFM2 Inference with Ollama\n",
+    "\n",
+    "This notebook demonstrates how to use the [Ollama](https://ollama.com) API to run [LFM2](https://huggingface.co/collections/LiquidAI/lfm2-67d775f3b4b6fe79fbb21bda) and [LFM2.5](https://huggingface.co/collections/LiquidAI/lfm25-6839e3e26b2a9fdbde95b341) models.\n",
+    "\n",
+    "> ⚠️ **Note:** Ollama is intended to run locally on your machine. This notebook shows the Python and curl API usage to get Ollama running in Colab. Install Ollama from [ollama.com/download](https://ollama.com/download) and follow the [Liquid Docs](https://docs.liquid.ai/docs/inference/ollama) to get started. Also, right now LFM VL models are currently not working with ollama, we have an [open PR](https://github.com/ollama/ollama/pull/14069) to resolve this quickly."
```