1 change: 1 addition & 0 deletions doc/_toc.yml
@@ -14,6 +14,7 @@ chapters:
- file: setup/1c_install_conda
- file: setup/jupyter_setup
- file: setup/populating_secrets
- file: setup/pyrit_conf
- file: setup/use_azure_sql_db
- file: contributing/README
sections:
1 change: 1 addition & 0 deletions doc/api.rst
@@ -587,6 +587,7 @@ API Reference
PlagiarismScorer
PromptShieldScorer
QuestionAnswerScorer
RefusalScorerPaths
RegistryUpdateBehavior
Scorer
ScorerEvalDatasetFiles
246 changes: 144 additions & 102 deletions doc/code/setup/1_configuration.ipynb

Large diffs are not rendered by default.

21 changes: 17 additions & 4 deletions doc/code/setup/1_configuration.py
@@ -5,7 +5,7 @@
# extension: .py
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.17.3
# jupytext_version: 1.19.1
# ---

# %% [markdown]
@@ -19,9 +19,22 @@
# 2. Pick a database (required)
# 3. Set initialization scripts and defaults (recommended)
#
# Alternatively, you can write a config file (`~/.pyrit/.pyrit_conf`) to parameterize this for you.

# %% [markdown]
# ## From a Config File
# If you don't want to set up PyRIT explicitly but have a configuration you would like to persist, use `~/.pyrit/.pyrit_conf`. See the [PyRIT Configuration Guide](../../setup/pyrit_conf.md) for more details. Note that changes to the config file are not picked up automatically at runtime, so you will need to rerun `initialize_from_config_async` after each change to the file.

# %%
# You can specify your own path for the config file using the `config_path` parameter
from pyrit.setup.configuration_loader import initialize_from_config_async

await initialize_from_config_async() # type: ignore
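As a quick stdlib-only sketch (no PyRIT import; the path is the default location named above, and its contents are described in the PyRIT Configuration Guide), you can check whether a persisted config exists before deciding how to initialize:

```python
from pathlib import Path

# Default config location PyRIT reads from, per the docs above
config_path = Path.home() / ".pyrit" / ".pyrit_conf"

if config_path.exists():
    print(f"Found persisted config at {config_path}")
else:
    print("No config file found; initialize PyRIT explicitly instead.")
```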

# %% [markdown]
# ## Simple Example
#
# This section goes into each of these steps. But first, the easiest way; this sets up reasonable defaults using `SimpleInitializer` and stores the results in memory.
# This section goes into each of the three steps mentioned earlier. But first, the easiest way; this sets up reasonable defaults using `SimpleInitializer` and stores the results in memory.

# %%
# Set OPENAI_CHAT_ENDPOINT, OPENAI_CHAT_MODEL, and OPENAI_CHAT_KEY environment variables before running this code
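As a hedged sketch of the step above, the three required variables can also be set in-process for a quick experiment (the values below are placeholders, not real credentials; `setdefault` leaves any existing environment values untouched):

```python
import os

# Placeholder values -- substitute your own endpoint, model, and key
os.environ.setdefault("OPENAI_CHAT_ENDPOINT", "https://<your-resource>.openai.azure.com/")
os.environ.setdefault("OPENAI_CHAT_MODEL", "<your-model-deployment>")
os.environ.setdefault("OPENAI_CHAT_KEY", "<your-api-key>")

required = ("OPENAI_CHAT_ENDPOINT", "OPENAI_CHAT_MODEL", "OPENAI_CHAT_KEY")
missing = [name for name in required if name not in os.environ]
print("All required variables set" if not missing else f"Missing: {missing}")
```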
@@ -133,9 +146,9 @@
# Alternative approach: pass the path of the script that defines the initializer class.
# This is how you provide your own file, not part of the repo, that defines a PyRITInitializer class.
# This is equivalent to loading the class directly as above
await initialize_pyrit_async( # type: ignore
await initialize_pyrit_async(
memory_db_type="InMemory", initialization_scripts=[f"{PYRIT_PATH}/setup/initializers/simple.py"]
)
) # type: ignore


# SimpleInitializer is a class that initializes sensible defaults for someone who only has OPENAI_CHAT_ENDPOINT, OPENAI_CHAT_MODEL, and OPENAI_CHAT_KEY configured
120 changes: 112 additions & 8 deletions doc/code/targets/4_openai_video_target.ipynb
@@ -7,15 +7,28 @@
"source": [
"# 4. OpenAI Video Target\n",
"\n",
"This example shows how to use the video target to create a video from a text prompt.\n",
"`OpenAIVideoTarget` supports three modes:\n",
"- **Text-to-video**: Generate a video from a text prompt.\n",
"- **Remix**: Create a variation of an existing video (using `video_id` from a prior generation).\n",
"- **Text+Image-to-video**: Use an image as the first frame of the generated video.\n",
"\n",
"Note that the video scorer requires `opencv`, which is not installed as a default PyRIT dependency. You can install it with `pip install pyrit[opencv]`."
]
},
{
"cell_type": "markdown",
"id": "1",
"metadata": {},
"source": [
"## Text-to-Video\n",
"\n",
"This example shows the simplest mode: generating video from text prompts, with scoring."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1",
"id": "2",
"metadata": {},
"outputs": [
{
@@ -53,18 +66,18 @@
},
{
"cell_type": "markdown",
"id": "2",
"id": "3",
"metadata": {},
"source": [
"## Generating and scoring a video:\n",
"\n",
"Using the video target you can send prompts to generate a video. The video scorer can evaluate the video content itself. Note this section is simply scoring the **video** not the audio. "
"Using the video target you can send prompts to generate a video. The video scorer can evaluate the video content itself. Note that this section scores only the **video**, not the audio."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3",
"id": "4",
"metadata": {},
"outputs": [
{
@@ -448,7 +461,7 @@
},
{
"cell_type": "markdown",
"id": "4",
"id": "5",
"metadata": {},
"source": [
"## Scoring video and audio **together**:\n",
@@ -461,7 +474,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "5",
"id": "6",
"metadata": {},
"outputs": [
{
@@ -661,11 +674,102 @@
")\n",
"\n",
"for result in results:\n",
" await ConsoleAttackResultPrinter().print_result_async(result=result, include_auxiliary_scores=True) # type: ignore"
" await ConsoleAttackResultPrinter().print_result_async(result=result, include_auxiliary_scores=True) # type: ignore\n",
"\n",
"# Capture video_id from the first result for use in the remix section below\n",
"video_id = results[0].last_response.prompt_metadata[\"video_id\"]\n",
"print(f\"Video ID for remix: {video_id}\")"
]
},
{
"cell_type": "markdown",
"id": "7",
"metadata": {},
"source": [
"## Remix (Video Variation)\n",
"\n",
"Remix creates a variation of an existing video. After any successful generation, the response\n",
"includes a `video_id` in `prompt_metadata`. Pass this back via `prompt_metadata={\"video_id\": \"<id>\"}` to remix."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8",
"metadata": {},
"outputs": [],
"source": [
"from pyrit.models import Message, MessagePiece\n",
"\n",
"# Remix using the video_id captured from the text-to-video section above\n",
"remix_piece = MessagePiece(\n",
" role=\"user\",\n",
" original_value=\"Make it a watercolor painting style\",\n",
" prompt_metadata={\"video_id\": video_id},\n",
")\n",
"remix_result = await video_target.send_prompt_async(message=Message([remix_piece])) # type: ignore\n",
"print(f\"Remixed video: {remix_result[0].message_pieces[0].converted_value}\")"
]
},
{
"cell_type": "markdown",
"id": "9",
"metadata": {},
"source": [
"## Text+Image-to-Video\n",
"\n",
"Use an image as the first frame of the generated video. The input image dimensions must match\n",
"the video resolution (e.g. 1280x720). Pass both a text piece and an `image_path` piece in the same message."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "10",
"metadata": {},
"outputs": [],
"source": [
"import tempfile\n",
"import uuid\n",
"\n",
"from PIL import Image\n",
"\n",
"from pyrit.common.path import HOME_PATH\n",
"\n",
"# Create a simple test image matching the video resolution (1280x720)\n",
"sample_image = HOME_PATH / \"assets\" / \"pyrit_architecture.png\"\n",
"resized = Image.open(sample_image).resize((1280, 720)).convert(\"RGB\")\n",
"\n",
"# Write the resized frame to a temporary JPEG file\n",
"tmp = tempfile.NamedTemporaryFile(suffix=\".jpg\", delete=False)\n",
"resized.save(tmp, format=\"JPEG\")\n",
"tmp.close()\n",
"image_path = tmp.name\n",
"\n",
"# Send text + image to the video target\n",
"i2v_target = OpenAIVideoTarget()\n",
"conversation_id = str(uuid.uuid4())\n",
"\n",
"text_piece = MessagePiece(\n",
" role=\"user\",\n",
" original_value=\"Animate this image with gentle camera motion\",\n",
" conversation_id=conversation_id,\n",
")\n",
"image_piece = MessagePiece(\n",
" role=\"user\",\n",
" original_value=image_path,\n",
" converted_value_data_type=\"image_path\",\n",
" conversation_id=conversation_id,\n",
")\n",
"result = await i2v_target.send_prompt_async(message=Message([text_piece, image_piece])) # type: ignore\n",
"print(f\"Text+Image-to-video result: {result[0].message_pieces[0].converted_value}\")"
]
}
],
"metadata": {
"jupytext": {
"main_language": "python"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
74 changes: 73 additions & 1 deletion doc/code/targets/4_openai_video_target.py
@@ -11,10 +11,18 @@
# %% [markdown]
# # 4. OpenAI Video Target
#
# This example shows how to use the video target to create a video from a text prompt.
# `OpenAIVideoTarget` supports three modes:
# - **Text-to-video**: Generate a video from a text prompt.
# - **Remix**: Create a variation of an existing video (using `video_id` from a prior generation).
# - **Text+Image-to-video**: Use an image as the first frame of the generated video.
#
# Note that the video scorer requires `opencv`, which is not installed as a default PyRIT dependency. You can install it with `pip install pyrit[opencv]`.

# %% [markdown]
# ## Text-to-Video
#
# This example shows the simplest mode: generating video from text prompts, with scoring.

# %%
from pyrit.executor.attack import (
AttackExecutor,
@@ -123,3 +131,67 @@

for result in results:
await ConsoleAttackResultPrinter().print_result_async(result=result, include_auxiliary_scores=True) # type: ignore

# Capture video_id from the first result for use in the remix section below
video_id = results[0].last_response.prompt_metadata["video_id"]
print(f"Video ID for remix: {video_id}")

# %% [markdown]
# ## Remix (Video Variation)
#
# Remix creates a variation of an existing video. After any successful generation, the response
# includes a `video_id` in `prompt_metadata`. Pass this back via `prompt_metadata={"video_id": "<id>"}` to remix.

# %%
from pyrit.models import Message, MessagePiece

# Remix using the video_id captured from the text-to-video section above
remix_piece = MessagePiece(
role="user",
original_value="Make it a watercolor painting style",
prompt_metadata={"video_id": video_id},
)
remix_result = await video_target.send_prompt_async(message=Message([remix_piece])) # type: ignore
print(f"Remixed video: {remix_result[0].message_pieces[0].converted_value}")

# %% [markdown]
# ## Text+Image-to-Video
#
# Use an image as the first frame of the generated video. The input image dimensions must match
# the video resolution (e.g. 1280x720). Pass both a text piece and an `image_path` piece in the same message.
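The resolution constraint above can be sketched as a small pre-flight check. This helper is hypothetical, not a PyRIT API; it simply encodes the rule that the first-frame image must match the video dimensions:

```python
# Hypothetical pre-flight check (not part of PyRIT): confirm an input image
# matches the target video resolution before sending it as a first frame.
def matches_video_resolution(size: tuple[int, int], video_size: tuple[int, int] = (1280, 720)) -> bool:
    return size == video_size

print(matches_video_resolution((1280, 720)))   # True
print(matches_video_resolution((1920, 1080)))  # False
```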

# %%
import tempfile
import uuid

from PIL import Image

from pyrit.common.path import HOME_PATH

# Create a simple test image matching the video resolution (1280x720)
sample_image = HOME_PATH / "assets" / "pyrit_architecture.png"
resized = Image.open(sample_image).resize((1280, 720)).convert("RGB")

# Write the resized frame to a temporary JPEG file
tmp = tempfile.NamedTemporaryFile(suffix=".jpg", delete=False)
resized.save(tmp, format="JPEG")
tmp.close()
image_path = tmp.name

# Send text + image to the video target
i2v_target = OpenAIVideoTarget()
conversation_id = str(uuid.uuid4())

text_piece = MessagePiece(
role="user",
original_value="Animate this image with gentle camera motion",
conversation_id=conversation_id,
)
image_piece = MessagePiece(
role="user",
original_value=image_path,
converted_value_data_type="image_path",
conversation_id=conversation_id,
)
result = await i2v_target.send_prompt_async(message=Message([text_piece, image_piece])) # type: ignore
print(f"Text+Image-to-video result: {result[0].message_pieces[0].converted_value}")