equicktruth-ctrl/multimodal-annotation-demo

Multimodal Annotation Demo – Image Project Overview

This project shows part of the work a remote Data Annotator would do for an AI research team: setting up an image‑classification project, labeling images with clear rules, and exporting annotations for model training or evaluation. Label Studio is used as the main annotation tool for this demo.

Project structure

  • data/images/ – sample images taken by the annotator.

  • labels/image_annotations.csv – exported labels from Label Studio in CSV format.

  • guidelines/labeling-guidelines.md – written labeling instructions used while annotating.

  • screenshots/ – step‑by‑step screenshots showing the annotation process.

  • env/ – optional Python virtual environment (not committed to Git).
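The optional env/ environment above can be set up with a standard venv workflow; a sketch, assuming a Unix shell (label-studio is the package's actual PyPI name):

```shell
# Create the virtual environment (kept out of Git, as noted above)
python3 -m venv env
source env/bin/activate

# Install Label Studio and launch the local web UI
pip install label-studio
label-studio start
```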

Annotation task

  • Tool: Label Studio, running locally in a Python virtual environment.

  • Project: Image Annotation – Demo.

  • Goal: Classify everyday photos into safety and content categories used by content‑moderation and perception models.

  • Labels used: Adult content, Weapons, Violence, Vehicle, Airplane, Airport Terminal.

  • Mode: Single‑label classification – one primary label per image, even if multiple concepts appear.
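A minimal Label Studio labeling config for this single‑label setup might look like the sketch below. The `Image`, `Choices`, and `Choice` tags are Label Studio's real template tags; the `name` values `image` and `label` are illustrative choices:

```xml
<View>
  <Image name="image" value="$image"/>
  <Choices name="label" toName="image" choice="single">
    <Choice value="Adult content"/>
    <Choice value="Weapons"/>
    <Choice value="Violence"/>
    <Choice value="Vehicle"/>
    <Choice value="Airplane"/>
    <Choice value="Airport Terminal"/>
  </Choices>
</View>
```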

Labeling workflow

  1. Start Label Studio with label-studio start from the project venv to open the web UI at http://localhost:8080.

  2. Create the Image Annotation – Demo project using the Image Classification template and configure the labels above.

  3. Import photos from data/images/ into the project.

  4. For each image, apply the guidelines from guidelines/labeling-guidelines.md and click Submit to save the annotation.

  5. After labeling, return to the task list and use Export → CSV to download annotations and save them as labels/image_annotations.csv.
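Once labels/image_annotations.csv exists, a quick tally of images per label can be sketched in Python. The "choice" column name is an assumption — Label Studio names the export column after the Choices tag — so adjust it to the real header:

```python
import csv
from collections import Counter

def count_labels(csv_path, label_column="choice"):
    """Tally how many images received each label in a Label Studio CSV export.

    label_column is an assumed header name; check the actual export file
    and pass the real column name if it differs.
    """
    with open(csv_path, newline="", encoding="utf-8") as f:
        return Counter(
            row[label_column]
            for row in csv.DictReader(f)
            if row.get(label_column)  # skip images submitted with no label
        )
```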

Labeling guidelines (summary)

Full details live in guidelines/labeling-guidelines.md. In short:

  • Choose the single label that best matches the main visible subject or risk in the image.

  • Use Vehicle for real full‑size vehicles (cars, trucks, etc.), and Airplane when an aircraft is the clear primary subject.

  • Do not label toys or decorations as vehicles; do not infer unseen weapons or violence.

  • If an image does not match any label (for example, mugs on a table), submit it with no label selected.
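As a sanity check after export, a small sketch can flag any recorded label outside the agreed set. The function name and the "choice" column are illustrative; empty labels pass because unmatched images are deliberately submitted unlabeled:

```python
# The six labels defined for this demo project
ALLOWED_LABELS = {
    "Adult content", "Weapons", "Violence",
    "Vehicle", "Airplane", "Airport Terminal",
}

def find_invalid_labels(rows, label_column="choice"):
    """Return (row_index, label) pairs whose label is outside the agreed set.

    rows is any iterable of dicts, e.g. from csv.DictReader; empty labels
    are skipped because unmatched images are submitted without a label.
    """
    return [
        (i, row[label_column])
        for i, row in enumerate(rows)
        if row.get(label_column) and row[label_column] not in ALLOWED_LABELS
    ]
```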

Screenshots for walkthrough

The screenshots/ folder contains:

  • Login and project‑dashboard views for Label Studio.

  • Label‑setup configuration screen with the custom label set.

  • Example labeling screens for several images.

  • Task list showing completed annotations and the export button.

These images help other readers understand the exact UI steps taken without needing to run the tool themselves.

Evaluation

AI output review: The evaluation/ai_output_review.csv file simulates how a data annotator would review model predictions against human labels and record correctness and feedback.
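One way to summarize such a review file is an agreement rate between model predictions and human labels. A sketch, assuming hypothetical human_label and model_label column names (adjust to the actual header of evaluation/ai_output_review.csv):

```python
import csv

def agreement_rate(csv_path, human_col="human_label", model_col="model_label"):
    """Fraction of rows where the model prediction matches the human label.

    Column names are assumptions; pass the real header names from the
    review CSV. Returns 0.0 for an empty file.
    """
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return 0.0
    matches = sum(1 for r in rows if r[human_col] == r[model_col])
    return matches / len(rows)
```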

Author: Erik V (Data Annotator)
