Disitha395ra/Heatmap-Visualization-Techniques-for-Identifying-AI-Edited-Image-Manipulations

Heatmap Visualization Techniques for Identifying AI-Edited Image Manipulations



Table of Contents

  • Summary
  • Features
  • Tech Stack
  • Installation Steps
  • Usage Instructions
  • Example Output
  • Contributing
  • License


Summary

This project provides a collection of Python-based heatmap visualization techniques designed to identify and pinpoint AI-edited or AI-generated manipulations within images. Leveraging various computer vision and explainable AI (XAI) methodologies, the tool aims to enhance transparency and aid in the detection of subtle, often imperceptible, alterations introduced by advanced AI models. It offers researchers, forensic analysts, and developers a robust framework to visualize regions of interest where AI interventions are most prominent.


Features

  • Diverse Heatmap Generation: Implementations of several heatmap-generation algorithms (e.g., Grad-CAM, saliency maps, attention maps, and anomaly-detection-based heatmaps) tailored for image-manipulation detection.
  • Image Preprocessing: Utilities for loading, resizing, and normalizing images to prepare them for analysis.
  • Visualization Overlay: Seamless overlay of generated heatmaps onto original images to clearly highlight manipulated regions.
  • Customizable Parameters: Options to adjust heatmap intensity, color schemes, and thresholding for refined visualization.
  • Output Formats: Support for saving analyzed images with heatmap overlays in various common image formats.
  • Modular Design: A flexible architecture that allows for easy integration of new heatmap algorithms or AI detection models.
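
The overlay step listed above (blending a normalized heatmap onto the source image) can be sketched in NumPy. The function name `overlay_heatmap` and the single-channel red colormap are illustrative choices, not the project's actual API:

```python
import numpy as np

def overlay_heatmap(image, heatmap, alpha=0.6):
    """Blend a single-channel heatmap onto an RGB image.

    image:   (H, W, 3) uint8 array
    heatmap: (H, W) float array scaled to [0, 1]
    alpha:   heatmap opacity, matching the --alpha CLI option
    """
    # Illustrative colormap: route heatmap intensity to the red channel.
    # A Matplotlib colormap (e.g., 'jet') could be applied here instead.
    colored = np.zeros_like(image, dtype=np.float64)
    colored[..., 0] = heatmap * 255.0

    # Standard alpha blending, then round back to 8-bit pixels.
    blended = (1.0 - alpha) * image.astype(np.float64) + alpha * colored
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)
```

With `alpha=0.6`, strongly flagged pixels are pulled toward the heatmap color while untouched regions keep most of the original image's contribution.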

Tech Stack

This project is built using the following technologies:

  • Python: The primary programming language.
  • NumPy: Essential for numerical operations and array manipulation.
  • Matplotlib: Used for generating and displaying heatmaps and visual overlays.
  • Pillow (PIL Fork): For basic image loading, manipulation, and saving.
  • OpenCV (cv2): For advanced image processing tasks (e.g., resizing, color conversions).
  • PyTorch / TensorFlow (optional): Needed only when integrating a specific deep-learning detection model; the model's framework handles inference and gradient-based heatmap generation.

Installation Steps

To get this project up and running on your local machine, follow these steps:

  1. Clone the repository:

    git clone https://github.com/Disitha395ra/Heatmap-Visualization-Techniques-for-Identifying-AI-Edited-Image-Manipulations.git
    cd Heatmap-Visualization-Techniques-for-Identifying-AI-Edited-Image-Manipulations
  2. Create a virtual environment (recommended):

    python -m venv venv
    source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  3. Install the required dependencies:

    pip install -r requirements.txt

    (A sample requirements.txt might look like:)

    numpy
    matplotlib
    Pillow
    opencv-python
    # If using deep learning:
    # torch
    # torchvision
    # tensorflow


Usage Instructions

This section explains how to use the various heatmap visualization techniques.

  1. Prepare your input image(s): Place the images you wish to analyze in a designated directory (e.g., input_images/).

  2. Run the main script with desired parameters: You can specify the input image path and the heatmap generation method.

    Example: To generate a saliency map for an image:

    python main.py --image input_images/ai_edited_image.jpg --method saliency --output_dir output_heatmaps/

    To generate a Grad-CAM heatmap (requires a pre-trained model capable of detecting manipulations):

    python main.py --image input_images/ai_edited_image.jpg --method gradcam --model_path path/to/your/detection_model.pth --target_layer layer4 --output_dir output_heatmaps/

    Common Arguments:

    • --image: Path to the input image file. (Required)
    • --method: Heatmap generation method (e.g., saliency, gradcam, attention, anomaly). (Required)
    • --output_dir: Directory to save the output heatmaps and overlaid images. (Default: output/)
    • --model_path: (Optional, required for gradcam or similar methods) Path to a pre-trained AI manipulation detection model.
    • --target_layer: (Optional, for gradcam) The target convolutional layer for Grad-CAM.
    • --alpha: (Optional) Transparency level for the heatmap overlay (0.0-1.0). (Default: 0.6)

    Refer to the main.py script for a comprehensive list of arguments and their descriptions.

  3. View the results: The generated heatmap and the original image with the heatmap overlay will be saved in the specified output_dir.
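
As a minimal, model-free stand-in for the saliency method above, a heatmap can be derived from local gradient magnitude. The project's actual saliency maps would normally come from model gradients, so treat this purely as a sketch of the normalize-to-[0, 1] step that the overlay expects; `gradient_saliency` is a hypothetical name:

```python
import numpy as np

def gradient_saliency(gray):
    """Model-free saliency stand-in: normalized gradient magnitude.

    gray: (H, W) float array (a grayscale image).
    Returns an (H, W) heatmap scaled to [0, 1].
    """
    # Per-axis finite differences, combined into a gradient magnitude.
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)

    # Normalize to [0, 1] so downstream overlay code can treat all
    # methods uniformly; flat images yield an all-zero heatmap.
    rng = mag.max() - mag.min()
    return (mag - mag.min()) / rng if rng > 0 else np.zeros_like(mag)
```

On a sharp step edge, the map peaks along the edge columns and is zero in the flat regions, which is the behavior the overlay stage relies on.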

Example Output

(Include a screenshot here showing an example image with a heatmap overlay highlighting the manipulated regions.)


Contributing

We welcome contributions to this project! If you'd like to improve the codebase, add new features, or fix bugs, please follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix: git checkout -b feature/your-feature-name.
  3. Make your changes and ensure the code adheres to the existing style.
  4. Write clear, concise commit messages.
  5. Push your branch to your forked repository.
  6. Open a Pull Request against the main branch of this repository, describing your changes in detail.

Please ensure your pull requests are well-documented and pass any existing tests.


License

This project is licensed under the MIT License - see the LICENSE file for details.


Note: Remember to replace YOUR_USERNAME/YOUR_REPOSITORY in the badge links with your actual GitHub username and repository name once your project is hosted. You should also create a LICENSE file in the root of your project with the MIT license text.

About

The project bridges the gap between deep learning detection and human understanding. By generating heatmaps (using techniques like Grad-CAM), the system highlights "artifacts"—tiny digital inconsistencies left behind by AI—transforming complex data into a visual map that anyone can interpret.
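
The Grad-CAM step mentioned above reduces, at its core, to weighting a convolutional layer's activation maps by the channel-averaged gradients of the detection score. A framework-agnostic sketch of just that weighting (in practice, the activations and gradients would come from a PyTorch or TensorFlow forward/backward pass, and the function name here is illustrative):

```python
import numpy as np

def grad_cam_from_activations(activations, gradients):
    """Core Grad-CAM weighting step, framework-agnostic.

    activations: (C, H, W) feature maps from the target conv layer
    gradients:   (C, H, W) gradients of the detection score w.r.t. them
    Returns an (H, W) heatmap scaled to [0, 1].
    """
    # Global-average-pool gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))            # shape (C,)

    # Weighted sum of activation maps, then ReLU to keep only regions
    # that push the score toward the "manipulated" decision.
    cam = np.einsum('c,chw->hw', weights, activations)
    cam = np.maximum(cam, 0.0)

    # Normalize to [0, 1] for visualization.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The resulting map is low-resolution (the size of the conv layer's feature maps) and would be upsampled to the input image size before overlaying.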
