
RawGL

Command line image processing tool using OpenGL GLSL shaders.

Features

Image import/export uses native backends for common production formats, with OpenImageIO fallback for formats that are not native yet. Native paths currently cover:

  • JPEG through libjpeg-turbo/libjpeg
  • PNG through libpng
  • TIFF through libtiff, including tiled TIFF and BigTIFF
  • OpenEXR through OpenEXR, including tiled EXR

OpenImageIO remains available as the fallback path for formats that have not moved to native backends yet, such as camera RAW through OIIO's LibRaw plugin, JPEG-2000 through OIIO's OpenJPEG plugin, TGA, HDR, WebP, and other OIIO plugins available in the dependency build.

Image import and export accept codec-specific reading and writing options; see the --in_attr and --out_attr options below.

RawGL currently supports vertex, fragment, and compute shaders in text (GLSL) and binary SPIR-V form:

  • two files: shader.vert shader.frag
  • single file: shader.vertfrag or shader.glsl with RAWGL_VERTEX_SHADER and RAWGL_FRAGMENT_SHADER stage guards
  • binary: shader.vert_spv shader.frag_spv
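
For reference, a minimal single-file shader.vertfrag could look like the sketch below. The RAWGL_VERTEX_SHADER / RAWGL_FRAGMENT_SHADER guards are the documented stage markers; the vertex attribute locations here are illustrative assumptions, not a statement of RawGL's fixed vertex layout.

```glsl
#version 450 core

#ifdef RAWGL_VERTEX_SHADER
// Attribute locations below are assumptions for illustration.
layout(location = 0) in vec3 Position;
layout(location = 1) in vec2 TexCoord;
layout(location = 0) out vec2 UV;
void main()
{
    UV = TexCoord;
    gl_Position = vec4(Position, 1.0);
}
#endif

#ifdef RAWGL_FRAGMENT_SHADER
layout(location = 0) in vec2 UV;
layout(location = 0) out vec3 OutColor;
void main()
{
    OutColor = vec3(UV, 0.0);
}
#endif
```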

For prototyping or in-house use, text-based shaders are easier to manage and adapt, with no loss of speed. For distribution, some may prefer SPIR-V binary shaders. They provide little real security and can be decompiled, but decompiling them may violate your license terms, so they can be a useful choice if you do not plan to distribute your shaders as open source.

Designed with image processing in mind, RawGL currently supports only a single (hardcoded) quad and an isometric (orthographic) camera.

RawGL supports multi-pass shader workflows:
a pass can read input images, export its results to disk, and also hand those results on to subsequent shader passes.

For example, you can chain shaders to open camera RAW images, perform a custom demosaic in the first pass, apply color correction and other processing such as denoise or sharpen in the next pass, and export the results to disk, while at the same time, without leaving GPU VRAM, passing those results to further passes that compute normals or separate specular components from photometric stereo capture inputs. Or import spherical panoramas, decompose them into linear projections, and pass those projections to an image processing pass for preprocessing and export to disk.

RawGL itself is not a batch tool and runs as a single thread, but with Windows CMD batch files, PowerShell, or Python scripts you can run as many RawGL instances in parallel as your system and GPU memory allow. On multi-GPU systems, you can clone the RawGL binary, assign GPU affinity to each copy, and use batch scripts to run in parallel across GPUs.
Native GPU affinity as a RawGL option is on the TODO list, but only if I find a usable way to do it; in that case it will be possible to select the GPU from the CLI. Until then, use the Nvidia Control Panel and a dedicated RawGL binary per GPU.
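
A minimal parallel driver could be sketched in Python as below. The binary path, shader names, and file names are placeholders, not part of the repository; each worker thread only waits on its child process, so the practical limit is system and GPU memory.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

RAWGL = "RawGL.exe"  # placeholder path to the RawGL binary

def build_cmd(in_path, out_path):
    """Build one RawGL invocation; shader and uniform names are illustrative."""
    return [
        RAWGL, "-V", "1",
        "-P", "empty.vert", "process.frag",
        "--in", "Texture0", in_path,
        "--out", "OutColor", out_path,
    ]

def run_all(jobs, workers=4):
    """Run several single-threaded RawGL processes in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        codes = pool.map(lambda j: subprocess.run(build_cmd(*j)).returncode, jobs)
    return list(codes)

# Example job list: (input image, output image) pairs.
jobs = [(f"in_{i:04d}.png", f"out_{i:04d}.png") for i in range(8)]
```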

Dependencies

Required dependencies -- RawGL will not build without these

Optional dependencies -- features may be disabled if not found

  • If you want support for camera "RAW" formats:
    • openimageio libraw plugin
  • If you want support for jpeg 2000 images:
    • openimageio openjpeg plugin

Building

CMakeLists.txt, cmake/, and CMakePresets.json are the canonical build entry points.

Typical Linux workflow:

cmake --preset linux-release
cmake --build --preset linux-release
ctest --preset linux-release

If your Linux dependency prefix is built with libc++, use the explicit libc++ presets instead of the generic Linux presets.

The current validated UBc prefix on this system is:

/mnt/e/UBc/Release

Use it with:

export RAWGL_LINUX_PREFIX=/mnt/e/UBc/Release
cmake --preset linux-release-python-core-libcxx
cmake --build --preset linux-release-python-core-libcxx
ctest --test-dir build_linux_release_python_core_libcxx --output-on-failure

Linux Python wheel workflow:

export RAWGL_LINUX_PREFIX=/path/to/deps-prefix
cmake --preset linux-release-python-wheel
cmake --build --preset linux-release-python-wheel --target rawgl_wheel -j 1
cmake --install build_linux_release_python_wheel --prefix /tmp/rawgl-install

That install step now mirrors OpenMeta's packaging flow: it builds the wheel during cmake --install if needed and places the resulting artifact under:

/tmp/rawgl-install/share/rawgl/wheels/

If your Linux dependency prefix is built with libc++, use the explicit linux-release-python-wheel-libcxx preset instead of the generic wheel preset.

Python API

The default Python API is workflow-oriented, not OpenGL-oriented. For the common fullscreen image-processing path, use rawgl.image(...) or session.prepare_image(...). When a Python workflow uses file-backed inputs or outputs, the high-level helpers now route that path through the public IO layer (rawgl::io::IoRuntime on the C++ side) before execution reaches the core session.

Minimal one-shot example:

import rawgl

result = rawgl.image(
    """#version 450 core
layout(location = 0) in vec2 UV;
layout(location = 0) out vec3 OutColor;
void main()
{
    OutColor = vec3(UV, 0.0);
}
""",
    size=512,
    output="output.png",
)

if not result.success:
    raise RuntimeError(result.error_message)

Repeated-run example:

import rawgl

session = rawgl.Session()
prepared = session.prepare_image(
    fragment_shader,
    size=512,
    output={"format": "rgba32f", "capture_to_host": True},
)

result = prepared.run(inputs={"gain": 1.0})
image = result.output_array()

NumPy is optional. When installed, host image inputs can be passed as NumPy arrays and captured outputs can be read back as arrays through:

result.output_array()
result.output_arrays
rawgl.make_host_image(array)
rawgl.host_image_to_array(host_image)

Use rawgl.inspect_mesh_file(path) when a script needs mesh facts before it builds a workflow. The current detailed path is OBJ-focused and reports source counts, bounds, UV range, group spans, and usemtl material IDs without loading MTL files.

For more explicit control, the lower-level nanobind façade remains available under:

rawgl.advanced

That path exposes the direct bound classes and structs such as Session, Workflow, Pass, InputBinding, and OutputBinding. Use it for advanced workflows. The default examples should prefer the higher-level helpers.

C++ IO-facing API

For file-backed C++ workflows, prefer the explicit IO/runtime split:

#include <rawgl/rawgl.h>
#include <rawgl/rawgl_io.h>

rawgl::Session session;
rawgl::io::IoRuntime io_runtime;

rawgl::Workflow workflow = /* build memory-first workflow */;
std::vector<rawgl::io::FileInputBinding> file_inputs;
std::vector<rawgl::io::FileOutputBinding> file_outputs;

rawgl::io::PrepareWorkflowResult prepared = io_runtime.prepare(session, workflow, file_inputs, file_outputs);
rawgl::RunResult result = prepared.workflow->run(rawgl::io::RunRequest{});

That keeps file decode/encode in rawgl_io and leaves rawgl_core focused on prepared workflow execution and host-memory transfer.

Typical Windows workflow from a Visual Studio 2022 x64 developer shell:

cmake --preset vs2022
cmake --build --preset vs2022-debug
cmake --build --preset vs2022-release
ctest --preset vs2022-debug

If you do not want presets, the equivalent manual configure step is:

cmake -S . -B build_vs2022 -G "Visual Studio 17 2022" -A x64

Legacy hand-written Visual Studio files remain only as reference material under ide/msvc-legacy/. CMake is the source of truth for targets, include paths, and libraries.

Known bugs and limitations

  • Long-running GPU work on Windows can still hit the default driver timeout. Use tools/registry/gpu_delays.reg and restart the system before relying on long compute or fragment workloads.
  • Compute shaders are still less exercised than the fragment path.
  • Uniform arrays are still not fully supported.

Documentation

RawGL.exe

Options Definition
-h [ --help ] Show help message
-v [ --version ] Show program version
-V [ --verbosity ] arg Log level (the selected level and above will be shown):
0 - fatal error only
1 - errors only
2 - warnings only
3 - info (default)
4 - debug
5 - trace
-P [ --pass_vertfrag ] arg New pass using vertex & fragment shaders (in GLSL or SPIR-V format):
--pass_vertfrag s.vert s.frag
--pass_vertfrag s.vertfrag
(single-file shader with RAWGL_VERTEX_SHADER / RAWGL_FRAGMENT_SHADER guarded sections)
--pass_vertfrag s.vert_spv s.frag_spv
(SPIR-V binary shaders)
--bg_color arg Optional. Define background color (OpenGL clear color) in RGBA
--bg_color 0.5 0.5 0.5 1.0
default background color: RGBA(0.0, 0.0, 0.0, 0.0)
--bg_color 0.5 - will be parsed as RGBA(0.5, 0.0, 0.0, 1.0)
--bg_color 0.5 0.5 - as RGBA(0.5, 0.5, 0.0, 1.0)
at least one value is required; more than 4 is an error.
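The fill-in behavior above can be modeled by this illustrative helper (not part of RawGL): missing green/blue components default to 0.0 and a missing alpha defaults to 1.0.

```python
def parse_bg_color(args):
    """Model the documented --bg_color parsing: missing G/B default to 0.0,
    a missing alpha defaults to 1.0; 0 or more than 4 values are an error."""
    if not 1 <= len(args) <= 4:
        raise ValueError("--bg_color takes 1 to 4 values")
    r, g, b = (list(args) + [0.0, 0.0])[:3]
    a = args[3] if len(args) == 4 else 1.0
    return (r, g, b, a)

parse_bg_color([0.5])       # -> (0.5, 0.0, 0.0, 1.0)
parse_bg_color([0.5, 0.5])  # -> (0.5, 0.5, 0.0, 1.0)
```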
-M [ --pass_mesh ] arg Use default quad or external Mesh from file
--pass_mesh quad
--pass_mesh mesh tris true rend tr path\to\file.ply
--pass_mesh mesh path\to\file.obj tris false rend tr
mesh references like mesh::N are not supported
tris:
true (default) - file already contains triangles
false - triangulate polygon faces during mesh load
Supported file formats: PLY and OBJ
OBJ usemtl material-name IDs are available in shaders as layout(location = 4) in uint material_id;
MTL files are ignored on this path. Bind textures explicitly from the workflow/script.
rend:
tr (default) - GL_TRIANGLES: render as polygons
ln - GL_LINES: render as lines
pt - GL_POINTS - render as a point cloud
-C [ --pass_comp ] arg New pass using a compute shader:
--pass_comp s.comp
-S [ --pass_size ] arg Output size of this pass (default 512x512px):
--pass_size X [Y]
X and Y can also reference the size of an input texture from any pass:
--pass_size Texture0::0 [Texture1::1]
-W [ --pass_workgroupsize ] arg Number of threads per work group in compute shader on each axis:
--pass_workgroupsize X [Y]
Must be equal to the 'local_size' layout constant inside compute shader.
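As a rule of thumb when choosing these values: in OpenGL, the number of dispatched work groups per axis is typically the output size divided by the work-group size, rounded up. This is a general OpenGL sketch, not a statement about RawGL internals.

```python
def work_groups(output_size, local_size):
    """Ceiling division: work groups needed to cover output_size pixels
    with local_size threads per group on one axis."""
    return -(-output_size // local_size)

work_groups(512, 16)  # -> 32
work_groups(500, 16)  # -> 32 (the last group partially covers the edge)
```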
-i [ --in ] arg Uniform pass index, name & value (numeric or texture path)
(e.g.: --in Texture0 BasicTex.png).
as output from #-pass: --in outTexture::0 <- Changed in this version!
Texture filtering and sampling:
min - Texture minification function:
l - GL_LINEAR (default)
n - GL_NEAREST
ll - GL_LINEAR_MIPMAP_LINEAR
ln - GL_LINEAR_MIPMAP_NEAREST
nl - GL_NEAREST_MIPMAP_LINEAR
nn - GL_NEAREST_MIPMAP_NEAREST
mag - Texture magnification function:
l - GL_LINEAR (default)
n - GL_NEAREST
wrps - Texture wrap s-axis:
ce - GL_CLAMP_TO_EDGE (default)
r - GL_REPEAT
cb - GL_CLAMP_TO_BORDER
mr - GL_MIRRORED_REPEAT
mce - GL_MIRROR_CLAMP_TO_EDGE
wrpt - Texture wrap t-axis:
ce - GL_CLAMP_TO_EDGE (default)
r - GL_REPEAT
cb - GL_CLAMP_TO_BORDER
mr - GL_MIRRORED_REPEAT
mce - GL_MIRROR_CLAMP_TO_EDGE
Uniform can be float, integer, boolean, vec/ivec/bvec 2/3/4,
matrices 2x2, 2x3, ... and 4x4
--in UniformBoolName True
--in UniformFloatName 0.1524654
--in UniformVec3 0.15 0.25 0.165
--in UniformMatx 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
-t [ --in_attr ] arg Codec/plugin input attribute value
(e.g.: --in_attr oiio:colorspace sRGB).
Some commonly used defaults:
oiio:ColorSpace Linear
raw:colorSpace raw
raw:demosaic AAHD
raw:user_flip -1
raw:use_camera_wb 0
-o [ --out ] arg Shader output channel pass, name & path to save the processed image
(e.g.: --out OutColor output.jpg)
Must have a file name and extension; the extension determines the output format and which save options are available.
The file will be recognized as:
BMP: *.bmp
PNG: *.png
JPEG: *.jpg, *.jpe, *.jpeg, *.jif, *.jfif, *.jfi
Targa: *.tga, *.tpic
OpenEXR: *.exr
HDR/RGBE: *.hdr
TIFF: *.tif, *.tiff, *.tx, *.env, *.sm, *.vsm
-f [ --out_format ] arg Output framebuffer format (default rgba32f):
r8, rg8, rgb8, rgba8,
r16, rg16, rgb16, rgba16,
r16f, rg16f, rgb16f, rgba16f,
r32f, rg32f, rgb32f, rgba32f
File writers convert this host format to the requested --out_bits when the target codec supports it.
-r [ --out_attr ] arg Codec/plugin output attribute value
(e.g.: --out_attr tiff:compression zip).
Native output families use native attributes directly. Invalid native JPEG, PNG, TIFF, or OpenEXR writer options fail instead of silently falling back to OpenImageIO.
-n [ --out_channels ] arg # of channels in output image
-a [ --out_alpha_channel ] arg Alpha channel index hint for output image (-1 = off, 0-3 = RGBA).
-b [ --out_bits ] arg # of bits per output image channel
(depends on file format):
BMP: 8
PNG: 8, 16
JPEG: 8
Targa: 8, 16
OpenEXR: 16, 32 (half & float)
HDR/RGBE: 32
TIFF: 8, 16, 32 float
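
The bit-depth table above can be expressed as a small validation helper (illustrative only; RawGL performs its own checking):

```python
# Allowed --out_bits per output format family, per the table above.
SUPPORTED_BITS = {
    "bmp": {8},
    "png": {8, 16},
    "jpeg": {8},
    "tga": {8, 16},
    "exr": {16, 32},      # half & float
    "hdr": {32},
    "tiff": {8, 16, 32},  # 32 is float
}

def check_out_bits(fmt, bits):
    """Raise if the requested bit depth is not supported by the format."""
    allowed = SUPPORTED_BITS[fmt.lower()]
    if bits not in allowed:
        raise ValueError(f"{fmt}: {bits} bits not supported (allowed: {sorted(allowed)})")
    return bits
```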

Examples

RawGL.exe -V 3 ^
-P empty.vert EmptyLUT.frag ^
--pass_size 512 ^
--in img_size 512 ^
--in lut_size 8 ^
--out EmptyLUT EmptyLUT.png ^
--out_format rgb16 ^
--out_channels 3 ^
--out_bits 16 ^
--out_attr oiio:ColorSpace linear ^
--out_attr oiio:RawColor 1 ^
--out_attr oiio:nchannels 3 ^
--out_attr oiio:UnassociatedAlpha 2 ^
--out_attr png:compressionLevel 0

This command runs the EmptyLUT shader with a 512x512 px image buffer and generates EmptyLUT.png, an 8x8x8 GPU LUT texture. If you want a 1024x1024 px image with a 16x16x16 3D LUT, change the pass size to 1024, set the img_size uniform to 1024, and the lut_size uniform to 16. Technically you can use any combination of buffer and img/LUT sizes; for example, a 1024x1024 buffer with img_size 512 and an 8x8x8 LUT yields a 2x pixel-upscaled image.

RawGL.exe ^
-V 5 ^
-P shaders\empty.vert shaders\pass1.frag ^
--pass_size 1024 ^
--in InSample inputs\EmptyPresetLUT.png ^
--out OutSample outputs\pass1.tif ^
--out_format rgb32f --out_channels 3 --out_bits 32 ^
--out_attr oiio:ColorSpace linear ^
--out_attr oiio:RawColor 1 ^
--out_attr oiio:nchannels 3 ^
--out_attr oiio:UnassociatedAlpha 2 ^
--out_attr tiff:compression ZIP ^
-P shaders\empty.vert shaders\pass2.frag ^
--in InSample2 OutSample::0 ^
--out OutSample2 outputs\pass2.tif ^
--out_format rgb32f --out_channels 3 --out_bits 32 ^
--out_attr oiio:ColorSpace linear ^
--out_attr oiio:RawColor 1 ^
--out_attr oiio:nchannels 3 ^
--out_attr oiio:UnassociatedAlpha 2 ^
--out_attr tiff:compression ZIP ^
-P shaders\empty.vert shaders\pass3.frag ^
--in _LOD 5 ^
--in InSample3 OutSample2::1 min ll ^
--out OutSample3 outputs\pass3.tif ^
--out_format rgb32f --out_channels 3 --out_bits 32 ^
--out_attr oiio:ColorSpace linear ^
--out_attr oiio:RawColor 1 ^
--out_attr oiio:nchannels 3 ^
--out_attr oiio:UnassociatedAlpha 2 ^
--out_attr tiff:compression ZIP

In this example RawGL loads EmptyPresetLUT.png as the InSample uniform and processes it in the first pass (pass1 shader); the result is saved as 32-bit float pass1.tif. Pass #2 uses the output of pass #1 as the InSample2 uniform via the OutSample::0 directive, and its result is saved as pass2.tif. Pass #3 uses the output of pass #2 as the InSample3 uniform via the OutSample2::1 directive, computes mip-maps (the min ll directive builds mip-maps with linear interpolation), and saves its output as pass3.tif.

RawGL.exe ^
-V 5 ^
-P color.vert color.frag ^
--pass_size 2048 2048 ^
-M mesh tris true rend pt Shell.ply ^
--bg_color 0.1 0.4 0.6 ^
--in inTexture "s:\3D\MATCAPS\coldsteel.png" ^
--out outColor render.jpg ^
--out_format rgba16 ^
--out_channels 4 ^
--out_bits 16 ^
--out_attr oiio:ColorSpace linear ^
--out_attr oiio:RawColor 1 ^
--out_attr oiio:nchannels 4 ^
--out_attr compression jpeg:100

More examples are available in the Examples folder, and some more can be found in the Tests folder.

License

Copyright © 2022-2026 Erium Vladlen.

RawGL first-party code is licensed under the Apache License Version 2.0. Bundled third-party components retain their own licenses.
