
Add tests #21

@Nitish-bot


As L5 grows, it’s going to become harder to ensure that new changes aren’t breaking existing functionality. Since we currently have no automated tests, we're essentially "flying blind" on regressions. I am thinking of two testing strategies that make sense for L5.

  1. Unit Testing: We need a way to verify that individual units, such as the math helpers and state management, behave as expected.

  2. Visual Regression / Example Runner: Since this is a Processing-like repo, the most important output is the canvas. A unit test can’t easily tell if a circle is being rendered with the wrong offset.
    In essence, we need the test runner to execute a subset of the scripts in /examples (only deterministic examples that produce the same output on every run), capture the frame buffer, and compare it against a "Golden Master" (a known-good reference image).
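The comparison step of strategy 2 could be sketched as a plain pixel diff. This is only a sketch under assumptions: it treats a frame buffer as a flat array of 0–255 channel values, and how L5 actually exposes the rendered frame is left open.

```lua
-- Compare two frame buffers, modelled here as flat arrays of 0-255
-- channel values. A small per-channel tolerance absorbs harmless
-- rounding differences between platforms/drivers.
local function buffers_match(expected, actual, tolerance)
  tolerance = tolerance or 0
  -- A size mismatch means the canvas dimensions changed: fail fast.
  if #expected ~= #actual then
    return false, "size mismatch"
  end
  for i = 1, #expected do
    if math.abs(expected[i] - actual[i]) > tolerance then
      return false, ("pixel channel %d differs"):format(i)
    end
  end
  return true
end
```

A runner would then loop over the deterministic examples, render one frame each, and call `buffers_match` against the stored golden image, updating the reference only via an explicit "bless" step.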

As for the testing framework, I have not looked into the options deeply, so take this with a grain of salt, but I think busted is a strong choice. It bills itself as a unit-testing framework, but it can also serve as an orchestrator, so one framework can cover both testing strategies.
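For a feel of what strategy 1 would look like under busted, here is a minimal spec using its real `describe`/`it` blocks and luassert-style assertions. The `l5.math` module and its `clamp` function are hypothetical placeholders for whatever units L5 actually exposes.

```lua
-- spec/math_spec.lua -- a minimal busted spec.
-- "l5.math" and clamp() are hypothetical; substitute real L5 modules.
describe("math helpers", function()
  local lmath = require("l5.math")

  it("clamps values into the given range", function()
    assert.are.equal(5, lmath.clamp(10, 0, 5))
    assert.are.equal(0, lmath.clamp(-3, 0, 5))
    assert.are.equal(3, lmath.clamp(3, 0, 5))
  end)
end)
```

Running `busted` from the repo root would pick up everything under `spec/`, and the same runner could drive the golden-master examples as ordinary specs.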
