KISS research project template built for Cursor IDE, Python, and LaTeX.
- `docs/README.md` — Main tracked project memory. Keep it short, human-readable, and current.
- `docs/questions/README.md` — Research questions, hypotheses, and evaluation criteria.
- `docs/literature/README.md` — Literature search focus, seed papers, and notes index.
- `docs/experiments/README.md` — Experiment plans, run index, and decisions.
- `results/` — Untracked outputs (git-ignored). Use for experiment runs, logs, checkpoints, and generated figures.
After forking, run the project-setup command so Cursor can fill the scaffolding. See Commands below.
This repo assumes you are using Cursor IDE, hence the .cursor folder.
- Keep `.cursor/rules/` short and general.
- Treat `.cursor/commands/` as the main entry points.
- Use `.cursor/agents/` for specialist roles.
- Keep project memory in `docs/`, not buried in prompts.
You'll find MCP servers configured in `.cursor/mcp.json` and repo-level hooks in `.cursor/hooks.json`. Set `EXA_API_KEY` in your shell environment to enable Exa web search without hardcoding the secret:
```sh
export EXA_API_KEY="your-key-here"
```

After setting up LaTeX (e.g. with TeX Live), install the LaTeX Workshop extension.
This project uses the following LaTeX Workshop `outDir`: `%WORKSPACE_FOLDER%/latex/build/`.
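In VS Code/Cursor this corresponds to LaTeX Workshop's `latex-workshop.latex.outDir` setting; a minimal `settings.json` sketch (the setting name is LaTeX Workshop's, the value is this project's path):

```jsonc
{
  // Write all LaTeX build artifacts under latex/build/ in the workspace
  "latex-workshop.latex.outDir": "%WORKSPACE_FOLDER%/latex/build/"
}
```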
This project uses uv to manage Python dependencies and run scripts.
This project uses Just to manage scripts. With uv (see Python Config), install the CLI as a tool:
```sh
uv tool install rust-just
```

For platform-specific or binary installs, see Just's installation docs.
This project targets Slurm clusters (e.g. Jean Zay) using Hydra with `hydra-submitit-launcher`. Launcher profiles live in `configs/hydra/launcher/`.
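A launcher profile is a small YAML config-group file extending the plugin's `submitit_slurm` config; a hypothetical `configs/hydra/launcher/jz-dev.yaml` sketch (the resource values and account are illustrative placeholders, not the template's actual profile):

```yaml
defaults:
  - submitit_slurm

# Example Slurm resources; adapt to your site and allocation
timeout_min: 60
gpus_per_node: 1
account: nwq@v100
```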
From the cluster login or head node, run the experiment module with a launcher override, for example:
```sh
uv run -m scripts.run_experiment -m demo=first \
  hydra/sweeper=groups_optuna \
  hydra/launcher=jz-dev
```

Equivalently, with Just:

```sh
just launch jz-dev groups_optuna demo=first
```

`just launch <launcher> <sweeper>` runs `uv run -m scripts.run_experiment -m …` with `hydra/launcher=<launcher>` and `hydra/sweeper=<sweeper>`; append further Hydra overrides as additional arguments.
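The `launch` recipe described above can be sketched in Justfile syntax roughly as follows (a hypothetical reconstruction, not the template's actual recipe):

```just
# Forward launcher, sweeper, and any extra Hydra overrides to the entrypoint
launch launcher sweeper *overrides:
    uv run -m scripts.run_experiment -m \
        hydra/launcher={{launcher}} hydra/sweeper={{sweeper}} {{overrides}}
```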
The Hydra + submitit driver stays attached until every submitted Slurm job in that multirun finishes (it orchestrates the sweep). If you close the SSH session without planning for that, the driver can be killed and the run may fail or leave jobs in an awkward state.
tmux (or screen) is the usual pattern: run the launch command inside a persistent session, detach, and reconnect later from the same or another login node.
```sh
tmux new -s sweep
# inside the session:
just launch jz-dev groups_optuna demo=first
# detach: Ctrl+b, then d
# later (any login node):
tmux attach -t sweep
```

Use a distinct session name per project or sweep (`tmux new -s myproj-hparam`). List sessions with `tmux ls`. This is complementary to `sbatch` (below), which returns immediately and does not need tmux for disconnection.
Remote clone paths and SSH host aliases are defined once at the top of the Justfile (`cluster_host_*`, `cluster_repo_*`). Defaults use the template directory name `research-project-template` and an example Jean Zay path under `/lustre/fswork/projects/rech/<project>/<login>/` (edit `nwq` / `uim47nr` to match your allocation if needed). After project setup (fork rename or a different clone directory), update `cluster_repo_*` so they match the real paths on each machine.
- `just sync-to cv` or `just sync-to jz` — update local branch `tr` from `main`, push `tr` to the remote, then on the cluster checkout `main` and fast-forward merge `tr`.
- `just sync-from cv` or `just sync-from jz` — fetch on the cluster, fetch `tr` locally, merge `tr` into your current branch.
Requires git remotes named `cv` / `jz` (or matching your cluster argument) and a branch `tr` used for transfer, as in Setup Git below.
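In plain git, `just sync-to jz` amounts to roughly the following (a sketch under the assumptions above: a remote named `jz` and a transfer branch `tr`; not the recipe's literal body):

```sh
git branch -f tr main   # point the transfer branch at main (fails if tr is checked out)
git push jz tr          # push tr to the cluster remote
# then, on the cluster:
#   git checkout main && git merge --ff-only tr
```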
```sh
salloc --gpus 1 -A nwq@v100
srun uv run -m scripts.run_experiment demo=first
```

You can submit a one-off Slurm job with a minimal script that only runs your entrypoint, for example:

```sh
sbatch --wrap='srun uv run -m scripts.run_experiment demo=first'
```

Adapt `#SBATCH` resource flags to your site (or use `salloc` / Hydra + submitit above). Unlike submitit, `sbatch` returns right after scheduling; for long Hydra + submitit runs, use tmux as described under Submit jobs (Hydra + submitit) above.
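A fuller job script with explicit `#SBATCH` directives might look like this (account, time, GPU, and output-path values are illustrative examples consistent with the `salloc` line above, not prescriptions):

```bash
#!/bin/bash
#SBATCH --job-name=demo-first
#SBATCH --account=nwq@v100        # your allocation (see idracct)
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH --output=results/slurm-%j.out

srun uv run -m scripts.run_experiment demo=first
```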
Offline runs can be synced with:
```sh
just sync-experiments
```

Set `WANDB_API_KEY` in the environment (or paste it when prompted).
See notebooks/ to run notebooks on a cluster JupyterHub.
Initialize or clone your project, add a remote for your laptop, and use a dedicated transfer branch:
```sh
mkdir research-project-template
cd research-project-template
git init
git remote add jz <remote_url>
git branch tr
git push --set-upstream jz tr
```

Follow the uv installation guide, then:
```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync
```

**Head node vs compute.** Jean Zay does not support the usual remote VS Code server workflow; options include rsync or sshfs (VS Code troubleshooting). Compute nodes are often offline; install dependencies and assets on the head node. For uv on compute nodes you can use `--no-sync` where appropriate.
**Quotas.** Per-user and per-project disk/inode quotas: `idr_quota_user`, `idr_quota_project` (IDRIS).
**Allocation.** `idracct`, projects via `idrproj`, fair share via `idr_compuse` (account doc).
Inodes by directory (reference):

```sh
find . ! -name . -prune -type d -exec sh -c '
  for dir do
    dir="${dir#./}"
    printf "%s:\t" "$dir"
    find ".//$dir" -type f | grep -c //
  done' sh {} +
```

**Monitor jobs.** `squeue -u <user>`, then SSH to the node and use `top`, `htop`, `nvidia-smi`, etc. (IDRIS).
**Links.** Jean Zay Slurm partitions, BigScience Slurm how-to.
These are the main Cursor entry points for the template: `project-setup`, `literature-review`, `design-experiments`, `write-paper`, `whats-next`, `review-paper`. Definitions live in `.cursor/commands/`.