
Tune predator prey model parameters #27

Open

tkoskela wants to merge 6 commits into carpentries-incubator:main from tkoskela:tk/predprey-params

Conversation

@tkoskela commented Jan 28, 2026

I sat down with @JostMigenda in a workshop and had a go at tuning the parameters of the predator prey model to get the predator and prey populations to oscillate. It is not quite there yet (the prey still all die), but it works a bit better.

[screenshot: predator and prey population counts over the course of the simulation]
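The lesson's model is agent-based rather than equation-based, but the classic Lotka-Volterra equations give a minimal sketch of the behaviour the tuning is aiming for; the rates and initial values below are illustrative, not the ones used in this PR:

```python
# Minimal Lotka-Volterra sketch of the target behaviour: predator and prey
# populations that rise and fall periodically instead of the prey dying out.
# All rates and initial values here are illustrative examples only.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 1.0, 0.1     # prey growth rate, predation rate
delta, gamma = 0.075, 1.5  # predator reproduction rate, predator death rate

def lotka_volterra(t, y):
    prey, predators = y
    return [alpha * prey - beta * prey * predators,
            delta * prey * predators - gamma * predators]

solution = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0], dense_output=True)
prey, predators = solution.sol(np.linspace(0, 50, 500))
# Both curves should oscillate; if predation is too strong relative to prey
# growth, the prey collapse, as in the screenshot above.
```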

JostMigenda and others added 2 commits November 3, 2025 07:49
Ensure prey (1) have children, which (2) aren’t predators
@github-actions

🆗 Pre-flight checks passed 😃

This pull request has been checked and contains no modified workflow files or spoofing.

It should be safe to Approve and Run the workflows that need maintainer approval.

@JostMigenda (Collaborator)

Thanks @tkoskela for looking into this. Yep—it’s very tricky to tune the parameters to give nice results, but this is certainly a big improvement over the previous version.

A couple of notes:

  • This includes the bugfix commit from "fix logic issues in predprey example" (#17). Makes sense to keep bugfixes and tuning in one PR; we just shouldn't get confused by this later.
  • Before we merge this, I should check the episode text & exercises for consistency; some of the profiling outcomes we mention there will have changed as a result of these changes.
  • Finally, very minor notes on the code itself:
    • The comments look like LLM slop: vaguely plausible, but sometimes contradicting the actual changes in the PR; and I don’t think they add much over the parameter names themselves
    • Adding extra spaces around = for alignment: I know there are different opinions around this, but the Python convention is not to add them (see the short sketch after this list)
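For illustration, using made-up parameter names rather than the ones actually in the PR:

```python
# Aligned assignments (extra spaces around "="), which PEP 8 discourages:
prey_reproduction_chance   = 0.05
predator_death_chance      = 0.02

# Conventional style: a single space on each side of "=":
prey_reproduction_chance = 0.05
predator_death_chance = 0.02
```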

@Robadob (Collaborator) commented Jan 28, 2026

More than happy to defer to @JostMigenda's judgement on approving this; it looks reasonable at a glance. Please remember to briefly review the instructor notes/solution in case anything subtle has changed there.

I think the original code was heavily inspired by our FLAMEGPU Python tutorial, which is probably inspired by a common simple agent-based model. That subtle bug looks like something FLAMEGPU would handle natively, though, so it's almost certainly my mistake.

https://github.com/FLAMEGPU/FLAMEGPU2-tutorial-python

Thanks for supporting this project!

@Robadob (Collaborator) commented Jan 28, 2026

Not sure why that workflow is failing; it's something managed by the Carpentries developers. There is a warning about their new Docker workflow, so I'll try to remember to flag it to them on Slack when I'm on my office machine tomorrow.

Edit: Other people have reported the same issue since yesterday afternoon, so it will probably be resolved before the end of the week.

Feel free to ignore it; the "build markdown" one passes. 🤷‍♂️

@tkoskela (Author)

Finally, very minor notes on the code itself:

  • The comments look like LLM slop: vaguely plausible, but sometimes contradicting the actual changes in the PR; and I don’t think they add much over the parameter names themselves
  • Adding extra spaces around = for alignment: I know there are different opinions around this, but the Python convention is not to add them

Yeah, I was using Copilot for tuning the parameters. I get your point about the comments; they're not very useful, so I've removed them and fixed the formatting.
