
Conversation

@bartventer
Owner

  • Introduce adaptive filename logic to split long base64-encoded URLs into directory fragments.
  • Retain reversibility and compatibility with previous base64 filenames.

Closes #16

Owner Author

@bartventer bartventer left a comment

Reviewed and approved.

@bartventer bartventer marked this pull request as ready for review October 29, 2025 14:45
@bartventer bartventer merged commit f14f6a8 into master Oct 29, 2025
3 checks passed
@bartventer bartventer deleted the patch/long-file-names branch October 29, 2025 15:19
@eric

eric commented Oct 29, 2025

It seems like there is still a limit on how long a URL can be. Is that correct? If so, can it be made tunable?

bartventer added a commit that referenced this pull request Oct 30, 2025
Add comprehensive testing for long URL handling to validate fragmentation behavior discussed in PR #17.

- Test URL lengths up to 100KB
- Benchmark performance scaling
- Verify directory depth handling

Related to #16, addresses concerns from PR #17
@bartventer
Owner Author

@eric I ran some tests to check the current limits (see #18):

  • Successfully tested URLs up to 100KB (139K char paths, 2844 directory levels) on Linux
  • Performance scales reasonably: 50KB URLs take ~205ms per cache operation
  • The 48-char fragment size works effectively on modern Unix filesystems

Most real-world URLs are under 2KB, so this should cover typical use cases on Linux/macOS.

However, Windows has stricter path limits (260 chars by default). Are you running on Windows? If so, could you share a repro? The fragment size might need platform-specific tuning for Windows compatibility.

@eric

eric commented Oct 31, 2025

Sorry, I think I misread part of the implementation and thought there was some sort of a hard limit with the new implementation. Thank you for putting in that extra effort to confirm it performs well and doesn't have a practical limit!

@bartventer
Owner Author

> Sorry, I think I misread part of the implementation and thought there was some sort of a hard limit with the new implementation. Thank you for putting in that extra effort to confirm it performs well and doesn't have a practical limit!

No problem! Thanks for the question; it led to good testing and benchmarking that confirmed the implementation works well. Happy to clarify anytime.

Successfully merging this pull request may close these issues.

store(fscache): How do you handle URLs longer than filesystem filenames?
