diff --git a/docs/ConfigExamples/Caching/pfsense_Squid_different_cache_location.md b/docs/ConfigExamples/Caching/pfsense_Squid_different_cache_location.md
new file mode 100644
index 00000000..fee9e0da
--- /dev/null
+++ b/docs/ConfigExamples/Caching/pfsense_Squid_different_cache_location.md
@@ -0,0 +1,80 @@
---
categories: [ConfigExample]
FaqSection: operation
---

## Description

This script creates a filesystem overlay using nullfs to mount an alternate NVMe-backed directory as the Squid cache location. On pfSense systems, the standard cache path (/var/squid/cache) is transparently redirected to a user-defined path on a different drive.

The overlay itself functions correctly: Squid writes to and uses the new cache location as intended. However, the Clear Cache button in the pfSense GUI does not currently work with this setup. While making that button work was the original goal of the project, this implementation is an important first step: it enables cache relocation without modifying Squid's configured path, even though cache clearing still requires manual intervention.

**Author:** J. Lee

## Usage

This script is intended for Netgate pfSense systems where Squid's cache is normally located at:

    /var/squid/cache

When an alternate drive or filesystem (such as NVMe storage) is used for caching, management becomes more complex. In particular, the pfSense Clear Cache function does not operate correctly, requiring manual deletion of cache files.

This script was created to simplify cache relocation by overlaying the default Squid cache path with a different storage location, while maintaining compatibility with pfSense's expected directory structure.

## Add the cron file

    @reboot /root/mount_squid_nullfs.sh

Adjust the path to wherever you store the script. The script must also be made executable, e.g. with `chmod +x /root/mount_squid_nullfs.sh`.

## Script .sh file used for cron job

    #!/bin/sh

    TAG="squid-nullfs"
    NVME_DEV="/dev/nda0p2"
    NVME_MNT="/nvme/LOGS_Optane"
    CACHE_SRC="${NVME_MNT}/Squid_Cache"
    CACHE_DST="/var/squid/cache"

    # --- Helper function to log to the system log via logger ---
    log_sys() {
        MESSAGE="$1"
        logger -t "$TAG" "$MESSAGE"
    }

    log_sys "Starting Squid nullfs mount sequence"

    # 1. Ensure the NVMe filesystem is mounted
    if ! mount | grep -q "on ${NVME_MNT} "; then
        log_sys "Mounting NVMe filesystem"
        mount_msdosfs "${NVME_DEV}" "${NVME_MNT}" || {
            log_sys "ERROR: NVMe mount failed"
            exit 1
        }
    else
        log_sys "NVMe already mounted"
    fi

    # 2. Stop squid if it is running
    if pgrep -x squid >/dev/null; then
        log_sys "Stopping squid"
        /usr/local/sbin/pfSsh.php playback svc stop squid
        sleep 2
    fi

    # 3. Ensure the source and destination directories exist
    mkdir -p "${CACHE_SRC}" "${CACHE_DST}"

    # 4. Mount nullfs if not already mounted
    if ! mount | grep -q "on ${CACHE_DST} "; then
        log_sys "Mounting nullfs cache"
        mount -t nullfs "${CACHE_SRC}" "${CACHE_DST}" || {
            log_sys "ERROR: nullfs mount failed"
            exit 1
        }
    else
        log_sys "nullfs already mounted"
    fi

    # 5. Start squid
    log_sys "Starting squid"
    /usr/local/sbin/pfSsh.php playback svc start squid

    log_sys "Squid nullfs mount completed"
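Once the script has run, the overlay can also be checked by hand. A minimal verification sketch (paths assumed from the script above; the exact mount output format can vary by FreeBSD release):

    # Confirm the nullfs overlay is in place
    mount -t nullfs
    # Expected output, approximately:
    # /nvme/LOGS_Optane/Squid_Cache on /var/squid/cache (nullfs, local)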
## Testing

A successful run at reboot should produce entries like these in the standard system logs (shown newest first):

    Squid nullfs mount completed
    Starting squid
    Mounting nullfs cache
    Mounting NVMe filesystem
    Starting Squid nullfs mount sequence

diff --git a/docs/ConfigExamples/Caching/pfsense_dual_process.md b/docs/ConfigExamples/Caching/pfsense_dual_process.md
new file mode 100644
index 00000000..be5b779a
--- /dev/null
+++ b/docs/ConfigExamples/Caching/pfsense_dual_process.md
@@ -0,0 +1,172 @@
---
categories: [ConfigExample]
FaqSection: operation
---

# Squid Multi-Worker Configuration on pfSense

## Disk Cache Workarounds and Per-Worker Cache Macros

**Author:** J. Lee

---

## Overview

Squid supports SMP (multi-worker) operation, allowing it to run multiple main processes ("workers") to better utilize multi-core systems.

On pfSense, there are currently two practical ways to use multiple workers:

1. **Disable disk caching entirely** (recommended and fully supported)
2. **Use per-worker cache directories via macros** (advanced workaround)

This document explains both approaches, their limitations, and when each should be used.

---

## Important Limitation: `rock` Cache Support

Squid requires `cache_dir rock` to safely support shared disk caching across multiple workers.

**pfSense does not currently support rock disk caching.**

Because of this:

- Traditional disk cache types (`aufs`, `ufs`, `diskd`) cannot be shared between workers
- Disk caching combined with SMP is not officially supported on pfSense

---

## Option 1 (Recommended): Multiple Workers with Disk Cache Disabled

This is the simplest, safest, and most common configuration.

### Description

Most Squid deployments rely primarily on memory caching and CPU throughput. By disabling disk caching, Squid can safely run multiple workers on pfSense without requiring `rock`.

This configuration significantly improves performance for:

- SSL_BUMP traffic
- High concurrent proxy connections
- Multi-core systems

### Configuration Steps

#### 1. Disable Disk Caching

In the pfSense Squid package:

- Set the disk cache to **null / disabled**
- This is required for SMP operation without `rock`

#### 2. Add Required System Tunables

The following system tunables must be added to prevent worker startup failures in SMP mode:

```
net.local.dgram.maxdgram=16384
net.local.dgram.recvspace=262144
```

After applying these tunables:

- Squid worker failure errors will stop
- SMP mode will initialize correctly
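On pfSense these are normally added under **System → Advanced → System Tunables**. To try them at runtime first, a sketch (same values as above; set this way they do not persist across a reboot):

```sh
sysctl net.local.dgram.maxdgram=16384
sysctl net.local.dgram.recvspace=262144
```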
#### 3. Configure Workers in Advanced Squid Options

Add the following to your Squid configuration:

```
workers 3
```

##### Worker Directive Explanation

> Number of main Squid processes or "workers" to fork and maintain.
>
> - `0`: "no daemon" mode, like running `squid -N ...`
> - `1`: "no SMP" mode, start one main Squid process daemon (default)
> - `N`: start N main Squid process daemons (i.e., SMP mode)
>
> In SMP mode, each worker does nearly all what a single Squid daemon does (e.g., listen on `http_port` and forward HTTP requests).

This is particularly effective when performing SSL_BUMP operations.

#### 4. Restart Requirement

After changing the `workers` value:

- A **full Squid restart is required**
- A reload or refresh is not sufficient

### Result

- Fully supported multi-worker Squid on pfSense
- No disk cache dependency
- Large performance gains for SSL_BUMP and proxy traffic

> **Note:** Memory use is heavy; plan for more than 4 GB of RAM.

---

## Option 2 (Advanced): Per-Worker Disk Cache Using Macros

### ⚠️ Advanced / Experimental

This method avoids shared disk access by assigning a separate cache directory to each worker using Squid macros.

### Description

This approach uses conditional logic based on the Squid process number to assign a unique disk cache path to each worker.

This avoids cache corruption by ensuring that no two workers ever access the same cache directory.

### Requirements

- Disk cache directories must exist before Squid starts
- Each cache directory must be initialized manually
- Memory usage increases per worker
- **Not officially supported by pfSense**

### Macro Location

Add this macro in pfSense under:

**Squid → Custom Options (Before Auth)**

### Macro Example

```squid
if ${process_number} = 2
cache_dir diskd /nvme/LOGS_Optane/Squid_Cache_B 32000 64 256
endif
```

### Macro Explanation

- The conditional is evaluated per worker
- `${process_number} = 2` ensures only worker 2 uses this cache
- This prevents multiple workers from sharing a disk cache path
- `diskd` performs well on fast storage such as NVMe, and is used here because `rock` is not yet supported on pfSense

### Memory Warning

Each additional cache directory increases:

- Memory usage
- File descriptor usage
- Cache metadata overhead

**Ensure adequate RAM is available.**

### Pre-Usage: Cache Initialization

Before enabling the macro, make sure the `cache_dir` lines are present in the configuration, stop Squid, and initialize the cache structures:

```bash
squid -z
```

`squid -z` creates any missing cache directory structures for the `cache_dir` entries in squid.conf, which prepares the directory layout so Squid can safely use the second cache location.
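Putting it together, a minimal sketch of a complete two-worker Option 2 configuration (the `Squid_Cache_A` path is illustrative, extending the example above):

```squid
workers 2

# Each worker gets its own cache_dir; no directory is shared.
if ${process_number} = 1
cache_dir diskd /nvme/LOGS_Optane/Squid_Cache_A 32000 64 256
endif
if ${process_number} = 2
cache_dir diskd /nvme/LOGS_Optane/Squid_Cache_B 32000 64 256
endif
```

Both directories still need to exist and be initialized with `squid -z` before the workers start.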