Localproxy

✨ Magically access any process/container on your computer through a .localhost URL

```shell
cd ~/projects/coolproject
PORT=$(random) npm run dev
curl https://coolproject.localhost
# ✨ it just works like magic
```

I hate having to remember random port numbers. Localproxy solves this problem for me. It runs an embedded Caddy server on ports 80 and 443 with a self-signed certificate, and auto-discovers targets from two sources: docker containers (via EXPOSE fields and labels) and local processes listening on ports, running directly under a given "projects" folder.

Localproxy supports proxying http(s), http/2, http/3 (QUIC) and TCP connections. If more than one service runs on the same port (ex: 2 postgres databases listening on port 5432), the TCP connections MUST use TLS so the proxy can determine the target domain from the SNI. Without a layer 7 protocol involved, that information is otherwise missing (ex: redis with --tls --sni and postgres with ?sslnegotiation=direct&sslmode=require).
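To make the shared-port scenario concrete, here's a sketch of two Postgres containers that the proxy can only tell apart by the SNI the client sends. Service names are illustrative, and the localproxy labels used here are described in the Docker section below:

```yaml
services:
  db-a:
    image: postgres
    labels:
      localproxy.tcpport: "5432"   # both containers proxied on the same port...
  db-b:
    image: postgres
    labels:
      localproxy.tcpport: "5432"   # ...so only TLS SNI (db-a.localhost vs db-b.localhost) can route them
```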

Certain well-known ports on your computer are also checked to detect software you may be running locally outside your regular "projects" folder, like syncthing, so those connections get proxied too.


Sadly, the project requires root privileges on macOS because:

  1. Unlike Linux, macOS runs docker on top of a Linux VM, which makes reaching into docker ports impossible without root access. There's really no good workaround for this that I know of.
  2. macOS, for whatever reason, does not automatically map *.localhost to 127.0.0.1. To allow this mapping, localproxy reaches into /etc/hosts to add entries, simply because that's the easiest alternative once we already need root.
  3. There's a bug that prevents listening on privileged ports on specific interfaces without root. That's right, you can listen on port 80 on 0.0.0.0 but not on 127.0.0.1.

If you don't need docker integration on macOS, you can run without root after a small setup to allow resolving the .localhost and .internal TLDs.

```shell
brew install dnsmasq
echo 'address=/.localhost/127.0.0.1' >> $(brew --prefix)/etc/dnsmasq.conf
echo 'address=/.internal/127.0.0.1' >> $(brew --prefix)/etc/dnsmasq.conf
sudo brew services start dnsmasq
sudo CGO_ENABLED=0 go run ./cmd/localproxyd --watch ~/myprojects
```
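If *.localhost names still fail to resolve after this, note that macOS only consults dnsmasq when a resolver points at it. A common fix (an assumption about a stock Homebrew setup, not a step documented by the project) is adding per-TLD resolver files:

```
# /etc/resolver/localhost — and an identical /etc/resolver/internal
# (assumes dnsmasq is listening on 127.0.0.1:53, the default when started via sudo brew services)
nameserver 127.0.0.1
```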

Then navigate to the dashboard at https://localhost.

Flags

  • --watch Adds a folder to watch for processes. Local process watching is disabled if no folders are watched.
  • --https-redirect Force https redirects for all created endpoints (default: false)
  • --log-level Log level for the caddy server. error, info, debug (default: info)
  • --trace-process-logs Show logs from external processes on the dashboard using dtrace on macOS. Requires disabling SIP. This WILL eventually lock up your system badly enough for you to hold down the power button for a restart unless you're on Tahoe (default: false)

Local process example

```shell
cd ~/myprojects/project1
# run a webserver on any port
npm run dev
# localproxy uses the path passed to --watch to
# automatically detect processes running in its sub-folders
curl https://project1.localhost
```

Docker example with labels

Proxying traffic to docker containers works without exposing ports using -p. Instead you can use the following labels to configure the proxy behavior:

  • localproxy.subdomain controls the [subdomain].internal domain
  • localproxy.port 443/80 -> $port (used for webservers)
  • localproxy.tcpport $tcpport -> $tcpport (used for tcp servers that listen on non-web ports)
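As a sketch, a web app serving on container port 3000 could be wired up like this (the service name, image, and subdomain are illustrative, and this assumes compose passes the labels through unchanged):

```yaml
services:
  myapp:
    image: myapp:latest             # hypothetical image
    labels:
      localproxy.subdomain: myapp   # reachable at https://myapp.internal
      localproxy.port: "3000"       # 443/80 -> 3000
```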

To reach localproxy URLs from within containers, you need to map the alternative .internal TLD to host-gateway using --add-host. .localhost specifically only points to 127.0.0.1 as per its RFC, so from inside a container it would resolve to the container itself rather than the host.

Using docker run:

```shell
docker run --add-host=test.internal:host-gateway alpine/curl https://test.internal
```

Using docker-compose:

```yaml
services:
  curl:
    image: alpine/curl
    command: curl https://test.internal
    extra_hosts:
      - test.internal:host-gateway
```

Postgres

```shell
docker run --name postgres -l localproxy.tcpport=5432 -e POSTGRES_HOST_AUTH_METHOD=trust postgres
```

Two requirements for the connection:

  1. sslmode has to be require so the client doesn't attempt a plaintext connection
  2. sslnegotiation has to be direct to use TLS directly instead of STARTTLS

```shell
psql "postgresql://postgres@postgres.localhost/postgres?sslmode=require&sslnegotiation=direct"
```

Redis

```shell
docker run --name myredis -l localproxy.tcpport=6379 redis
```

Connect to it from the host without exposing ports. Sadly, redis-cli doesn't seem to use the local trust chain on macOS. You may be able to omit --insecure on other platforms.

```shell
redis-cli --tls --insecure -h myredis.localhost --sni myredis.localhost
# If you want to explicitly verify the certificate
redis-cli --tls --cacert "$(mkcert -CAROOT)/rootCA.pem" -h myredis.localhost --sni myredis.localhost
```

Logs

By default, localproxy tries to capture stdout logs from local processes as well as docker containers. However, on macOS this requires you to partially turn off SIP in recovery mode with:

```shell
csrutil enable --without dtrace
```

If you're a yabai user, you'll want to combine this with the flags yabai requires. Relevant documentation.

```shell
csrutil enable --without dtrace --without fs --without debug --without nvram
```
