Help Troubleshooting Grist behind Nginx Reverse Proxy (Docker) - 502 Bad Gateway

Hi everyone,

I’m trying to set up Grist Omnibus behind an Nginx reverse proxy to enable HTTPS. I’ve managed to get my other service (XWiki) working with this setup, but Grist is proving to be a challenge. I seem to be stuck on the final step and would appreciate any guidance.

My Environment:

  • OS: Ubuntu Server

  • Grist: gristlabs/grist-omnibus:latest running via Docker Compose.

  • Nginx: Running in a separate Docker container with network_mode: host.

  • Certificates: Self-signed certs generated with mkcert.

  • Goal: Access Grist at https://grist.internal.lan (hostname is resolved via /etc/hosts on the server).

After some initial errors (“Please define URL” and “HTTPS must be auto, external, or manual”), I’ve settled on the following docker-compose.yml for Grist. This configuration allows the container to start and run successfully.

services:
  grist:
    image: gristlabs/grist-omnibus:latest
    ports:
      - "8484:80"   # host port 8484 -> container port 80 (the omnibus's internal Traefik)
    environment:
      - URL=https://grist.internal.lan
      - HTTPS=external
      - TEAM=myteam
    volumes:
      - ./data:/persist   # Grist's persistent data directory

With Grist running, I can confirm it’s alive on the host machine. A curl to its HTTP port results in the expected redirect, which tells me it’s waiting for a secure proxy:

# On the host server
$ curl -I http://127.0.0.1:8484
HTTP/1.1 308 Permanent Redirect
Location: https://localhost/

The Problem:

When I try to access https://grist.internal.lan through my Nginx proxy, I get a 502 Bad Gateway error.

Here is my nginx.conf file. The block for XWiki on port 8080 works perfectly, but the Grist block does not.

events {}

http {
  # Redirect all HTTP to HTTPS
  server {
    listen 80;
    server_name grist.internal.lan xwiki.internal.lan;
    return 301 https://$host$request_uri;
  }

  # Grist Proxy (NOT WORKING)
  server {
    listen 443 ssl http2;
    server_name grist.internal.lan;

    ssl_certificate      /certs/grist.internal.lan.pem;
    ssl_certificate_key  /certs/grist.internal.lan-key.pem;

    location / {
      proxy_pass http://127.0.0.1:8484;

      proxy_set_header Host              $host;
      proxy_set_header X-Real-IP         $remote_addr;
      proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;

      # WebSocket support
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "Upgrade";
    }
  }

  # XWiki Proxy (WORKING)
  server {
    listen 443 ssl http2;
    server_name xwiki.internal.lan;
    # ... (config is similar, proxy_pass to http://127.0.0.1:8080)
  }
}

When the 502 error occurs, the Nginx error log shows the following:

[error] 24#24: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.X, server: grist.internal.lan, request: "GET / HTTP/2.0", upstream: "http://127.0.0.1:8484/", host: "grist.internal.lan"

This is confusing, because a curl from the host can connect, but Nginx (also on the host network) gets a “Connection refused”.

Am I missing a specific header or configuration that this version of Grist Omnibus requires? Any insight would be greatly appreciated.

Thanks!

Huh. Thanks for the very detailed explanation of your setup. The fact that the XWiki proxy_pass works and the Grist one doesn’t narrows things down, perhaps. If you were to set GRIST_HOST=0.0.0.0 in the environment for grist in docker-compose.yml, that would rule out binding problems. It could be that Grist is listening specifically on grist.internal.lan, which is maybe not loopback but the LAN network interface? You could also try pointing proxy_pass at http://grist.internal.lan:8484.
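
Something like this in your docker-compose.yml, just to illustrate (untested on my side):

services:
  grist:
    image: gristlabs/grist-omnibus:latest
    ports:
      - "8484:80"
    environment:
      - URL=https://grist.internal.lan
      - HTTPS=external
      - TEAM=myteam
      - GRIST_HOST=0.0.0.0   # listen on all interfaces inside the container
    volumes:
      - ./data:/persist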

Hi Paul,

Thanks again for your suggestions. I’ve applied both of them, but unfortunately, it’s still not working. However, the debug results are very interesting and point to a specific behavior.

Here is a summary of the steps I took and the results:

1. Updated Grist’s docker-compose.yml:
I added GRIST_HOST=0.0.0.0 to the environment variables and restarted the container. The container starts successfully.

  • Grist Logs: The Grist logs show it’s running and repeatedly trying to check a dex configuration, but there are no fatal errors. The key line seems to be Calling traefik […] --entrypoints.web.http.redirections.entrypoint.to=websecure, confirming it’s set up to redirect HTTP to HTTPS.
grist-app | Calling traefik [...]
grist-app |   '--entrypoints.web.http.redirections.entrypoint.to=websecure'
grist-app | ]
grist-app | ...
grist-app | Welcome to Grist.

2. Updated Nginx proxy_pass:
I changed the proxy_pass directive to proxy_pass http://gristpavicon.com.br:8484; and reloaded Nginx.

3. Diagnostic Results:

  • Nginx Access Log: My Nginx error log is now clean, but the access log shows an infinite loop of 301 redirects. This confirms Nginx is receiving a redirect instruction and passing it to the browser.
127.0.0.1 - - [24/Jun/2025:15:15:00 +0000] "GET / HTTP/2.0" 301 17 "-" "Mozilla/5.0..."
127.0.0.1 - - [24/Jun/2025:15:15:00 +0000] "GET / HTTP/2.0" 301 17 "-" "Mozilla/5.0..."
(repeats indefinitely)

  • Direct curl Test: This is the most telling result. When I curl from the host using the exact same URL as the proxy_pass, Grist responds with a 308 redirect. This shows the host can connect to Grist, but Grist insists on redirecting to HTTPS.

$ curl -I http://gristpavicon.com.br:8484
HTTP/1.1 308 Permanent Redirect
Location: https://gristpavicon.com.br/

It seems we have a classic redirect loop. Even though my Nginx config sends the X-Forwarded-Proto: https header, Grist doesn’t seem to be honoring it when Nginx proxies the request to http://gristpavicon.com.br:8484.
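
One way I can think of to isolate this from the host (assuming my reading of Traefik’s forwarded-headers handling is right, i.e. that it ignores these headers from untrusted client IPs) is to send the header by hand and see whether the 308 disappears:

# If Traefik trusted this client's IP, the redirect should go away;
# if not, it ignores X-Forwarded-Proto and still answers 308.
$ curl -I -H "X-Forwarded-Proto: https" http://gristpavicon.com.br:8484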

My Nginx container is running in network_mode: host, which I suspect might be the root cause of this communication issue.

Given these results, is there another setting I’m missing to make Grist honor the X-Forwarded-Proto header in this specific setup, or is the best path forward to re-architect my Nginx container to use a shared Docker network instead of network_mode: host?

Thanks

Ah, I found an extra piece in the grist-omnibus README:

If you run the omnibus behind a separate reverse proxy that terminates SSL, then you should set HTTPS=external, and set an additional environment variable TRUSTED_PROXY_IPS to the IP address or IP range of the proxy. This may be a comma-separated list, e.g. 127.0.0.1/32,192.168.1.7. See Traefik’s forwarded headers.
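
In compose terms that would be roughly the following (the value here is just the README’s example; it should match whatever address Nginx actually connects from):

    environment:
      - URL=https://grist.internal.lan
      - HTTPS=external
      - TRUSTED_PROXY_IPS=127.0.0.1/32   # IP or range of the reverse proxy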


Still didn’t work.

After a long debugging session, I’ve gathered a lot of information but have reached a deadlock. I have an idea but would like some opinion on it before implementing.

I want to run Grist Omnibus on a server, accessible via HTTPS through an Nginx reverse proxy at a custom domain like grist.example.com.

My Current Setup That Causes the Loop:

  1. Grist docker-compose.yml:
  • HTTPS=external

  • URL=https://grist.example.com

  • GRIST_HOST=0.0.0.0 (as suggested by Paul)

  • TRUSTED_PROXY_IPS=127.0.0.1/32 (as suggested by Paul)

  • The Grist container is running and exposes port 8484.

  2. Nginx docker-compose.yml:
  • Nginx is running in a separate container with network_mode: host.
  3. Nginx nginx.conf: (the same config shown earlier)

Why it’s looping (My Diagnosis):

  • A curl -I http://grist.example.com:8484 from the host machine confirms that Grist is alive but always responds with a 308 redirect to HTTPS.

  • The Nginx access log shows it receives this redirect and passes it to my browser.

  • My Conclusion: Grist’s internal Traefik proxy is not honoring the X-Forwarded-Proto header in this specific architecture (network_mode: host). It always sees the proxy_pass request as plain HTTP and forces a redirect, ignoring that the original connection was secure.

JordiGH’s comment that “it wasn’t working correctly for me either when I tried to setup external TLS termination” strongly suggests this might be an unpatched issue or a very brittle aspect of the Omnibus version.

POSSIBLE SOLUTION

Since trying to fix the current setup has failed, I have formulated a plan to rebuild the architecture following standard Docker best practices. This approach bypasses the problematic HTTPS=external logic entirely.

The plan is to use a single docker-compose.yml file to manage both Nginx and Grist together on a shared Docker network.

# ... (events, http, redirect block) ...
server {
  listen 443 ssl http2;
  server_name grist.example.com;
  # ... (ssl certs) ...

  location / {
    # Proxy to Grist using its service name on the internal Docker network
    proxy_pass http://grist-app:80;
    # ... (all standard X-Forwarded-* headers) ...
  }
}

Why I imagine this might have a chance to work

  1. It avoids the bug: By setting HTTPS=off, I can completely disable Grist’s internal proxy and its problematic redirect logic.

  2. Reliable Communication: Nginx and Grist will communicate directly and reliably over the private Docker network (app-net) using service names (grist-app). This eliminates all issues related to localhost, network_mode: host, and IP addresses.

  3. Nginx has full control: Nginx handles 100% of the SSL termination and proxying. Grist’s only job is to serve content, which is a much more stable configuration.
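
For concreteness, the unified compose file I have in mind would look something like this (grist-app, nginx-proxy, and app-net are my placeholder names; untested):

services:
  grist-app:
    image: gristlabs/grist-omnibus:latest
    environment:
      - URL=https://grist.example.com
      - TEAM=myteam
      # HTTPS=off is my proposal above; note the startup error I saw earlier
      # ("HTTPS must be auto, external, or manual") may mean this value is rejected.
      - HTTPS=off
    volumes:
      - ./data:/persist
    networks:
      - app-net

  nginx-proxy:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/certs:ro
    networks:
      - app-net

networks:
  app-net:
    driver: bridge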

QUESTION TO GRIST TEAM

Before I go ahead and rebuild my setup with this unified architecture, does this plan seem sound to you? Is there any reason why running Grist Omnibus with HTTPS=off behind a reverse proxy would not be a recommended or supported configuration?

Just one small point here, I had meant to suggest the 0.0.0.0 change and this change as two distinct things you could try. The 0.0.0.0 should make Grist listen on all network interfaces if it isn’t already. If you connect at grist.example.com I’m not sure what IP address you’ll have, and it would be important to match that with TRUSTED_PROXY_IPS. @dmitry-grist may know more about the TRUSTED_PROXY_IPS issue.

If you suspect Traefik is not honoring something, I’d focus on the TRUSTED_PROXY_IPS setting. It goes directly into Traefik’s entryPoints.web.forwardedHeaders.trustedIPs setting, documented in Traefik’s EntryPoints documentation. I imagine that 127.0.0.1/32 isn’t the right value, since nginx runs in a separate container. It should be an IP or a range that includes the address from which Nginx connects to Traefik.
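
A quick way to check which address Nginx connects from, assuming the container is named nginx-proxy (note that under network_mode: host there is no per-container IP; connections will come from a host address instead):

# Per-network IP addresses of the proxy container (empty under network_mode: host)
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' nginx-proxy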

Here is a final, detailed summary of my troubleshooting efforts. Despite trying multiple architectures, including the best practices recommended here and by both ChatGPT and Google AI Studio, the issue persists. I am now convinced this is a bug or an intractable issue within the Grist Omnibus image itself, as JordiGH initially suspected.

Summary of Actions Taken:

After restoring the server to a clean snapshot, we abandoned the old, separated setup and built a new, unified architecture from scratch, following standard Docker best practices.

1. Unified Architecture:

  • A single docker-compose.yml file was created to manage both Nginx and Grist.

  • A private bridge network (app-net) was defined in the compose file, ensuring both containers could communicate directly and reliably via their service names (grist-app and nginx-proxy).

  • The problematic network_mode: host on Nginx was removed. Nginx was configured to expose ports 80 and 443 in the standard way.

  • All data (Grist’s database, XWiki’s data, etc.) and configurations were moved to a single, organized directory structure (~/sistemaspavicon).

2. Network Connectivity Test (Successful):

  • We executed a shell inside the running nginx-proxy container.

  • From inside Nginx, we ran curl http://grist-app:80.

  • Result: The command was successful. Grist responded with a Moved Permanently redirect. This confirms that the Docker networking is sound and that Nginx can successfully resolve and connect to the Grist container (the exact commands are below).
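
For reference, the check was along these lines (names as defined in the compose file):

# Open a shell in the proxy container and probe Grist by its service name
$ docker exec -it nginx-proxy sh
/ # curl -sI http://grist-app:80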

The Final Deadlock: The TRUSTED_PROXY_IPS Failure

Based on Dmitry’s feedback, the final attempt was to make Grist’s internal Traefik proxy explicitly trust Nginx within our new, clean architecture.

1. Grist Configuration:

  • HTTPS was set to external (as required by Grist).

  • URL was set to https://grist.example.com.

  • We found the internal IP of the nginx-proxy container on the app-net network (e.g., 172.22.0.5).

  • We set TRUSTED_PROXY_IPS=172.22.0.5/32 in Grist’s environment variables.

2. Nginx Configuration:

  • proxy_pass was set to http://grist-app:80;.

  • All X-Forwarded-* headers (including X-Forwarded-Proto) were correctly set.

3. The Resulting Error:
Despite this “perfect” configuration, the result was the same:

  • A 502 Bad Gateway error in the browser.

  • Nginx Error Logs: connect() failed (111: Connection refused) while connecting to upstream, client: …, server: grist.example.com, request: …, upstream: "http://grist-app:80/".

Final Conclusion:
Even in a pristine, textbook Docker network environment, and even when explicitly telling Grist’s Traefik to trust the Nginx proxy’s internal IP address, Grist still fails to initialize or accept the connection correctly. It seems the HTTPS=external logic is fundamentally broken or has an unhandled edge case, causing it to refuse connections from the proxy instead of honoring the forwarded headers.

At this point, I have exhausted all logical configuration paths. I will be reverting to my pre-HTTPS snapshot to have a functional system. I believe this issue requires investigation from the Grist development team.

I can’t offer any direct fixes to the problems that you’ve encountered, as you seem to have a much better functional understanding of the inner workings than I do.

I had similar issues trying to get Grist Omnibus and NPM running together. I ended up using Caddy and Grist core and have a running instance available at grist.example.com. I’ve also configured Caddy with Authelia to enforce authentication, passing the email header to Grist so each user is signed into their own Grist account. I still feel like I’m holding on by a shoestring (understanding-wise), but it’s working.
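
In case it’s useful to anyone, the relevant part of my Caddyfile looks roughly like this (the domain and the Authelia service address are placeholders; on the Grist side I point GRIST_FORWARD_AUTH_HEADER at the copied email header):

grist.example.com {
    # Ask Authelia whether the request is authenticated, and copy
    # the user's identity headers so Grist can sign them in
    forward_auth authelia:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Email
    }
    reverse_proxy grist:8484
}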

I can’t offer any direct fixes to the problems that you’ve encountered, as you seem to have a much better functional understanding of the inner workings than I do.

You mean ChatGPT and Google Gemini seem to have that understanding lol

I tried a lot of things, including the suggestions here, but they failed. To eliminate any previous misconfigurations, I have restored my server to a clean snapshot and am starting fresh.

My current plan:

  1. I have a single Grist Omnibus container running via its own docker-compose.yml.
  2. It is working and accessible on the host machine at http://localhost:8484.
  3. Separately, I will run a standard Nginx container to act as the reverse proxy for https://grist.example.com.

As I understand it, based on Dmitry’s last comment, I need to:

  1. Configure Grist with HTTPS=external and URL=https://grist.example.com.
  2. Configure Nginx to proxy_pass to Grist’s exposed port (e.g., http://localhost:8484).
  3. Crucially, find the correct IP address of the Nginx container (as seen by Grist) and set it in the TRUSTED_PROXY_IPS environment variable in Grist’s configuration.

Question 1: How do I reliably determine the correct IP for TRUSTED_PROXY_IPS in this architecture? Is it the Docker gateway IP (found with docker network inspect bridge), and is this IP stable enough for a production setting?
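
For reference, the command I had in mind for reading that gateway IP is:

# Gateway of the default bridge network (often 172.17.0.1 on a stock install)
$ docker network inspect bridge -f '{{(index .IPAM.Config 0).Gateway}}'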

Question 2: Is this “find the gateway IP and trust it” method the simplest and recommended way forward, or is there a more straightforward approach I am missing for this basic architecture?