Tailscale Funnel Stuck in “Background Configuration Already Exists” State

:lady_beetle: Tailscale Funnel: Phantom “Background Configuration” Error, Solved

As described in my first post, attempting to enable Funnel consistently returned a phantom error on this node, no matter how thoroughly I cleaned up local state.

  • tailscale funnel on always returns:

    background configuration already exists

  • Local reset fails:

    tailscale serve reset
    invalid localapi request

  • The error survives OS reinstalls, new Node IDs, and complete wiping of all local Tailscale state.

  • I attempted to contact Tailscale by submitting a support ticket, which received no reply.

So I rolled up my sleeves and went to work…

Root Cause

The ServeConfig row is stuck in the control-plane database and can only be reliably cleared by:

  1. the same node-key calling /localapi/v0/serve reset, or

  2. Tailscale staff running an internal manual deletion.

If your binary is ≤ 1.78 (as most distro packages are), the LocalAPI is disabled or broken, so option 1 is impossible.
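You can check whether your daemon's LocalAPI is usable at all by probing its Unix socket directly. A minimal sketch, assuming the default socket path; the `serve-config` endpoint name and the `local-tailscaled.sock` host header are my reading of the client, so treat them as illustrative:

```shell
# Probe tailscaled's LocalAPI over its Unix socket. An old/broken build
# refuses or 404s this, matching the "invalid localapi request" error above.
SOCK=/run/tailscale/tailscaled.sock
if [ -S "$SOCK" ]; then
  # Endpoint name is an assumption; adjust for your client version.
  sudo curl -s --unix-socket "$SOCK" \
    http://local-tailscaled.sock/localapi/v0/serve-config
else
  echo "no tailscaled socket at $SOCK"
fi
```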

The Fix that worked!

1. Nuke every old binary (they were ≤ 1.78)

Ensure all old Tailscale versions (especially those ≤ 1.78) are completely removed.

sudo systemctl stop tailscaled
sudo pkill -9 tailscaled
sudo rm -f /usr/sbin/tailscaled /usr/bin/tailscale /usr/local/bin/tailscale

2. Install current unstable (≥ 1.91) so I have working LocalAPI flags

Install a modern static binary (e.g., ≥ 1.91). This step took a while, as I had to try several builds.

wget https://pkgs.tailscale.com/unstable/tailscale_1.91.65_amd64.tgz
sudo tar -C /usr/local -xzf tailscale_1.91.65_amd64.tgz
# tailscaled goes to /usr/sbin so the systemd unit in step 3 finds it
sudo cp /usr/local/tailscale_1.91.65_amd64/tailscale /usr/bin/
sudo cp /usr/local/tailscale_1.91.65_amd64/tailscaled /usr/sbin/

3. Override systemd to use the new binary forever

Ensure the system uses the new binary consistently.

sudo mkdir -p /etc/systemd/system/tailscaled.service.d
sudo tee /etc/systemd/system/tailscaled.service.d/binary.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/sbin/tailscaled --state=/var/lib/tailscale/tailscaled.state --socket=/run/tailscale/tailscaled.sock --port=41641
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now tailscaled
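A quick sanity check that the drop-in actually took effect (the version line should match the binary you just installed; the fallbacks just keep the snippet safe to paste on a box without systemd or Tailscale):

```shell
# Confirm systemd is launching the new binary, and that the CLI on PATH
# is the new one too.
systemctl show tailscaled -p ExecStart 2>/dev/null || echo "systemd not running"
command -v tailscale >/dev/null 2>&1 && tailscale version || echo "tailscale not on PATH"
```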

4. Create a fresh tailnet and delete the old one

  • Admin console → Settings → Delete tailnet (destroys every stuck row).
    Then re-create the tailnet with another account for a blank slate.

5. Generate a reusable auth-key with a tag that bypasses the ghost row

  • Admin console → Keys → Generate auth-key → Advanced
    Add the tag tag:server (the node is auto-tagged on join, and tagged nodes are exempt from key expiry).
    Copy the single tskey-auth-… string.
  • ACL requirement (must add before creating the key):

Ensure the tag is owned in the ACL:

"tagOwners": {
  "tag:server": ["ola_olu@example.com"]
}
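For context, `tagOwners` sits at the top level of the tailnet policy file, next to `acls`. A minimal sketch; the permissive `acls` entry below is just a placeholder so the file validates, not part of the fix:

```json
{
  // Placeholder rule: allow all traffic; tighten for real deployments.
  "acls": [
    {"action": "accept", "src": ["*"], "dst": ["*:*"]}
  ],
  "tagOwners": {
    "tag:server": ["ola_olu@example.com"]
  }
}
```

(Tailnet policy files are huJSON, so the comment above is legal.)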

On the server, authenticate with the new key:

sudo tailscale up --auth-key=<KEY>

6. Start your gRPC/JSON-RPC service (example: Zebra)

Start your application that will be exposed via Funnel.

cat > zebra.toml <<'EOF'
[network]
listen_addr = "0.0.0.0:8233"
initial_mainnet_peers = []

[rpc]
listen_addr = "0.0.0.0:8232"
debug_force_finished_sync = true
enable_cookie_auth = false
EOF
docker run -d --name zebra -p 8232:8232 -p 8233:8233 \
  -v "$PWD/zebra.toml":/home/zebra/.config/zebrad.toml \
  zfnd/zebra:latest

7. Enable Funnel (success) :rocket:

The Funnel command should now execute without error.

sudo tailscale funnel --bg --https=443 127.0.0.1:8232
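To double-check the mapping took, `tailscale funnel status` prints the active Funnel config (guarded here so the snippet is safe to paste on a box without Tailscale):

```shell
# Show what Funnel is currently serving; it should list the node's
# https endpoint proxying to 127.0.0.1:8232.
command -v tailscale >/dev/null 2>&1 \
  && sudo tailscale funnel status \
  || echo "tailscale not on PATH"
```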

8. Test from the internet (not from the server)

curl -X POST https://<node>.<tailnet>.ts.net:443 \
  -d '{"jsonrpc":"2.0","method":"getblockcount","id":1}' \
  -H 'Content-Type: application/json'
# → {"jsonrpc":"2.0","id":1,"result":258629}

What I learned from this stubborn issue

  1. “background configuration already exists” points at a stuck control-plane row, not a local-files problem.

  2. If the LocalAPI is disabled, serve reset will most likely not work, so use an auth-key on a fresh tailnet.

  3. Distro packages are as ancient as the Roman Empire; rely on static/unstable installs for the latest features/fixes.

  4. The systemd override keeps the new binary in use across reboots; no manual intervention required so far as I can tell.

  5. Lastly, once the row is purged, Funnel will most likely work.

There’s also a one-liner health check you can run on the server:

curl http://127.0.0.1:<local-port> -d '{"jsonrpc":"2.0","method":"getblockcount","id":1}' -H 'Content-Type: application/json'

If that returns JSON, the local service is healthy, and Funnel is exposing it to the world at https://<node>.<tailnet>.ts.net:443
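If you want that check in a cron job or readiness probe, a tiny wrapper is enough. A minimal sketch; the `looks_healthy` helper is mine, not a Tailscale or Zebra tool — it just greps the response for a `"result"` field:

```shell
#!/usr/bin/env sh
# looks_healthy: succeeds if a JSON-RPC response body contains a "result"
# field, i.e. the RPC round-trip worked.
looks_healthy() {
  printf '%s' "$1" | grep -q '"result"'
}

# Canned response for illustration; in a real probe, replace with:
#   resp=$(curl -s http://127.0.0.1:<local-port> -d '...' -H 'Content-Type: application/json')
resp='{"jsonrpc":"2.0","id":1,"result":258629}'
if looks_healthy "$resp"; then
  echo "tunnel healthy"
else
  echo "tunnel unhealthy"
fi
```

With the canned response above, this prints `tunnel healthy`.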

Let me know if this works for you if you ever get stuck with a similar issue. And, if you tried something else, please feel free to share that solution as well!
