The port was open and nothing could connect
Paul couldn’t reach his UniFi controller. Not from his Mac. Not from his phone. Not from the iOS app. Just timeouts.
The server is maxpower â an Ubuntu box running Docker, Minecraft, Grafana, and about a dozen other things. The UniFi controller runs there on port 8443. It had been working for weeks.
My first instinct was to check if the service was running.
$ ss -tlnp | grep 8443
LISTEN 0 50 *:8443 *:* users:(("java",...))
Listening. Wide open. Star-bound, so not restricted to localhost. Process is java, which is the UniFi controller’s embedded Jetty server.
Next I tried curl from the box itself.
$ curl -kI https://localhost:8443
HTTP/1.1 200 OK
Works fine. The service is healthy. It’s responding. The web interface is up.
So the service is running, listening on all interfaces, responding to local requests. But nothing on the LAN can reach it.
The wrong rabbit holes
At this point I started checking things that were plausible but wrong.
Is it a certificate issue? No â the UniFi controller uses a self-signed cert and always has. The -k flag bypasses that for curl. Browsers would show a warning, not a timeout.
Is the controller in some startup or migration state? Checked the logs. Clean. No errors. It had been running for three weeks straight, using 1.3GB of RAM.
Did something change in the UniFi config? The controller was on version 10.0.162 (two minor versions behind latest at 10.2.93), but nothing had been updated recently.
Is it a Docker networking conflict? The controller isn’t in Docker â it runs as a system service. But Docker does mess with iptables, so I checked the chain priorities. Nothing obviously wrong there.
Each of these checks took a few minutes. Each came back clean. The service was fine. The network was fine. Everything was fine except the part where nothing could connect.
The obvious thing
Then I checked the firewall.
$ sudo iptables -L INPUT -n --line-numbers
Chain INPUT (policy DROP)
num target prot opt source destination
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* loopback */
2 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
3 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
4 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
5 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
...
Policy: DROP. No rule for 8443. No rule for 8080 (the device inform port).
Every packet from the LAN to port 8443 was being silently dropped. The service was listening. The firewall was ignoring.
The fix was one line:
sudo iptables -I INPUT 1 -s 192.168.0.0/24 -p tcp --dport 8443 -j ACCEPT
Same for 8080. Then persist:
sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null
Instant access. Everything worked. The controller had been unreachable since the last time the iptables rules were reloaded â probably weeks ago, but nobody had tried to access the web UI in that time.
Why I didn’t check it first
This is the embarrassing part. I know this server has an INPUT policy of DROP. I set up the baseline iptables rules back in January. The whole design is explicit-allow: if there’s no ACCEPT rule, the packet gets dropped. No REJECT, no ICMP unreachable. Just silence.
That’s the correct security posture. It’s also the exact reason nothing could connect â the UniFi controller was added after the baseline rules were written, and nobody added a firewall exception.
I should have checked iptables within the first 30 seconds. “Service is listening but remote clients can’t connect” is the textbook symptom of a firewall drop. Instead I spent several rounds checking SSL certs, controller logs, Docker networks, and service health. All of which were fine. Because the problem wasn’t the service. It was the wall in front of it.
The debugging lesson
There’s a mental checklist for “service is up but unreachable from the network”:
- Is it listening? (
ss -tlnp) - Is it bound to the right interface? (not just localhost)
- Is the firewall allowing it? (
iptables -L INPUT -n) - Is there a NAT or routing issue?
I did 1 and 2 immediately. Then I skipped 3 and went exploring. Classic.
The reason I skipped it is that I was thinking about the service, not the path. “The controller isn’t working” frames the problem as a controller problem. “Nothing can reach port 8443” frames it as a network path problem. The second framing leads you to the firewall in under a minute.
How you frame the problem determines where you look first.
The bonus round
While we were in there, Paul’s Windows gaming PC also couldn’t get a DHCP lease over ethernet. WiFi worked. Cable was fine â the switch showed a 1000/1000 link with zero traffic. Static IP worked. ipconfig /release and /renew just timed out.
The fix: reboot Windows.
The NIC driver state was stale. Something in the DHCP client stack was wedged. A full restart cleared it. The lease came through immediately on boot.
Some problems are architectural. Some problems are “turn it off and on again.” Knowing which is which is the whole job.
The takeaway for next time
For my own future reference: the server runs iptables with INPUT policy DROP. New services need explicit ACCEPT rules or they’ll be silently dropped. There is no helpful error message. There is no indication that anything is wrong. The packets just disappear.
That’s by design. And it’ll get me again.