
In the end not even "service network restart" did the trick; we restarted the whole server, and from then on the packets began to arrive at the container. Also, if I checked /proc/net/ip_conntrack, I could see the connection from my customer listed there but not being forwarded to the container, while connections I made myself showed up properly. I could also see that the customer's conntrack entry was never closed: new packets kept arriving before the timeout and refreshed it.
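For reference, this is roughly how such an entry can be inspected and removed with the conntrack utility from conntrack-tools; the port 5060 and the source address are placeholders, since the real values are not named here:

```sh
# List conntrack entries for the published UDP port (5060 is a placeholder).
# Depending on the kernel, the same data appears in /proc/net/ip_conntrack
# or /proc/net/nf_conntrack.
conntrack -L -p udp --dport 5060

# Delete the stale entry for the customer's flow (placeholder source IP), so
# the next packet is evaluated by the NAT rules again instead of matching the
# old, host-bound flow.
conntrack -D -p udp --orig-src 203.0.113.10 --dport 5060
```

Deleting the stale entry (or letting it expire by pausing the sender for longer than the UDP conntrack timeout) should in principle have had the same effect as the reboot.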

I don't have any specific network configuration apart from exposing the UDP ports, with target and published ports the same, and the containers are deployed across three docker swarm nodes in global mode. I could see with tcpdump that the packets were reaching the host, but a tcpdump inside the container showed nothing. I then added two iptables rules to log inbound UDP packets: one at the very beginning of the INPUT chain and one at the beginning of the FORWARD chain (sketched below). In syslog, my customer's traffic showed up with IN=eth0 OUT= (i.e. from the INPUT rule), while new connections I made to the same port showed up directly in the FORWARD log with IN=eth0 OUT=docker1. Also, as mentioned in the first post, if I change the source port of the UDP sender, the containers start to receive the traffic. For what it's worth, a similar-looking issue is tracked in Red Hat Bugzilla as Bug 2005733, "conntrack UNREPLIED state for UDP 4789" (closed as a duplicate of bug 1985336, product OpenShift Container Platform).
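Roughly, the logging and capture commands look like this; the port 5060 and the log prefixes are placeholders, not the real values:

```sh
# Log inbound UDP packets for the published port (placeholder 5060) as the
# very first rule of each chain, so syslog shows which chain handles them.
iptables -I INPUT   1 -p udp --dport 5060 -j LOG --log-prefix "UDP-INPUT: "
iptables -I FORWARD 1 -p udp --dport 5060 -j LOG --log-prefix "UDP-FWD: "

# Capture on the host NIC; running the same capture inside the container
# (via docker exec, if tcpdump is installed there) shows whether the packets
# ever make it across the bridge.
tcpdump -ni eth0 udp port 5060
```

Customer traffic that only ever hits the INPUT log (IN=eth0 OUT=) instead of FORWARD is consistent with conntrack still steering that flow to the host itself rather than DNAT-ing it to the container.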

In detail: we containerized a legacy application which listened on a particular UDP port, which only one of our customers used to send data. We prepped up a new server with this app inside docker, using docker-compose.yml for the port mapping, and then just switched the IP addresses. However, from then on only new connections to this port reached the same internal port in the container; our customer's packets were not forwarded.
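For context, the port mapping in the compose file would have looked something like the following; the service name, image, and port number are placeholders, and only the equal target/published ports, the UDP protocol, and the global deploy mode are taken from the description above:

```yaml
version: "3.8"
services:
  legacy-udp:                        # hypothetical service name
    image: registry.example.com/legacy-app:latest   # placeholder image
    deploy:
      mode: global                   # one task on each of the three swarm nodes
    ports:
      - target: 5060                 # port the legacy app listens on (placeholder)
        published: 5060              # published on the host with the same number
        protocol: udp
        # mode: host                 # uncommented, this would bypass the swarm
        #                            # ingress mesh (IPVS + conntrack) entirely
```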
Is it possible that a UDP connection gets stuck in a state where it would not be properly routed to a container, necessitating a server restart?
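One way to test that theory without involving the customer would be to replay their traffic pattern from a fixed source port and watch the conntrack table; the addresses and ports below are placeholders, and the OpenBSD netcat is assumed:

```sh
# Send a datagram from a fixed source port (50000) to the published port
# (5060) on the docker host (placeholder address), mimicking the customer's
# long-lived flow, then look at the corresponding conntrack entry.
echo probe | nc -u -p 50000 -w 1 198.51.100.20 5060
conntrack -L -p udp --dport 5060 --orig-port-src 50000
```

If packets from that source port stop reaching the container after the service is redeployed, while the conntrack entry survives, the stale flow is probably the culprit, and deleting it with conntrack -D should be enough instead of a full server restart.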
