
Acquiring Real Client IP

How to Get Remote IP in Network Programming

  1. If the protocol is HTTP/1.1, reverse proxies and load-balancing devices (such as ULB7) generally support the X-Forwarded-For header and will add a header like X-Forwarded-For: 114.248.238.236 to the request they forward for the user. The web application only needs to parse this header to obtain the user's real IP.

  2. If it is a custom TCP or UDP protocol, the client can carry its own IP in a big-endian unsigned integer field of the protocol; the server parses this field and calls inet_ntoa(3) to obtain the IPv4 dotted-decimal string.

  3. If the protocol in 2 cannot carry the client's IP, the server can use the getpeername(2) system call on the socket to obtain the remote address. The rest of this article focuses on this method; a short Go sketch of the three approaches is shown below.
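To make these approaches concrete, here is a minimal Go sketch (the helper names and the uint32 field layout are illustrative only, not part of any existing protocol): method 1 parses the X-Forwarded-For header, method 2 decodes a big-endian IPv4 field as inet_ntoa(3) would, and method 3 reads the peer address of an accepted connection, which the Go runtime obtains from the kernel via getpeername(2).

package main

import (
    "encoding/binary"
    "fmt"
    "net"
    "net/http"
    "strings"
)

// Method 1: take the client IP from X-Forwarded-For; the header may hold a
// comma-separated proxy chain, and the left-most entry is the original client.
func clientIPFromHeader(r *http.Request) string {
    if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
        return strings.TrimSpace(strings.Split(xff, ",")[0])
    }
    host, _, _ := net.SplitHostPort(r.RemoteAddr) // fallback: the TCP peer address
    return host
}

// Method 2: decode a big-endian uint32 carried in a custom protocol field
// (the field layout here is hypothetical); this mirrors inet_ntoa(3).
func clientIPFromField(field uint32) string {
    buf := make([]byte, 4)
    binary.BigEndian.PutUint32(buf, field)
    return net.IP(buf).String()
}

// Method 3: ask the kernel for the peer address of the connection, which
// corresponds to getpeername(2).
func clientIPFromConn(conn net.Conn) string {
    host, _, _ := net.SplitHostPort(conn.RemoteAddr().String())
    return host
}

func main() {
    // 0x72F8EEEC is 114.248.238.236, the example address used above.
    fmt.Println(clientIPFromField(0x72F8EEEC))
}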

Issue Encountered with Kubernetes LoadBalancer ULB4

UK8S uses ULB4 and ULB7 to implement Services of type LoadBalancer. ULB7 only supports the HTTP protocol and supports the X-Forwarded-For header by default, so a web service behind it can easily obtain the client's real IP. For pure layer-4 services accessed through ULB4, however, the backend may have to rely on getpeername(2) to obtain the client's real IP. Because kube-proxy currently works in iptables mode, the network library of the backend Pod application cannot get the correct address when it calls getpeername(2). The following example illustrates the problem.

Deploy a simple web server and access it through a LoadBalancer ULB4 in external network (outer) mode.

apiVersion: v1
kind: Service
metadata:
  name: ucloud-nginx
  labels:
    app: ucloud-nginx
  annotations:
    service.beta.kubernetes.io/ucloud-load-balancer-type: "outer"
    service.beta.kubernetes.io/ucloud-load-balancer-vserver-method: "source"
spec:
  type: LoadBalancer
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 12345
  selector:
    app: ucloud-nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
  labels:
    app: ucloud-nginx
spec:
  containers:
  - name: nginx
    image: uhub.surfercloud.com/uk8s/uk8s-helloworld:stable
    ports:
    - containerPort: 12345

After deployment, the Service status is as shown below.

# kubectl get svc ucloud-nginx
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
ucloud-nginx   LoadBalancer   172.17.179.247   117.50.3.206   80:43832/TCP   112s

You can access the service through EIP 117.50.3.206.

The source code of the service is very simple; it only returns the client address, as shown below.

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "net/http/httputil"
)

func main() {
    log.Println("Server hello-world")
    http.HandleFunc("/", AppRouter)
    log.Fatal(http.ListenAndServe(":12345", nil))
}

// AppRouter logs the raw request and replies with the client address
// reported by the connection (r.RemoteAddr).
func AppRouter(w http.ResponseWriter, r *http.Request) {
    dump, _ := httputil.DumpRequest(r, false)
    log.Printf("%q\n", dump)
    io.WriteString(w, fmt.Sprintf("Guest come from %v\n", r.RemoteAddr))
}

Access the service from a browser on the public network, as shown below.

The result shows that the client address seen by the application is the internal IP of a cloud host in the cluster, which is obviously not the user's real IP.

Explanation of the Cause

After the LoadBalancer is successfully created, ULB4's VServer uses every cloud host node in the UK8S cluster as an RS node, and the RS port is the port declared by the Service (note that it is not the NodePort). After ULB4 forwards traffic to one of the RS nodes, that node forwards the traffic to a backend Pod according to the iptables rules generated on the host by kube-proxy, as shown below.

In the figure, ULB4 first forwards the traffic to Node1, and according to the iptables DNAT rules on Node1, the traffic is then forwarded to the Pod on Node2.

Note that before Node1 forwards the IP packet to Node2, it performs a MASQUERADE operation on the packet. The rule is as follows.

-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE

This rule changes the source address to the local address of Node1, that is, 10.9.31.26.

Why is an SNAT operation required for outbound packets?

The reason is simple; refer to the figure below. After the Pod on Node2 processes the request, it needs to send a response packet. Without SNAT, the source address of the request packet received by the Pod would be the client's IP, and the Pod would send its response directly to that client IP. But the client never sent a request to the Pod's IP; when it receives an IP packet whose source is the Pod, it will most likely discard it.

     client
        \ ^
         \ \
          v \
         ulb4
           \ ^
            \ \
             v \
 node 2 <--- node 1
  | ^   SNAT
  | |   --->
  v |
endpoint

How to Get the Source IP?

If the Pod needs to know the real source address of the client, set the Service's spec.externalTrafficPolicy to Local, as shown below.

apiVersion: v1
kind: Service
metadata:
  name: ucloud-nginx
  labels:
    app: ucloud-nginx
  annotations:
    service.beta.kubernetes.io/ucloud-load-balancer-type: "outer"
    service.beta.kubernetes.io/ucloud-load-balancer-vserver-method: "source"
spec:
  type: LoadBalancer
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 12345
  selector:
    app: ucloud-nginx
  externalTrafficPolicy: Local
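The same change can also be made programmatically. Below is a minimal, illustrative client-go sketch, assuming the Service is named ucloud-nginx in the default namespace and that a kubeconfig exists at the default path; applying the YAML above achieves the same result.

package main

import (
    "context"
    "log"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the kubeconfig from its default location (~/.kube/config).
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    svcClient := clientset.CoreV1().Services("default")
    svc, err := svcClient.Get(context.TODO(), "ucloud-nginx", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }

    // Switch the Service to Local so the client source IP is preserved.
    svc.Spec.ExternalTrafficPolicy = corev1.ServiceExternalTrafficPolicyTypeLocal
    if _, err := svcClient.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
        log.Fatal(err)
    }
    log.Println("externalTrafficPolicy set to Local")
}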

After redeploying the Service and accessing it from a browser again, the Pod correctly obtains the browser's access IP.

     client
        \ ^
         \ \
          v \
         ulb4
        ^ /   \
       / /     \   VServer health check fails
      / v       X
  node 2       node 1
   ^ |
   | |
   | v
endpoint

On the other nodes, which are not running any Pod backing the Service, the health check from the ULB VServer fails because of the DROP rule installed by iptables. This ensures that user requests are never sent to those nodes; every request lands on a node with a local Pod and can be answered correctly. Since traffic no longer needs to be forwarded to another node, kube-proxy does not perform SNAT, and the Pod sees the client's real source IP.