Java dropping packets inside a Kubernetes pod
I'm seeing strange intermittent timeouts on HTTP calls made from a pod on k8s. The pod runs a Java application that makes HTTP calls to other nodes, and even back to itself on its own local IP. Every day we go through a window of roughly 5-10 minutes during which some HTTP calls time out, even though the remote server being called is available. Within these windows the timeouts are intermittent, regardless of whether the HTTP call targets another pod or loops back to the pod's own local IP.
To illustrate the problem, here is a log excerpt covering three HTTP calls, together with the corresponding packet capture taken with tcpdump on the node:
Call 1 - success
2020/03/10 22:38:24.471 FINE [jg-es-1][guru.jini.arch.impl.other.HttpImpl.doHttpMethodReactive] # Sending [GET] request to [http://100.96.26.24:8080] --
2020/03/10 22:38:24.475 FINE [jg-async-http-3-4][guru.jini.arch.impl.rest.JaxRsResponse.<init>] # Response to [GET] on [http://100.96.26.24:8080] with RequestId [4545818456] after [3.307892]ms --
2 seconds later... Call 2 - fail
2020/03/10 22:38:26.471 FINE [jg-es-1][guru.jini.arch.impl.other.HttpImpl.doHttpMethodReactive] # Sending [GET] request to [http://100.96.26.24:8080] --
2020/03/10 22:38:27.506 WARNING [jg-async-http-timer-1-1][guru.jini.arch.impl.servicediscovery.ServiceImpl.failedTest] # Failed: java.util.concurrent.TimeoutException: Request timeout to 100.96.26.24:8080 after 1000 ms]
2 seconds later... Call 3 - success
2020/03/10 22:38:28.471 FINE [jg-es-1][guru.jini.arch.impl.other.HttpImpl.doHttpMethodReactive] # Sending [GET] request to [http://100.96.26.24:8080] --
2020/03/10 22:38:28.475 FINE [jg-async-http-3-1][guru.jini.arch.impl.rest.JaxRsResponse.<init>] # Response to [GET] on [http://100.96.26.24:8080] with RequestId [4545818464] after [3.483304]ms --
Here is the tcpdump for the same period, taken from inside the running Docker container:
Call 1
22:38:24.472285 IP wakanda-wep-v024-428bk.45124 > 100.96.26.24.8080: Flags [S], seq 2419932923, win 65535, options [mss 8911,sackOK,TS val 1192127411 ecr 0,nop,wscale 9], length 0
22:38:24.472837 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45124: Flags [S.], seq 331830089, ack 2419932924, win 65535, options [mss 8911,sackOK,TS val 2177727951 ecr 1192127411,nop,wscale 9], length 0
22:38:24.472853 IP wakanda-wep-v024-428bk.45124 > 100.96.26.24.8080: Flags [.], ack 1, win 128, options [nop,nop,TS val 1192127412 ecr 2177727951], length 0
22:38:24.473302 IP wakanda-wep-v024-428bk.45124 > 100.96.26.24.8080: Flags [P.], seq 1:219, ack 1, win 128, options [nop,nop,TS val 1192127412 ecr 2177727951], length 218: HTTP: GET / HTTP/1.1
22:38:24.473711 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45124: Flags [.], ack 219, win 131, options [nop,nop,TS val 2177727952 ecr 1192127412], length 0
22:38:24.475130 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45124: Flags [P.], seq 1:18, ack 219, win 131, options [nop,nop,TS val 2177727954 ecr 1192127412], length 17: HTTP: HTTP/1.0 200 OK
22:38:24.475140 IP wakanda-wep-v024-428bk.45124 > 100.96.26.24.8080: Flags [.], ack 18, win 128, options [nop,nop,TS val 1192127414 ecr 2177727954], length 0
22:38:24.475278 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45124: Flags [FP.], seq 18:3550, ack 219, win 131, options [nop,nop,TS val 2177727954 ecr 1192127412], length 3532: HTTP
22:38:24.475286 IP wakanda-wep-v024-428bk.45124 > 100.96.26.24.8080: Flags [.], ack 3551, win 163, options [nop,nop,TS val 1192127414 ecr 2177727954], length 0
22:38:24.475335 IP wakanda-wep-v024-428bk.45124 > 100.96.26.24.8080: Flags [F.], seq 219, ack 3551, win 163, options [nop,nop,TS val 1192127414 ecr 2177727954], length 0
22:38:24.475750 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45124: Flags [.], ack 220, win 131, options [nop,nop,TS val 2177727954 ecr 1192127414], length 0
Call 3
22:38:28.472442 IP wakanda-wep-v024-428bk.45182 > 100.96.26.24.8080: Flags [S], seq 1259113428, win 65535, options [mss 8911,sackOK,TS val 1192131411 ecr 0,nop,wscale 9], length 0
22:38:28.472999 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45182: Flags [S.], seq 3440185005, ack 1259113429, win 65535, options [mss 8911,sackOK,TS val 2177731951 ecr 1192131411,nop,wscale 9], length 0
22:38:28.473031 IP wakanda-wep-v024-428bk.45182 > 100.96.26.24.8080: Flags [.], ack 1, win 128, options [nop,nop,TS val 1192131412 ecr 2177731951], length 0
22:38:28.473366 IP wakanda-wep-v024-428bk.45182 > 100.96.26.24.8080: Flags [P.], seq 1:219, ack 1, win 128, options [nop,nop,TS val 1192131412 ecr 2177731951], length 218: HTTP: GET / HTTP/1.1
22:38:28.473773 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45182: Flags [.], ack 219, win 131, options [nop,nop,TS val 2177731952 ecr 1192131412], length 0
22:38:28.475348 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45182: Flags [P.], seq 1:18, ack 219, win 131, options [nop,nop,TS val 2177731954 ecr 1192131412], length 17: HTTP: HTTP/1.0 200 OK
22:38:28.475361 IP wakanda-wep-v024-428bk.45182 > 100.96.26.24.8080: Flags [.], ack 18, win 128, options [nop,nop,TS val 1192131414 ecr 2177731954], length 0
22:38:28.475492 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45182: Flags [FP.], seq 18:3550, ack 219, win 131, options [nop,nop,TS val 2177731954 ecr 1192131412], length 3532: HTTP
22:38:28.475501 IP wakanda-wep-v024-428bk.45182 > 100.96.26.24.8080: Flags [.], ack 3551, win 163, options [nop,nop,TS val 1192131414 ecr 2177731954], length 0
22:38:28.475545 IP wakanda-wep-v024-428bk.45182 > 100.96.26.24.8080: Flags [F.], seq 219, ack 3551, win 163, options [nop,nop,TS val 1192131414 ecr 2177731954], length 0
22:38:28.476074 IP 100.96.26.24.8080 > wakanda-wep-v024-428bk.45182: Flags [.], ack 220, win 131, options [nop,nop,TS val 2177731954 ecr 1192131414], length 0
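For reference, the capture above was taken from inside the container's network namespace; a sketch of the commands used on the node (the container name and interface name are assumptions, adjust to your setup):

```shell
# Find the PID of the container's main process, then join its network
# namespace so tcpdump sees exactly the traffic the pod sees.
PID=$(docker inspect --format '{{.State.Pid}}' wakanda-wep)   # container name assumed
nsenter -t "$PID" -n tcpdump -nn -i eth0 'tcp port 8080'
```

Capturing this way, rather than on the node's host interface, rules out the possibility that the packet leaves the JVM but is dropped before reaching the host side of the veth pair.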
As you can see, the request for Call 2 at 22:38:26 never even appears in the tcpdump. That explains the timeout: the request never made it onto the network at all.
I don't know where to look next. On the Java side we have tried different JVMs (HotSpot and OpenJ9) and different HTTP clients; nothing changed. Setting the timeouts very high makes no difference either: the packet simply never shows up in the tcpdump.
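To take the HTTP client libraries out of the picture entirely, a minimal probe using only the JDK 11 built-in `java.net.http` client can be left running in the same pod. A sketch (the class name and timings are placeholders; a tiny local server is included only to make it self-contained, in the pod you would point the client at the real target IP):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;

public class TimeoutProbe {

    // One GET with a 1 s timeout, mirroring the failing calls; returns the
    // HTTP status, or -1 on timeout/error so failures stand out in the logs.
    static int probe(HttpClient client, URI target) {
        Instant start = Instant.now();
        try {
            HttpRequest req = HttpRequest.newBuilder(target)
                    .timeout(Duration.ofSeconds(1))
                    .GET()
                    .build();
            HttpResponse<String> resp =
                    client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.printf("OK %d in %d ms%n", resp.statusCode(),
                    Duration.between(start, Instant.now()).toMillis());
            return resp.statusCode();
        } catch (Exception e) {
            System.out.printf("FAILED after %d ms: %s%n",
                    Duration.between(start, Instant.now()).toMillis(), e);
            return -1;
        }
    }

    static int runSelfTest() throws Exception {
        // Tiny local server so the sketch runs standalone.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            byte[] body = "ok".getBytes();
            ex.sendResponseHeaders(200, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        server.start();
        try {
            URI target = URI.create(
                    "http://127.0.0.1:" + server.getAddress().getPort() + "/");
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(1))
                    .build();
            return probe(client, target);
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        runSelfTest();
    }
}
```

If this bare-bones client also times out during the bad windows while tcpdump shows no SYN leaving the pod, the drop is happening below the JVM rather than in any HTTP client library.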
The k8s cluster uses Cilium and is otherwise running fine. There are no network problems or memory/CPU spikes during these windows. Other pods in the cluster do not have this problem. We have seen the issue on two independently set-up clusters, and it always hits the pods running this particular Java microservice. The same microservice running in plain Docker, outside k8s, works fine without any timeouts.
More information: Java is JDK 11. The container OS is Ubuntu 18.04. The nodes are AWS EC2 instances running k8s v1.13.9. The node OS is Container Linux by CoreOS 2345.3.0 (Rhyolite), kernel 4.19.106-coreos. The Docker runtime is docker://18.6.3.
The service manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wakanda-wep
  annotations:
    moniker.spinnaker.io/application: wakanda-wep
    moniker.spinnaker.io/cluster: deployment wakanda-wep
    traffic.spinnaker.io/load-balancers: '["service wakanda-wep"]'
  labels:
    app.kubernetes.io/name: wakanda-wep
    app.kubernetes.io/managed-by: spinnaker
spec:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: wakanda-wep
  template:
    metadata:
      annotations:
        moniker.spinnaker.io/application: wakanda-wep
        moniker.spinnaker.io/cluster: deployment wakanda-wep
      labels:
        app.kubernetes.io/name: wakanda-wep
        app.kubernetes.io/managed-by: spinnaker
    spec:
      containers:
      - name: wakanda-wep
        image: xxx/wakanda-wep
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: wakanda-wep
        env:
        - name: XXX
          valueFrom:
            secretKeyRef:
              key: XXX
              name: XXX
        resources:
          limits:
            cpu: 2000m
            memory: 2000Mi
          requests:
            cpu: 1000m
            memory: 1000Mi
        ports:
        - containerPort: 8000
          name: http
          protocol: TCP
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: wep-conductor/rest/v1/isup
            port: 8000
          initialDelaySeconds: 60
          periodSeconds: 3
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: wep-conductor/rest/v1/isup
            port: 8000
          initialDelaySeconds: 30
          periodSeconds: 5
          timeoutSeconds: 1
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  labels:
    app: wakanda-wep
  path: /prod/wep(/|$)(.*)
  servicePort: https
  hosts:
  - internal-api.xxx.yyy
  tls:
  - hosts:
    - internal-api.xxx.yyy
    secretName: xxx
service:
  enabled: true
  ports:
    https:
      port: 443
      targetPort: 8000
      protocol: TCP
horizontalPodAutoscaler:
  enabled: true
  minReplicas: 1
  maxReplicas: 3
  avgCpuUtilization: 80
  avgMemoryUtilization: 80
canary:
  enabled: false
Basic diagram illustrating the situation: [Diagram]
Where should we start looking, and what could possibly cause this?