Troubleshooting TCP SYN retransmissions in Docker containers

Background

When accessing the service through its ClusterIP / external IP, requests frequently stalled or failed outright; each episode recovered on its own after about 60 seconds.

Troubleshooting process

Packet capture inside the container


# find the pod and the node it runs on
kubectl describe pod <pod> -n mservice

# on that node, get the PID of the container's main process
docker inspect -f '{{.State.Pid}}' <container>

# enter the container's network namespace
nsenter --target <PID> -n

# inside the namespace, capture traffic on the service port
tcpdump port 888
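
The steps above can also be chained into a single command; the sketch below assumes the same placeholders and writes the capture to an illustrative /tmp/syn.pcap for offline analysis:

# sketch: resolve the container PID, then run tcpdump inside its network namespace
PID=$(docker inspect -f '{{.State.Pid}}' <container>)
nsenter --target "$PID" -n tcpdump -i any -w /tmp/syn.pcap port 888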

Capture results

Frequent TCP retransmissions appeared in the capture.
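
To isolate the SYN retransmissions specifically, the capture can be narrowed to SYN packets only; this is just a sketch, re-using port 888 from above. Repeated SYNs from the same source port mean the client is retrying a connection attempt that never got a SYN-ACK:

# capture only packets with the SYN flag set on the service port
tcpdump -nn 'tcp[tcpflags] & tcp-syn != 0 and port 888'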

netstat -s output

Tcp:
    24 active connections openings
    48 passive connection openings
    0 failed connection attempts
    0 connection resets received
    4 connections established
    892 segments received
    876 segments send out
    1 segments retransmited
    0 bad segments received.
    0 resets sent
UdpLite:
TcpExt:
    18 TCP sockets finished time wait in fast timer
    47 TCP sockets finished time wait in slow timer
    127 passive connections rejected because of time stamp
    10 delayed acks sent
    127 SYNs to LISTEN sockets dropped
    188 packets directly queued to recvmsg prequeue.
    117862 bytes directly received in process context from prequeue
    11 packet headers predicted
    72 packets header predicted and directly queued to user
    172 acknowledgments not containing data payload received
    68 predicted acknowledgments
    1 times recovered from packet loss by selective acknowledgements
    1 congestion windows fully recovered without slow start
    1 fast retransmits
    TCPSackShiftFallback: 2
    TCPOrigDataSent: 419
IpExt:
    InOctets: 186913
    OutOctets: 130888
    InNoECTPkts: 975

Comparing netstat snapshots showed that every time a stall occurred, the "delayed acks sent" and "SYNs to LISTEN sockets dropped" counters increased.
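
These counters can be watched live while reproducing the stall (a simple sketch; the pattern just matches the counter names above):

watch -d -n 1 'netstat -s | grep -E "SYNs to LISTEN|passive connections rejected|delayed acks"'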

Consulting the documentation revealed:

  • The drops are caused by a timestamp problem on the connections the SYN packets belong to.

    The problem is clearly related to tcp_timestamps: with both tcp_tw_recycle and tcp_timestamps enabled, the TCP timestamps carried in connect requests from the same source IP must be increasing within a 60-second window (which matches the 60-second recovery we observed).

    The kernel documentation states that this combination causes problems in NAT environments. Since we access the service through kube-proxy, the traffic is effectively NATed as well, which is what triggered the problem.

    Solution

    Disable tcp_tw_recycle
    
    With tcp_timestamps disabled, enabling tcp_tw_recycle has no effect, while tcp_timestamps can be enabled and work on its own; the recommendation is therefore to disable tcp_tw_recycle on the service hosts.
    
    1. Temporary (runtime) change:
    echo "0" > /proc/sys/net/ipv4/tcp_tw_recycle
    
    2. Permanent change:
    Add net.ipv4.tcp_tw_recycle = 0 to /etc/sysctl.conf,
    then run sysctl -p to apply the configuration.
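
    To verify the change, a quick check like the following can be run (a sketch; the counter name is the one observed above):

    # expect tcp_tw_recycle = 0 while tcp_timestamps stays enabled
    sysctl net.ipv4.tcp_tw_recycle net.ipv4.tcp_timestamps

    # with traffic flowing, this counter should stop increasing
    netstat -s | grep "SYNs to LISTEN sockets dropped"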