How to fix Druid writing too much data to ZooKeeper

The error looks like this:

org.apache.zookeeper.ClientCnxn - Session 0x102c87b7f880003 for server cweb244/10.17.2.241:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Packet len6429452 is out of range!

This means the packet length exceeds what jute.maxBuffer allows; here the incoming packet is 6,429,452 bytes, roughly 6.1 MB.

The relevant ZooKeeper client source:

private int packetLen = ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT;

protected void initProperties() throws IOException {
    try {
        // Read jute.maxbuffer from the client config; fall back to the
        // compiled-in default when the property is not set.
        packetLen = clientConfig.getInt(
            ZKConfig.JUTE_MAXBUFFER,
            ZKClientConfig.CLIENT_MAX_PACKET_LENGTH_DEFAULT);
        LOG.info("{} value is {} Bytes", ZKConfig.JUTE_MAXBUFFER, packetLen);
    } catch (NumberFormatException e) {
        String msg = MessageFormat.format(
            "Configured value {0} for property {1} can not be parsed to int",
            clientConfig.getProperty(ZKConfig.JUTE_MAXBUFFER),
            ZKConfig.JUTE_MAXBUFFER);
        LOG.error(msg);
        throw new IOException(msg);
    }
}

void readLength() throws IOException {
    // Every reply starts with a 4-byte length prefix; any packet at or
    // above packetLen is rejected with the exception seen above.
    int len = incomingBuffer.getInt();
    if (len < 0 || len >= packetLen) {
        throw new IOException("Packet len " + len + " is out of range!");
    }
    incomingBuffer = ByteBuffer.allocate(len);
}

ZooKeeper's default maximum (CLIENT_MAX_PACKET_LENGTH_DEFAULT) is 4 MB, so the error fires as soon as some of Druid's data produces packets larger than that default cap.
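As the source above shows, jute.maxbuffer is read from the client configuration, which by default is populated from JVM system properties. A minimal sketch of raising the limit on the client side, assuming the ZooKeeper Java client is on the classpath; the 10 MB value matches the fix below, and cweb244:2181 is the server from the log above:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class LargePacketClient {
    public static void main(String[] args) throws Exception {
        // Must be set before the ZooKeeper handle is created, because
        // initProperties() reads it once during connection setup.
        // 10485760 bytes = 10 MB, same as passing -Djute.maxbuffer=10485760.
        System.setProperty("jute.maxbuffer", "10485760");

        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("cweb244:2181", 30000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // From here on, replies up to 10 MB pass the readLength() check.
        zk.close();
    }
}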

The fix

Go into ZooKeeper's conf directory, create a java.env file, and set -Djute.maxbuffer to 10 MB:

#!/bin/sh

export JAVA_HOME=/...../

# Heap size MUST be tuned to your cluster environment.
# 10485760 bytes = 10 MB.
export JVMFLAGS="-Xms2048m -Xmx4096m $JVMFLAGS -Djute.maxbuffer=10485760"
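On the client side, the initProperties() snippet above logs the effective value at startup, so you can grep a JVM's log for "jute.maxbuffer value is" to confirm which limit it is using.

To see which znodes are actually oversized (and whether 10 MB leaves enough headroom), you can walk the tree and read each node's Stat. A rough diagnostic sketch using the plain ZooKeeper Java client; the /druid base path and the connect string are assumptions, adjust them for your deployment:

import java.util.List;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZnodeSizeScanner {
    public static void main(String[] args) throws Exception {
        System.setProperty("jute.maxbuffer", "10485760"); // match the server-side limit
        ZooKeeper zk = new ZooKeeper("cweb244:2181", 30000, event -> { });
        printSizes(zk, "/druid");
        zk.close();
    }

    // Recursively print the stored data length of every znode under path.
    static void printSizes(ZooKeeper zk, String path) throws Exception {
        Stat stat = zk.exists(path, false);
        if (stat == null) {
            return;
        }
        System.out.println(path + " -> " + stat.getDataLength() + " bytes");
        List<String> children = zk.getChildren(path, false);
        for (String child : children) {
            printSizes(zk, path.equals("/") ? "/" + child : path + "/" + child);
        }
    }
}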

Apply the same change to every node in the ensemble, then restart.
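Note also that the readLength() check quoted above runs in the ZooKeeper client, so when the error appears in Druid's own logs (as here), the same -Djute.maxbuffer=10485760 flag typically needs to be added to the Druid JVMs as well, for example in each service's jvm.config; raising it only on the servers does not change what the client will accept.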

It is strongly recommended to deploy a dedicated ZooKeeper cluster just for Druid.

Original post: https://www.cnblogs.com/successok/p/14203623.html