Hive job fails with a GC error

  No preamble, straight to the point. The error looks like this:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. GC overhead limit exceeded
AsyncLogger error handling event seq=16174, value='[ERROR calling class org.apache.logging.log4j.core.async.RingBufferLogEvent.toString(): java.lang.NullPointerException]':
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "HiveServer2-Handler-Pool: Thread-503" java.lang.OutOfMemoryError: GC overhead limit exceeded

  Solution: "GC overhead limit exceeded" means the JVM is spending almost all of its time in garbage collection while reclaiming almost no heap, i.e. the processes simply don't have enough memory. Both steps below come down to giving them more:

  1) Raise the minimum memory YARN allocates per container in the Hadoop config (restart the Hadoop cluster after the change)

vim /opt/module/hadoop-3.1.3/etc/hadoop/yarn-site.xml
    <!-- Minimum and maximum memory YARN may allocate per container -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
    <!-- Physical memory available to YARN containers on this node -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>2</value>
    </property>

    <!-- Disable YARN's physical- and virtual-memory limit checks -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
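After editing, it's easy to fat-finger an XML value, so a quick grep sanity check helps. A minimal sketch — self-contained here with a temp copy; on a real node, point CONF at /opt/module/hadoop-3.1.3/etc/hadoop/yarn-site.xml instead:

```shell
# Sketch: extract one property's <value> from a yarn-site.xml-style file.
# A temp copy is used so the snippet runs standalone; swap CONF for the real path.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
</property>
EOF
# grep -A1 prints the <name> line plus the <value> line that follows it
VAL=$(grep -A1 '<name>yarn.scheduler.minimum-allocation-mb</name>' "$CONF" \
      | grep -o '<value>[0-9]*' | cut -d'>' -f2)
echo "minimum-allocation-mb = $VAL"
rm -f "$CONF"
```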

  2) Increase Hive's heap size by editing hive-env.sh (restart Hive afterwards)

mv hive-env.sh.template hive-env.sh
vim hive-env.sh
# Set Hive's heap size (in MB)
export HADOOP_HEAPSIZE=1024
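HADOOP_HEAPSIZE is given in megabytes; the Hadoop/Hive launch scripts typically translate it into the JVM's maximum heap flag. A tiny sketch of the flag the setting above should correspond to:

```shell
# Sketch: HADOOP_HEAPSIZE is in MB; launch scripts typically turn it into -Xmx.
HADOOP_HEAPSIZE=1024
XMX_FLAG="-Xmx${HADOOP_HEAPSIZE}m"
echo "$XMX_FLAG"
```

After restarting Hive, `jps -v | grep HiveServer2` prints the live JVM's startup flags, so you can confirm the larger heap actually took effect.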
Original post: https://www.cnblogs.com/LzMingYueShanPao/p/14870403.html