The system reported an out-of-memory error:

```
java.lang.OutOfMemoryError: Java heap space
```
The complete exception output Hadoop produces when it runs out of memory looks like this:

```
11/12/11 17:38:22 INFO util.NativeCodeLoader: Loaded the native-hadoop library
11/12/11 17:38:22 INFO mapred.FileInputFormat: Total input paths to process : 7
11/12/11 17:38:22 INFO mapred.JobClient: Running job: job_local_0001
11/12/11 17:38:22 INFO util.ProcessTree: setsid exited with exit code 0
11/12/11 17:38:22 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@e49dcd
11/12/11 17:38:22 INFO mapred.MapTask: numReduceTasks: 1
11/12/11 17:38:22 INFO mapred.MapTask: io.sort.mb = 100
11/12/11 17:38:22 WARN mapred.LocalJobRunner: job_local_0001
java.lang.OutOfMemoryError: Java heap space
	at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:949)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
11/12/11 17:38:23 INFO mapred.JobClient: map 0% reduce 0%
11/12/11 17:38:23 INFO mapred.JobClient: Job complete: job_local_0001
11/12/11 17:38:23 INFO mapred.JobClient: Counters: 0
11/12/11 17:38:23 INFO mapred.JobClient: Job Failed: NA
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1257)
	at org.apache.hadoop.examples.Grep.run(Grep.java:69)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.examples.Grep.main(Grep.java:93)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
```
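The stack trace pinpoints the allocation: `MapTask$MapOutputBuffer` reserves the in-memory sort buffer sized by `io.sort.mb` (100 MB in the log above) in one piece, so it must fit inside the task JVM's heap. A minimal, Hadoop-free Java sketch of that sizing check (the 100 MB constant is taken from the log, not read from any configuration):

```java
public class SortBufferFit {
    // io.sort.mb from the log above: Hadoop reserves a 100 MB sort buffer
    // per map task, allocated up front in MapOutputBuffer's constructor.
    static final long IO_SORT_MB = 100;

    // Returns true when the sort buffer fits inside the given max heap.
    static boolean bufferFits(long maxHeapBytes) {
        return IO_SORT_MB * 1024 * 1024 <= maxHeapBytes;
    }

    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.println("max heap (MB): " + (maxHeap >> 20));
        // With a small heap (e.g. -Xmx64m) the buffer cannot be allocated
        // and the task dies with java.lang.OutOfMemoryError: Java heap space.
        System.out.println("sort buffer fits: " + bufferFits(maxHeap));
    }
}
```

Run under the JVM in question to see whether the heap can even hold the sort buffer before the mapper does any work.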
The fix is to edit conf/mapred-site.xml and add the following property:

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```
This setting raises the maximum heap size of the JVMs that Hadoop spawns for child tasks.
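To see what `-Xmx1024m` buys relative to the 100 MB sort buffer, here is a small converter (a hypothetical helper for illustration, not part of Hadoop) that turns an `-Xmx` flag into bytes using the standard k/m/g suffixes:

```java
public class XmxParse {
    // Hypothetical helper: converts a flag like "-Xmx1024m" into bytes,
    // mirroring how the JVM interprets the k/m/g size suffixes.
    static long xmxBytes(String flag) {
        String v = flag.substring("-Xmx".length());
        long mult = 1;
        char suffix = Character.toLowerCase(v.charAt(v.length() - 1));
        if (suffix == 'k') mult = 1L << 10;
        else if (suffix == 'm') mult = 1L << 20;
        else if (suffix == 'g') mult = 1L << 30;
        if (mult > 1) v = v.substring(0, v.length() - 1);
        return Long.parseLong(v) * mult;
    }

    public static void main(String[] args) {
        // The value configured in mapred-site.xml above: 1 GiB,
        // which comfortably holds the 100 MB io.sort.mb buffer.
        System.out.println(xmxBytes("-Xmx1024m")); // 1073741824
    }
}
```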
For installations done via RPM or DEB packages, all configuration files live under /etc/hadoop, and /etc/hadoop/hadoop-env.sh sets the maximum heap available to the Java client:

```
export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"
```
Change this to:

```
export HADOOP_CLIENT_OPTS="-Xmx2048m $HADOOP_CLIENT_OPTS"
```

to increase the memory available to the client.
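One subtlety in that export: HotSpot generally honors the last `-Xmx` it sees on the command line, and the new value is placed before the old `$HADOOP_CLIENT_OPTS`, so an `-Xmx` already present in the old value would still win. A hypothetical helper mimicking that last-flag-wins rule (an illustration of the assumed JVM behavior, not Hadoop code):

```java
public class XmxPrecedence {
    // Assumed HotSpot behavior: when -Xmx appears more than once, the
    // last occurrence takes effect. This helper mimics that rule.
    static String effectiveXmx(String opts) {
        String result = null;
        for (String tok : opts.trim().split("\\s+")) {
            if (tok.startsWith("-Xmx")) {
                result = tok; // later occurrences override earlier ones
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Mirrors the export above when the old value held no -Xmx:
        System.out.println(effectiveXmx("-Xmx2048m -Dfoo=bar")); // -Xmx2048m
        // If the old value already carried an -Xmx, it would win instead:
        System.out.println(effectiveXmx("-Xmx2048m -Xmx128m"));  // -Xmx128m
    }
}
```

If the larger heap does not seem to take effect, check whether an earlier script already put an `-Xmx` into `HADOOP_CLIENT_OPTS`.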