Spark error "java.lang.OutOfMemoryError: Java heap space" in a Python program

I am running a Python KMeans program on Spark with the following command:

./bin/spark-submit --master spark://master_ip:7077 my_kmeans.py

The main part of the Python KMeans program looks like this:

# Assumed imports, not shown in the original snippet:
import joblib as jl
from pyspark.mllib.clustering import KMeans

sc = spark.sparkContext
# data: load a pre-serialized array with joblib
X = jl.load('X.jl.z')
data_x = sc.parallelize(X)
# kmeans: train with k=10000 clusters
model = KMeans.train(data_x, 10000, maxIterations=5)

The file 'X.jl.z' is about 100 MB.

But I got the following Spark error:

  File "/home/xxx/tmp/spark-2.0.2-bin-hadoop2.7/my_kmeans.py", line 24, in <module>
    data_x = sc.parallelize(X)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.readRDDFromFile.
  : java.lang.OutOfMemoryError: Java heap space

I know how to change the JVM heap size for a Java program, but how do I increase the heap size for a Python program?


1 Answer

  1. Answer #1

    Try specifying the number of partitions:

    data_x = sc.parallelize(X,n)
    # n = 2-4 partitions for each CPU in your cluster
    
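    For example, a minimal sketch that derives n from the cluster's default parallelism; the multiplier 3 is just an illustrative value within the suggested 2-4 range:

    # sc.defaultParallelism is typically the total number of cores in the cluster
    n = sc.defaultParallelism * 3
    data_x = sc.parallelize(X, n)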

    Or:

    Maximum heap size settings can be set with spark.driver.memory in cluster mode and through the --driver-memory command-line option in client mode.
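
    For example, an illustrative submit command (the 4g value is an assumption; size it to your data):

    ./bin/spark-submit --master spark://master_ip:7077 --driver-memory 4g my_kmeans.py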