When I submit a Python script as the jar to an Oozie Spark action, I see the following error:
Traceback (most recent call last):
File "/home/hadoop/spark.py", line 5, in <module>
from pyspark import SparkContext, SparkConf
ImportError: No module named pyspark
Intercepting System.exit(1)
although I can see that the pyspark libraries exist on my local FS.

I know that running pyspark on Oozie has known issues, such as https://issues.apache.org/jira/browse/OOZIE-2482, but the error I am seeing is different from that JIRA ticket.

In addition, I am passing --conf spark.yarn.appMasterEnv.SPARK_HOME=/usr/lib/spark --conf spark.executorEnv.SPARK_HOME=/usr/lib/spark as spark-opts in my workflow definition.
Here is my sample application for reference:
masterNode ip-xxx-xx-xx-xx.ec2.internal
nameNode hdfs://${masterNode}:8020
jobTracker ${masterNode}:8032
master yarn
mode client
queueName default
oozie.libpath ${nameNode}/user/oozie/share/lib
oozie.use.system.libpath true
oozie.wf.application.path /user/oozie/apps/
<workflow-app name="spark-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="spark-action-test"/>
    <action name="spark-action-test">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.compress.map.output</name>
                    <value>true</value>
                </property>
            </configuration>
            <master>${master}</master>
            <mode>${mode}</mode>
            <name>Spark Example</name>
            <jar>/home/hadoop/spark.py</jar>
            <spark-opts>--driver-memory 512m --executor-memory 512m --num-executors 4 --conf spark.yarn.appMasterEnv.SPARK_HOME=/usr/lib/spark --conf spark.executorEnv.SPARK_HOME=/usr/lib/spark --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=/usr/lib/spark/python --conf spark.executorEnv.PYTHONPATH=/usr/lib/spark/python --files ${nameNode}/user/oozie/apps/hive-site.xml</spark-opts>
        </spark>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
# Create a SparkContext and a HiveContext on top of it.
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext
conf = SparkConf().setAppName('test_pyspark_oozie')
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)
sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
As recommended here — http://www.learn4master.com/big-data/pyspark/run-pyspark-on-oozie — I have also placed the two files py4j-0.9-src.zip and pyspark.zip under my ${nameNode}/user/oozie/share/lib folder.

I am using a single-node YARN cluster (AWS EMR) and am trying to figure out how to make these pyspark modules available to Python in my Oozie application. Any help is appreciated.
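For background, when spark-submit launches a Python script it normally prepends SPARK_HOME/python and the bundled py4j zip onto sys.path before the script runs; the ImportError above means that step never happened inside the Oozie launcher. A minimal sketch of the entries that need to end up on the path (the /usr/lib/spark layout and the py4j-0.9-src.zip filename are assumptions taken from this question, not a guaranteed install layout):

```python
import os
import sys


def pyspark_path_entries(spark_home, py4j_zip="py4j-0.9-src.zip"):
    """Return the sys.path entries that spark-submit would normally
    prepend so that `import pyspark` succeeds.  Both the spark_home
    layout and the py4j zip name are assumptions for illustration."""
    python_dir = os.path.join(spark_home, "python")
    return [python_dir, os.path.join(python_dir, "lib", py4j_zip)]


# Prepend the entries before importing pyspark (this mirrors what
# the findspark package does at runtime):
for entry in reversed(pyspark_path_entries("/usr/lib/spark")):
    sys.path.insert(0, entry)
```

If those two entries are missing from PYTHONPATH on the node that runs the driver, `from pyspark import SparkContext` fails exactly as shown in the traceback.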
You are getting the "No module named" error because you have not set PYTHONPATH in your configuration. Add one more entry, PYTHONPATH=/usr/lib/spark/python, to your conf. I do not know how to set this PYTHONPATH directly in the Oozie workflow definition, but adding the PYTHONPATH property to the configuration will definitely solve your problem.
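A hedged sketch of how the `<spark-opts>` element could look with PYTHONPATH set for both the application master and the executors (the paths assume the /usr/lib/spark layout from the question; note that the py4j zip must be on the path as well, and that PYSPARK_PYTHON, if set, should point at a Python executable rather than a directory):

```xml
<spark-opts>--driver-memory 512m --executor-memory 512m --num-executors 4
    --conf spark.yarn.appMasterEnv.SPARK_HOME=/usr/lib/spark
    --conf spark.executorEnv.SPARK_HOME=/usr/lib/spark
    --conf spark.yarn.appMasterEnv.PYTHONPATH=/usr/lib/spark/python:/usr/lib/spark/python/lib/py4j-0.9-src.zip
    --conf spark.executorEnv.PYTHONPATH=/usr/lib/spark/python:/usr/lib/spark/python/lib/py4j-0.9-src.zip
    --files ${nameNode}/user/oozie/apps/hive-site.xml</spark-opts>
```

In a real workflow the options would normally sit on a single line; they are wrapped here only for readability.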