Running Spark/Python from a Jupyter Notebook

Published 2024-06-16 11:06:35


I have created a shell script to launch PySpark from a Jupyter notebook. When I run the script, I get the error below:

sudo /home/scripts/jupyspark.sh test.py 
/home/scripts/jupyspark.sh: line 6: /bin/pyspark: No such file or directory
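From the error, I suspect `${SPARK_HOME}` is expanding to an empty string inside the script, so the shell ends up looking for `/bin/pyspark`. One possible cause is `sudo`, which by default does not pass the caller's exported variables to the command it runs. A quick check (assuming a default sudoers policy):

echo "$SPARK_HOME"               # prints /usr/local/spark in my own shell
sudo sh -c 'echo "$SPARK_HOME"'  # likely prints an empty line under sudo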

Here is my jupyspark script:

#!/bin/bash
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=True --NotebookApp.ip='localhost' --NotebookApp.port=8888"

${SPARK_HOME}/bin/pyspark \
--master local[4] \
--executor-memory 1G \
--driver-memory 1G \
--conf spark.sql.warehouse.dir="file:///tmp/spark-warehouse" \
--packages com.databricks:spark-csv_2.11:1.5.0 \
--packages com.amazonaws:aws-java-sdk-pom:1.10.34 \
--packages org.apache.hadoop:hadoop-aws:2.7.3
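A side note on the script itself: as far as I know, when `--packages` is passed more than once, `pyspark`/`spark-submit` only honors the last occurrence, so the three packages above would not all be pulled in. A sketch of the same invocation with the packages merged into one comma-separated list:

${SPARK_HOME}/bin/pyspark \
--master local[4] \
--executor-memory 1G \
--driver-memory 1G \
--conf spark.sql.warehouse.dir="file:///tmp/spark-warehouse" \
--packages com.databricks:spark-csv_2.11:1.5.0,com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.3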

I have also done the following:

cat ~/.bash_profile 
export SPARK_HOME=/usr/local/spark
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
export HADOOP_HOME=/usr/local/hadoop
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native/:$LD_LIBRARY_PATH
export AWS_ACCESS_KEY_ID='MY_ACCESS_KEY'
export AWS_SECRET_ACCESS_KEY='MY_SECRET_ACCESS_KEY'
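Note that ~/.bash_profile is only read by login shells, so these exports would not reach a script run via sudo anyway. If the empty SPARK_HOME under sudo is really the issue, these are workarounds I am considering (sketches, assuming the sudoers policy allows them):

# Option 1: ask sudo to preserve my environment (needs sudoers to allow it)
sudo -E /home/scripts/jupyspark.sh

# Option 2: set the variable explicitly for this one command via env
sudo env SPARK_HOME=/usr/local/spark /home/scripts/jupyspark.sh

# Option 3: make the script self-contained by adding
#   export SPARK_HOME=/usr/local/spark
# near the top of jupyspark.sh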

Do you have any idea how to solve this problem?


Tags: key, script, hadoop, home, bin, access, packages, local