Error when saving a model with Spark MLlib in Python

I am running into the error below. I am following the linear regression example on this page, with Spark 1.6.1 and Python 3.5.1. What do I need to change?

from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD, LinearRegressionModel

# Load and parse the data
def parsePoint(line):
    values = [float(x) for x in line.replace(',', ' ').split(' ')]
    return LabeledPoint(values[0], values[1:])

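# 'sc' is the SparkContext that the pyspark shell creates automatically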
data = sc.textFile("data/mllib/ridge-data/lpsa.data")
parsedData = data.map(parsePoint)

# Build the model
model = LinearRegressionWithSGD.train(parsedData, iterations=100, step=0.00000001)

# Evaluate the model on training data
valuesAndPreds = parsedData.map(lambda p: (p.label, model.predict(p.features)))
MSE = valuesAndPreds.map(lambda v: (v[0] - v[1])**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
print("Mean Squared Error = " + str(MSE))

# Save and load model
>>> model.save(sc, "myModelPath")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6\python\pyspark\mllib\regression.py", line 185, in save
    java_model.save(sc._jsc.sc(), path)
  File "c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py", line 813, in __call__
  File "c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6\python\pyspark\sql\utils.py", line 45, in deco
    return f(*a, **kw)

Note that I edited the line that computes the MSE; the edited version is the one shown in the code above.

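Presumably the docs version of that line used Python 2 tuple unpacking in its lambda, which is a SyntaxError on Python 3.5, so it had to be rewritten with explicit indexing:

# Original docs form (Python 2 only) -- tuple unpacking in a lambda is a SyntaxError on Python 3:
# MSE = valuesAndPreds.map(lambda (v, p): (v - p)**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
# Python 3-compatible edit, as used in the code above:
MSE = valuesAndPreds.map(lambda v: (v[0] - v[1])**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()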

Tags: the, in, map, data, model, bin, line, spark