PySpark: a generic model for PCA data processing

Published 2024-04-20 09:00:50


I'm using PCA for data analysis. I wrote this code in PySpark and it works fine, but only for data read from a CSV file with a fixed set of columns, e.g. the 5 columns ["a", "b", "c", "d", "e"]. I want to make the code generic so it computes PCA for a CSV file with any number of columns. What should I add? Here is my code:

#########################! importing libraries !########################
from __future__ import print_function
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.ml.feature import PCA, VectorAssembler
from pyspark.ml import Pipeline
########################! main script !#################################
sc = SparkContext("local", "pca-app")
if __name__ == "__main__":
    spark = SparkSession\
        .builder\
        .appName("PCAExample")\
        .getOrCreate()
    data = sc.textFile('dataset.csv') \
        .map(lambda line: [float(k) for k in line.split(';')])\
        .collect()
    df = spark.createDataFrame(data, ["a", "b", "c", "d", "e"])
    df.show()
    vecAssembler = VectorAssembler(inputCols=["a", "b", "c", "d", "e"], outputCol="features")

    pca = PCA(k=2, inputCol="features", outputCol="pcaFeatures")
    pipeline = Pipeline(stages=[vecAssembler, pca])
    model = pipeline.fit(df)
    result = model.transform(df).select("pcaFeatures")
    result.show(truncate=False)
    spark.stop()

1 Answer

You can make the code generic by changing a few lines:

fileObj = sc.textFile('dataset.csv')
# The first line is assumed to hold the column names, separated by ';'
header = fileObj.first()
columns = header.split(';')
# Skip the header row before converting the remaining lines to floats
data = fileObj.filter(lambda line: line != header) \
    .map(lambda line: [float(k) for k in line.split(';')]) \
    .collect()
df = spark.createDataFrame(data, columns)
df.show()
vecAssembler = VectorAssembler(inputCols=columns, outputCol="features")
