Normalizing a column in a Spark DataFrame

Published 2024-04-19 13:40:05


I am trying to normalize columns of a Spark DataFrame using Python.

My dataset:

------------------------------
userID|Name|Revenue|No.of.Days
------------------------------
1      A     12560     45
2      B     2312890   90
.      .     .         .
.      .     .         .
------------------------------

In this dataset I need to normalize every column except userID and Name, i.e. Revenue and No.of.Days.

The output should look like this:


------------------------------
userID|Name|Revenue|No.of.Days
------------------------------
1      A     0.5       0.5
2      B     0.9       1
.      .     1         0.4
.      .     0.6       .
.      .     .         .
------------------------------

The formula for normalizing each value in a column is:

val = (e_i - min) / (max - min)

where
e_i = the value at the i-th position of the column
min = the minimum value in that column
max = the maximum value in that column
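As a sanity check, the formula can be expressed in plain Python first (illustrative only, not the PySpark solution being asked for):

```python
# Plain-Python illustration of the min-max formula above.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# The smallest value maps to 0.0 and the largest to 1.0:
print(min_max_normalize([12560, 2312890]))  # [0.0, 1.0]
```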

How can I do this simply in PySpark?


3 Answers

Like this:

from pyspark.ml.feature import MinMaxScaler, VectorAssembler

# MinMaxScaler expects a vector column, so assemble "Revenue" into one first
dataFrame = VectorAssembler(inputCols=["Revenue"], outputCol="RevenueVec").transform(dataFrame)
scaler = MinMaxScaler(inputCol="RevenueVec", outputCol="scaledRevenue")
scalerModel = scaler.fit(dataFrame)
scaledData = scalerModel.transform(dataFrame)

Repeat this for each column you want to scale.

Hopefully the code below does what you want.

Code:

df = spark.createDataFrame([ (1, 'A',12560,45),
                             (1, 'B',42560,90),
                             (1, 'C',31285,120),
                             (1, 'D',10345,150)
                           ], ["userID", "Name","Revenue","No_of_Days"])

print("Before Scaling :")
df.show(5)

from pyspark.ml.feature import MinMaxScaler
from pyspark.ml.feature import VectorAssembler
from pyspark.ml import Pipeline
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType

# UDF for converting column type from vector to double type
unlist = udf(lambda x: round(float(list(x)[0]),3), DoubleType())

# Iterating over columns to be scaled
for i in ["Revenue","No_of_Days"]:
    # VectorAssembler Transformation - Converting column to vector type
    assembler = VectorAssembler(inputCols=[i],outputCol=i+"_Vect")

    # MinMaxScaler Transformation
    scaler = MinMaxScaler(inputCol=i+"_Vect", outputCol=i+"_Scaled")

    # Pipeline of VectorAssembler and MinMaxScaler
    pipeline = Pipeline(stages=[assembler, scaler])

    # Fitting pipeline on dataframe
    df = pipeline.fit(df).transform(df).withColumn(i+"_Scaled", unlist(i+"_Scaled")).drop(i+"_Vect")

print("After Scaling :")
df.show(5)

Output:

userID|Name|Revenue|No_of_Days|Revenue_Scaled|No_of_Days_Scaled
1      A    12560   45         0.069          0.0
1      B    42560   90         1.0            0.429
1      C    31285   120        0.65           0.714
1      D    10345   150        0.0            1.0

You can simply use .withColumn():

from pyspark.sql import functions as F

# min and max must be computed from the column, not taken from Python builtins
mn, mx = df.agg(F.min('val'), F.max('val')).first()
df = df.withColumn('norm_val', (df.val - mn) / (mx - mn))

This returns a new DataFrame with the norm_val column added. See the documentation for withColumn.
