PySpark: get the max and min non-zero values of a column


I have a dataframe that looks like this:

+-------+--------------------+--------------------+--------------+---------+----------+
|  label|              app_id|               title|download_count|entity_id|risk_score|
+-------+--------------------+--------------------+--------------+---------+----------+
|ANDROID|com.aaron.test.ze...|          Aaron Test|             0|      124|       100|
|ANDROID|com.boulderdailyc...|Boulder Daily Cam...|           100|      122|       100|
|ANDROID|com.communitybank...|     Budgeting Tools|             0|      123|       100|
|ANDROID|com.communitybank...|   PB Mobile Banking|           600|      123|       100|
|ANDROID|com.mendocinobeac...|Mendocino Beacon ...|            10|      122|       100|
|ANDROID|com.profitstars.t...|Johnson City Mobi...|           500|      123|       100|
|ANDROID|com.spreedinc.pro...|Oneida Dispatch f...|          1000|      122|       100|
+-------+--------------------+--------------------+--------------+---------+----------+

I would like to get the non-zero max and min download_count values grouped by entity ID. I'm not quite sure how to do this with aggregation; of course a simple max and min won't work:

from pyspark.sql import functions as F

apps_by_entity = (
    group_by_entity_id(df)  # helper, presumably equivalent to df.groupBy("entity_id")
    .agg(F.min(df.download_count), F.max(df.download_count), F.count("entity_id").alias("app_count"))
    .withColumnRenamed("max(download_count)", "download_max")
    .withColumnRenamed("min(download_count)", "download_min")
)

since the minimum for entities 123 and 124 comes out as 0:

+---------+------------+------------+---------+
|entity_id|download_min|download_max|app_count|
+---------+------------+------------+---------+
|      124|           0|           0|        1|
|      123|           0|         600|        3|
|      122|          10|        1000|        3|
+---------+------------+------------+---------+

The desired output would be something like:

+---------+------------+------------+---------+
|entity_id|download_min|download_max|app_count|
+---------+------------+------------+---------+
|      124|           0|           0|        1|
|      123|         500|         600|        3|
|      122|          10|        1000|        3|
+---------+------------+------------+---------+

Is there a way to do this with aggregation? If not, what is the best way to get the non-zero values? In the case where max = min = 0, just returning 0 or null would be fine.


1 Answer

I'm not sure you can exclude zeros from the min/max aggregation without losing the count.
One way to achieve the desired output is to do the (min, max) aggregation and the count separately, and then join them back together:

from pyspark.sql.functions import col
from pyspark.sql import functions as F

# Min/max computed over the non-zero download counts only
min_max_df = (
    df.filter(col("download_count") != 0)
      .groupBy("entity_id")
      .agg(F.min("download_count").alias("download_min"),
           F.max("download_count").alias("download_max"))
      .withColumnRenamed("entity_id", "entity_id_1")
)

# Count computed over all rows, so zero-download apps still count
count_df = df.groupBy("entity_id").agg(
    F.count("download_count").alias("app_count"))

# Left join keeps entities whose downloads are all zero;
# their missing min/max are then filled with 0
count_df.join(min_max_df, count_df.entity_id == min_max_df.entity_id_1, "left") \
        .drop("entity_id_1") \
        .fillna(0, subset=["download_min", "download_max"]) \
        .show()

+---------+---------+------------+------------+
|entity_id|app_count|download_min|download_max|
+---------+---------+------------+------------+
|      124|        1|           0|           0|
|      123|        3|         500|         600|
|      122|        3|          10|        1000|
+---------+---------+------------+------------+
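
For what it's worth, the zeros can also be excluded inside a single aggregation by nulling them out with when(): Spark's min and max aggregates skip nulls, while counting entity_id still sees every row. This is a sketch that isn't part of the original answer; for entity 124 (all zeros) the min and max come back as null, which the question says is acceptable.

from pyspark.sql import functions as F

# Zeros become null; min/max skip nulls, count("entity_id") counts every row
nonzero = F.when(F.col("download_count") != 0, F.col("download_count"))

apps_by_entity = df.groupBy("entity_id").agg(
    F.min(nonzero).alias("download_min"),
    F.max(nonzero).alias("download_max"),
    F.count("entity_id").alias("app_count"),
)
apps_by_entity.show()  # entity 124 shows null for download_min/download_max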
