What is the difference between partitionBy and groupBy in Spark?

I have a PySpark RDD that can be collected as a list of tuples, like this:

rdds = self.sc.parallelize([(("good", "spark"), 1), (("sood", "hpar ...
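A minimal sketch of the distinction, assuming a local SparkContext and a small pair RDD shaped like the one in the question (the extra sample values are made up for illustration). partitionBy only controls where records live: it redistributes a (key, value) RDD across partitions by key and returns the same elements. groupByKey (the pair-RDD counterpart of groupBy) changes what the records are: one (key, iterable-of-values) record per key. For a generic groupBy, rdds.groupBy(lambda kv: kv[0]) behaves similarly but keeps the whole tuples as values.

from pyspark import SparkContext

sc = SparkContext("local[2]", "partitionBy-vs-groupBy")

# Pair RDD with composite tuple keys, mirroring the question's shape.
rdds = sc.parallelize([(("good", "spark"), 1),
                       (("good", "spark"), 2),
                       (("bad", "hadoop"), 3)])

# partitionBy: same (key, value) elements, redistributed into 2
# partitions by hashing the key. Only data placement changes.
partitioned = rdds.partitionBy(2)
print(partitioned.getNumPartitions())   # 2
print(sorted(partitioned.collect()))    # same tuples as the input

# groupByKey: one record per distinct key, with its values gathered
# into an iterable. The record structure itself changes.
grouped = rdds.groupByKey()
print(sorted((k, sorted(v)) for k, v in grouped.collect()))
# [(('bad', 'hadoop'), [3]), (('good', 'spark'), [1, 2])]

sc.stop()

In short: reach for partitionBy when you want to control data layout (e.g. to co-locate keys before repeated joins), and for groupByKey/groupBy when you actually need all values for a key collected together.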