How do I run GraphX from Python / pyspark?

34 votes
3 answers
44442 views
Asked 2025-04-18 04:17

I am trying to run Spark's GraphX from Python using pyspark. My installation appears to be fine, since I can run both the pyspark tutorials and the (Java) GraphX tutorials without problems. Since GraphX is part of Spark, pyspark should be able to interface with it, right?

Here are the pyspark tutorials:
http://spark.apache.org/docs/0.9.0/quick-start.html
http://spark.apache.org/docs/0.9.0/python-programming-guide.html

And here are the GraphX tutorials:
http://spark.apache.org/docs/0.9.0/graphx-programming-guide.html
http://ampcamp.berkeley.edu/big-data-mini-course/graph-analytics-with-graphx.html

Has anyone converted the GraphX tutorial to Python?

3 Answers

3

GraphX 0.9.0 does not yet have a Python API. It is expected in a future release.

23

You should take a look at GraphFrames (https://github.com/graphframes/graphframes), which wraps the GraphX algorithms under a DataFrames API and provides a Python interface.

Here is a quick example from https://graphframes.github.io/graphframes/docs/_site/quick-start.html, slightly modified so that it works:

First, start pyspark with the graphframes package loaded:

pyspark --packages graphframes:graphframes:0.1.0-spark1.6

Then the Python code:

from graphframes import *

# Create a Vertex DataFrame with unique ID column "id"
v = sqlContext.createDataFrame([
  ("a", "Alice", 34),
  ("b", "Bob", 36),
  ("c", "Charlie", 30),
], ["id", "name", "age"])

# Create an Edge DataFrame with "src" and "dst" columns
e = sqlContext.createDataFrame([
  ("a", "b", "friend"),
  ("b", "c", "follow"),
  ("c", "b", "follow"),
], ["src", "dst", "relationship"])
# Create a GraphFrame
g = GraphFrame(v, e)

# Query: Get in-degree of each vertex.
g.inDegrees.show()

# Query: Count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()

# Run PageRank algorithm, and show results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()
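For readers who want to understand what `pageRank` is computing before spinning up a Spark cluster, here is a minimal, plain-Python sketch of PageRank on the same toy graph. This is purely illustrative and not part of GraphFrames: the function name and normalization are my own (ranks here sum to 1, whereas GraphFrames/GraphX scale ranks differently), and `resetProbability` corresponds to the `reset` parameter below.

```python
# Minimal iterative PageRank with a reset (teleport) probability.
# Illustrative only -- NOT the GraphFrames/GraphX implementation.
def pagerank(edges, nodes, reset=0.15, iters=20):
    rank = {n: 1.0 / len(nodes) for n in nodes}          # start uniform
    out_deg = {n: sum(1 for s, _ in edges if s == n) for n in nodes}
    for _ in range(iters):
        # every node gets the teleport share first
        new = {n: reset / len(nodes) for n in nodes}
        # each node passes (1 - reset) of its rank along its out-edges
        for src, dst in edges:
            new[dst] += (1 - reset) * rank[src] / out_deg[src]
        # dangling nodes (no out-edges) redistribute their mass uniformly
        dangling = sum(rank[n] for n in nodes if out_deg[n] == 0)
        for n in nodes:
            new[n] += (1 - reset) * dangling / len(nodes)
        rank = new
    return rank

# Same toy graph as the GraphFrames example above.
nodes = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c"), ("c", "b")]
ranks = pagerank(edges, nodes)
```

Since "b" is pointed at by both "a" and "c" while "a" receives no edges at all, "b" ends up with the highest rank and "a" with the lowest, which is the same ordering the GraphFrames query returns.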

21

It looks like Python bindings for GraphX have been delayed at least until Spark 1.4 or 1.5. The work is currently blocked on updates to the Java API.

You can check the latest status here: SPARK-3789 GRAPHX Python bindings for GraphX - ASF JIRA
