I have the following PySpark script:
#!/usr/bin/env python
from datetime import datetime
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext
conf = SparkConf()
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)
hivedb='MySql'
table='abc_123'
df = sqlContext.table("{}.{}".format(hivedb,table))
# Register the Data Frame as a TempTable
df.registerTempTable('mytempTable')
#Time:
date=datetime.now().strftime('%Y-%m-%d %H:%M:%S')
#Find min value ID:
min_id = sqlContext.sql("select nvl(min(id),0) as minval from mytempTable").collect()[0].asDict()['minval']
sc.stop()
Now I want to find out how long each line of code takes, like this:
df = sqlContext.table("{}.{}".format(hivedb,table))
Time taken for `df` to create was 10 seconds
date=datetime.now().strftime('%Y-%m-%d %H:%M:%S')
Time taken for finding `date` was 1 second
min_id = sqlContext.sql("select nvl(min(id),0) as minval from mytempTable").collect()[0].asDict()['minval']
Time taken for `min_id` query to execute was 3 seconds
How can I do this? If possible, I'd also like to print these values.
You can use the built-in cProfile module. If you want to visualize the results, you can use SnakeViz.

TL;DR: run your script with

python -m cProfile [-o output_file] [-s sort_order] myscript.py

then install SnakeViz and run snakeviz output_file.
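Note that cProfile reports per-function timings rather than the exact per-statement messages shown in the question. If coarse wall-clock timings per statement are enough, one option is a small context manager built on time.perf_counter(). This is a minimal sketch, not part of the original script; the timed helper and the labels are illustrative:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Measure wall-clock time for the enclosed block and print it
    # in the same style as the desired output above.
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    print("Time taken for {} was {:.2f} seconds".format(label, elapsed))

# In the PySpark script this could wrap each statement, e.g.:
# with timed("`df` to create"):
#     df = sqlContext.table("{}.{}".format(hivedb, table))

# Self-contained demonstration:
with timed("a small computation"):
    total = sum(range(1000))
```

Keep in mind that Spark transformations like sqlContext.table() are lazy, so most of the real work is only measured when an action such as collect() runs inside the timed block.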