How to turn off INFO logging in Spark?
I installed Spark following the AWS EC2 guide, and I can launch the program fine with the bin/pyspark script, get to the Spark prompt, and work through the Quick Start guide successfully.
However, I cannot for the life of me figure out how to stop the verbose INFO logging after each command.
I have tried nearly every possible setting in the log4j.properties file in the conf folder (commenting lines out, setting them to OFF), both in the directory I launch the application from and on each node, and nothing has any effect. The INFO lines still get printed after every command.
I am very confused about how this is supposed to work.
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
Here is my full classpath when I use SPARK_PRINT_LAUNCH_COMMAND:
Spark Command: /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/bin/java -cp :/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.2.0.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-core-3.2.2.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar -XX:MaxPermSize=128m -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit spark-shell --class org.apache.spark.repl.Main
Contents of spark-env.sh:
#!/usr/bin/env bash
# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.
# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH=/root/spark-1.0.1-bin-hadoop2/conf/
# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)
# - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: ‘default’)
# - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
# - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.
# Options for the daemons used in the standalone deploy mode:
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers
export SPARK_SUBMIT_CLASSPATH="$FWDIR/conf"
16 Answers
From the PySpark shell you can also lower the log level programmatically, by reaching the driver's log4j classes through the JVM gateway:
>>> log4j = sc._jvm.org.apache.log4j
>>> log4j.LogManager.getRootLogger().setLevel(log4j.Level.ERROR)
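As far as I can tell, because this goes through the py4j gateway (sc._jvm), it only changes the log level of the driver JVM, and only for the current session; it does not touch the log4j configuration of the executors.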
Edit your conf/log4j.properties file and change the following line:
log4j.rootCategory=INFO, console
to:
log4j.rootCategory=ERROR, console
Another approach is to start spark-shell and type in the following:
import org.apache.log4j.Logger
import org.apache.log4j.Level
Logger.getLogger("org").setLevel(Level.OFF)
Logger.getLogger("akka").setLevel(Level.OFF)
You won't see any logs after that.
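Note that this only takes effect for the current spark-shell session; you have to type it in again each time you start the shell, whereas the log4j.properties edit is permanent.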
Inspired by pyspark/tests.py, I did the following:
def quiet_logs(sc):
    logger = sc._jvm.org.apache.log4j
    logger.LogManager.getLogger("org").setLevel(logger.Level.ERROR)
    logger.LogManager.getLogger("akka").setLevel(logger.Level.ERROR)
Calling this right after creating the SparkContext reduced the number of stderr lines logged for my test from 2647 to 163. However, creating the SparkContext itself logs 163 lines, up to:
15/08/25 10:14:16 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
and it is not clear to me how to adjust those programmatically.
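Those startup lines are printed before any user code can run, so as far as I can tell a purely programmatic approach cannot reach them; they have to be silenced via conf/log4j.properties as in the other answers. For everything logged after the call, a slightly broader drop-in variant of the helper above (just a sketch along the same lines) also lowers the root logger, which catches loggers outside the org and akka hierarchies:
def quiet_logs(sc):
    # Reach the driver JVM's log4j classes through the py4j gateway.
    log4j = sc._jvm.org.apache.log4j
    # Lower the root logger and the two chatty hierarchies; this only
    # affects messages logged after this call is made.
    log4j.LogManager.getRootLogger().setLevel(log4j.Level.ERROR)
    log4j.LogManager.getLogger("org").setLevel(log4j.Level.ERROR)
    log4j.LogManager.getLogger("akka").setLevel(log4j.Level.ERROR)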
In Spark 2.0 you can also configure it dynamically for your application using setLogLevel:
from pyspark.sql import SparkSession
spark = SparkSession.builder.\
    master('local').\
    appName('foo').\
    getOrCreate()
spark.sparkContext.setLogLevel('WARN')
In the pyspark console, a default spark session is already available.
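So in the pyspark shell itself (assuming Spark 2.0+, where that spark session is pre-created for you) this reduces to a single line:
>>> spark.sparkContext.setLogLevel('WARN')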
Just execute this command in the Spark directory:
cp conf/log4j.properties.template conf/log4j.properties
Then edit log4j.properties:
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
In the first line, replace:
log4j.rootCategory=INFO, console
with:
log4j.rootCategory=WARN, console
Save the file and restart your shell. This works for me on OS X, with both Spark 1.1.0 and Spark 1.5.1.
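One thing to check: the conf directory you edit has to be the one that actually ends up on the driver's classpath (in the launch command above it is /root/spark-1.0.1-bin-hadoop2/conf); otherwise the edited log4j.properties is never picked up.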