Launching a Hadoop MapReduce job from Python without PuTTY/SSH


I have been running Hadoop MapReduce jobs by logging in over SSH via PuTTY, which requires me to enter a hostname/IP address, login name, and password into PuTTY to get an SSH command-line window. Once inside the SSH console, I issue the appropriate MR command, for example:

hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.0.1.jar \
    -file /nfs_home/appers/user1/mapper.py \
    -file /nfs_home/appers/user1/reducer.py \
    -mapper '/usr/lib/python_2.7.3/bin/python mapper.py' \
    -reducer '/usr/lib/python_2.7.3/bin/python reducer.py' \
    -input /ccexp/data/test_xml/0901282-510179094535002-oozie-oozi-W/extract/*/*.xml \
    -output /user/ccexptest/output/user1/MRoutput

What I would like to do is use Python to streamline this clunky process, so that I can launch the MapReduce job from within a Python script and avoid having to log in over SSH through PuTTY.

Can this be done? If so, could someone show me how?


Tags: py hadoop home bin lib usr job ssh
1 Answer

I solved this with the following script:

import paramiko

# Define connection info
host_ip = 'xx.xx.xx.xx'
user = 'xxxxxxxx'
pw = 'xxxxxxxx'

# Paths
input_loc = '/nfs_home/appers/extracts/*/*.xml'
output_loc = '/user/lcmsprod/output/cnielsen/'
python_path = "/usr/lib/python_2.7.3/bin/python"
hdfs_home = '/nfs_home/appers/cnielsen/'
output_log = r'C:\Users\cnielsen\Desktop\MR_Test\MRtest011316_0.txt'

# File names
xml_lookup_file = 'product_lookups.xml'
mapper = 'Mapper.py'
reducer = 'Reducer.py'
helper_script = 'Process.py'
product_name = 'test1'
output_ref = 'test65'

# ----------------------------------------------------------------------

def buildMRcommand(product_name):
    # Assemble the hadoop-streaming call; the quoted -mapper/-reducer
    # arguments pass the product name through to Mapper.py on the cluster
    mr_command_list = ['hadoop', 'jar', '/share/hadoop/tools/lib/hadoop-streaming.jar',
                       '-files', hdfs_home + xml_lookup_file,
                       '-file', hdfs_home + mapper,
                       '-file', hdfs_home + reducer,
                       '-mapper', "'" + python_path, mapper, product_name + "'",
                       '-file', hdfs_home + helper_script,
                       '-reducer', "'" + python_path, reducer + "'",
                       '-input', input_loc,
                       '-output', output_loc + output_ref]

    MR_command = " ".join(mr_command_list)
    print(MR_command)
    return MR_command

# ----------------------------------------------------------------------

def unbuffered_lines(f):
    # Yield each line of output as soon as it arrives, instead of waiting
    # for the remote command to finish (paramiko channel files return
    # bytes from read(), so buffer as bytes and decode per line)
    line_buf = b""
    while not f.channel.exit_status_ready():
        line_buf += f.read(1)
        if line_buf.endswith(b'\n'):
            yield line_buf.decode('utf-8')
            line_buf = b''
    # Flush whatever arrived after the exit status was set
    remainder = line_buf + f.read()
    if remainder:
        yield remainder.decode('utf-8')

# ----------------------------------------------------------------------

# Open the SSH session that previously required a manual PuTTY login
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(host_ip, username=user, password=pw)

# Build Commands
list_dir = "ls "+hdfs_home+" -l"
getmerge = "hadoop fs -getmerge "+output_loc+output_ref+" "+hdfs_home+"test_011216_0.txt"

# Run one command at a time; swap in the MR job or the getmerge
# once the directory listing works
stdin, stdout, stderr = client.exec_command(list_dir)
##stdin, stdout, stderr = client.exec_command(buildMRcommand(product_name))
##stdin, stdout, stderr = client.exec_command(getmerge)

print "Executing command..."
writer = open(output_log, 'w')

# Stream stderr live; hadoop streaming reports job progress there
for l in unbuffered_lines(stderr):
    e = '[stderr] ' + l
    print('[stderr] ' + l.strip('\n'))
    writer.write(e)

# Drain stdout once the command has finished
for line in stdout:
    r = '[stdout] ' + line
    print('[stdout] ' + line.strip('\n'))
    writer.write(r)

client.close()
writer.close()
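
A couple of follow-up notes. exec_command returns as soon as the command is dispatched, and the loops above only drain the output streams; if you also want to verify that the remote command (e.g. the MR job) actually succeeded, paramiko exposes the remote exit code. A minimal sketch, reusing the stdout handle from above:

exit_code = stdout.channel.recv_exit_status()  # blocks until the remote command exits
if exit_code != 0:
    print("Remote command failed with exit code", exit_code)

Also, hard-coding a password in the script is fragile; paramiko supports key-based authentication as well. A sketch, reusing the connection info from the script and assuming a private key at ~/.ssh/id_rsa that is already authorized on the server (the key path is an assumption, adjust to your setup):

import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# ~/.ssh/id_rsa is an assumed key location; replace with your own
client.connect(host_ip, username=user,
               key_filename=os.path.expanduser('~/.ssh/id_rsa'))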
