Uploading to S3 with Lambda

Published 2024-05-19 01:46:45


I created a Lambda function that downloads data from S3, performs a merge, and then re-uploads the result back to S3, but I get this error:

Error: { "errorMessage": "2020-05-18T23:23:27.556Z 37233f48-18ea-43eb-9030-3e8a2bf62048 Task timed out after 3.00 seconds" }

When I remove the lines between 45 and 58, it works fine.

https://ideone.com/RvOmPS

import pandas as pd
import numpy as np
import time
from io import StringIO  # python3; python2: BytesIO
import boto3
import s3fs
from botocore.exceptions import NoCredentialsError

def lambda_handler(event, context):

    # Dataset 1
    # Loading the data
    df1 = pd.read_csv("https://i...content-available-to-author-only...s.com/Minimum+Wage+Data.csv", encoding='unicode_escape')

    # Renaming the columns.
    df1.rename(columns={'High.Value': 'min_wage_by_law', 'Low.Value': 'min_wage_real'}, inplace=True)

    # Removing all unneeded values.
    df1 = df1.drop(['Table_Data', 'Footnote', 'High.2018', 'Low.2018'], axis=1)
    df1 = df1.loc[df1['Year'] > 1969].copy()

    # ---------------------------------

    # Dataset 2
    # Loading from the debt S3 bucket
    df2 = pd.read_csv("https://i...content-available-to-author-only...s.com/USGS_Final_File.csv")

    # Filtering: keep the range between 1969 and 2018.
    df2 = df2.loc[df2['Year'] > 1969].copy()
    df2 = df2.loc[df2['Year'] < 2018].copy()
    df2.rename(columns={'Real State Growth %': 'Real State Growth', 'Population (million)': 'Population Mil'}, inplace=True)

    # Cleaning the data
    df2['State Debt'] = df2['State Debt'].str.replace(',', '')
    df2['Local Debt'] = df2['Local Debt'].str.replace(',', '')
    df2["State and Local Debt"] = df2["State and Local Debt"].str.replace(',', '')
    df2["Gross State Product"] = df2["Gross State Product"].str.replace(',', '')

    # Cast to floating point
    df2[["State Debt", "Local Debt", "State and Local Debt", "Gross State Product"]] = df2[["State Debt", "Local Debt", "State and Local Debt", "Gross State Product"]].apply(pd.to_numeric)

    # --------------------------------------------
    # Merge the data through an inner join.
    full = pd.merge(df1, df2, on=['State', 'Year'])
    # --------------------------------------------
    filename = '/tmp/'  # local scratch directory (the only writable path in Lambda)
    file = 'debt_and_wage'  # name of file
    datetime = time.strftime("%Y%m%d%H%M%S")  # timestamp
    filenames3 = "%s%s%s.csv" % (filename, file, datetime)  # full path of the csv file

    full.to_csv(filenames3, header=True)

    ## Saving it on AWS

    s3 = boto3.resource('s3', aws_access_key_id='accesskeycantshare', aws_secret_access_key='key')

    s3.meta.client.upload_file(filenames3, 'information-arch', file + datetime + '.csv')
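Side note: the StringIO import above is never used; the same upload could also be done without writing to /tmp by serializing the CSV in memory. A minimal sketch of that alternative, assuming the 'information-arch' bucket from the question and that credentials come from the Lambda execution role rather than hard-coded keys:

from io import StringIO
import time

import boto3
import pandas as pd

def upload_dataframe(full: pd.DataFrame, bucket: str = 'information-arch') -> str:
    """Serialize the merged DataFrame in memory and put it straight into S3."""
    buffer = StringIO()
    full.to_csv(buffer, header=True)                  # write the CSV into the in-memory buffer
    key = 'debt_and_wage' + time.strftime("%Y%m%d%H%M%S") + '.csv'
    s3 = boto3.resource('s3')                         # assumes the execution role grants s3:PutObject
    s3.Object(bucket, key).put(Body=buffer.getvalue())
    return key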

3 Answers

You should increase your Lambda function's timeout. Newly created functions are terminated after 3 seconds by default.

Thanks everyone for the help. I just increased the timeout and it works fine now; that finished it for me.


Your default Lambda execution timeout is 3 seconds. Increase it to a value that suits your task:

Timeout – The amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds. The maximum allowed value is 900 seconds.
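Besides the console, the timeout can also be raised programmatically; a minimal sketch using boto3's update_function_configuration, with a hypothetical function name:

import boto3

lambda_client = boto3.client('lambda')

# Raise the timeout to 5 minutes; Lambda caps it at 900 seconds.
lambda_client.update_function_configuration(
    FunctionName='my-merge-function',   # hypothetical name; substitute your function's name
    Timeout=300,
)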
