JSONDecodeError when running a Python script that fetches data from an API

Posted 2024-04-20 09:23:49


I have a Python script that pulls balance-sheet reports for multiple locations in a loop, via API GET requests. I added an else branch to print the IDs of any locations that return no JSON data.

Sometimes the loop runs all the way through to the final report. Most of the time, though, the code throws the following error and stops:

Traceback (most recent call last):

  File "<ipython-input-2-85715734b89c>", line 1, in <module>
    runfile('C:/Users/PVarimalla/.spyder-py3/temp.py', wdir='C:/Users/PVarimalla/.spyder-py3')

  File "C:\Users\PVarimalla\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
    execfile(filename, namespace)

  File "C:\Users\PVarimalla\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "C:/Users/PVarimalla/.spyder-py3/temp.py", line 107, in <module>
    dict1 = json.loads(json_data)

  File "C:\Users\PVarimalla\Anaconda3\lib\json\__init__.py", line 348, in loads
    return _default_decoder.decode(s)

  File "C:\Users\PVarimalla\Anaconda3\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())

  File "C:\Users\PVarimalla\Anaconda3\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None

JSONDecodeError: Expecting value
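For context, this "Expecting value" error is exactly what `json.loads` raises whenever its input is not valid JSON at all — including an empty string (an empty response body) or an HTML error page. A minimal illustration:

```python
import json

def try_decode(payload):
    """Decode a JSON string, returning None instead of raising on bad input."""
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        return None

print(try_decode('{"ok": true}'))        # {'ok': True}
print(try_decode(""))                    # None - empty body -> "Expecting value"
print(try_decode("<html>404</html>"))    # None - HTML error page, not JSON
```

So any response whose body is empty or non-JSON (a 404 page, a rate-limit error, a dropped connection) will reproduce the traceback above.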

For example, a successful run: out of 50 locations it prints the 15 location IDs that return no JSON data, and I end up with a dataframe containing the balance sheets of all the other franchisees.

A failed run: the script prints 5, 6, or 3 location IDs that return no JSON data, then stops with the error above.

I don't understand why the script sometimes runs perfectly and at other times (most of the time) behaves erratically. Is it an internet-connection problem, or an issue with Spyder / Python 3.7?

I believe the script itself is error-free, but I'm not sure why I keep running into this problem. Please help.

Here is the code:

import requests
import json
import pandas as pd
access_token = 'XXXXXXXXX'
url = 'https://api.XXXX.com/v1/setup'
url_company = "https://api.*****.com/v1/Reporting/ProfitAndLoss?CompanyId=1068071&RelativeDateRange=LastMonth&DateFrequency=Monthly&UseAccountMapping=true&VerticalAnalysisType=None"
url_locations_trend = "https://api.*****.com/v1/location/search?CompanyId=1068071"
url_locations_mu = "https://api.*****.com/v1/location/search?CompanyId=2825826"
url_locations_3yrs = "https://api.qvinci.com/v1/location/search?CompanyId=1328328"
ult_result = requests.get(url_locations_trend,
      headers={
               'X-apiToken': '{}'.format(access_token)})


json_data_trend = ult_result.text
dict_trend = json.loads(json_data_trend)

locations_trend = {}

#Name
locations_trend["Name"] = []
for i in dict_trend["Items"]:
    locations_trend["Name"].append(i["Name"])

#ID
locations_trend["ID"] = []
for i in dict_trend["Items"]:
    locations_trend["ID"].append(i["Id"])

#creates dataframe for locations under trend transformations
df_trend = pd.DataFrame(locations_trend)

#making a call to get locations data for under 3 yrs
ul3_result = requests.get(url_locations_3yrs,
      headers={
               'X-apiToken': '{}'.format(access_token)})

json_data_3yrs= ul3_result.text
dict_3yrs = json.loads(json_data_3yrs)

locations_3yrs = {}

#Name
locations_3yrs["Name"] = []
for i in dict_3yrs["Items"]:
    locations_3yrs["Name"].append(i["Name"])

#ID
locations_3yrs["ID"] = []
for i in dict_3yrs["Items"]:
    locations_3yrs["ID"].append(i["Id"])

#creates dataframe for locations under 3 yrs  
df_3yrs = pd.DataFrame(locations_3yrs)

#making a call to get multi-unit locations data
ulm_result = requests.get(url_locations_mu,
      headers={
               'X-apiToken': '{}'.format(access_token)})

json_data_mu = ulm_result.text
dict_mu = json.loads(json_data_mu)

locations_mu = {}

#Name
locations_mu["Name"] = []
for i in dict_mu["Items"]:
    locations_mu["Name"].append(i["Name"])

#ID
locations_mu["ID"] = []
for i in dict_mu["Items"]:
    locations_mu["ID"].append(i["Id"])

#creates dataframe for multi-unit locations
df_mu = pd.DataFrame(locations_mu)


locations_df = pd.concat([df_mu, df_3yrs, df_trend])

df_final = pd.DataFrame()
count = 0
for i in locations_df["ID"]:
    if count < 3:
        url_bs = "https://api.******.com/v1/Reporting/BalanceSheet?DateFrequency=Monthly&UseAccountMapping=true&VerticalAnalysisType=None&IncludeComputedColumns=true&RelativeDateRange=LastTwoCYTD&UseCustomDateRange=false&CompanyId=2825826&Locations=" + i
    elif 2 < count < 12:
        url_bs = "https://api.******.com/v1/Reporting/BalanceSheet?DateFrequency=Monthly&UseAccountMapping=true&VerticalAnalysisType=None&IncludeComputedColumns=true&RelativeDateRange=LastTwoCYTD&UseCustomDateRange=false&CompanyId=1328328&Locations=" + i
    else :
        url_bs = "https://api.******.com/v1/Reporting/BalanceSheet?DateFrequency=Monthly&UseAccountMapping=true&VerticalAnalysisType=None&IncludeComputedColumns=true&RelativeDateRange=LastTwoCYTD&UseCustomDateRange=false&CompanyId=1068071&Locations=" + i

    result = requests.get(url_bs,
          headers={
                   'X-apiToken': '{}'.format(access_token)})

    json_data = result.text
    if(json_data != ""): 
        final = {}

        dict1 = json.loads(json_data)

        final["Months"] = dict1["ReportModel"]["ColumnNames"]
        final["Location"] = [dict1["SelectedOptions"]["Locations"][0]]*len(final["Months"])
        # report rows to extract (avoid naming this `set`, which shadows the builtin)
        wanted = {"Total 10000 Cash","Total 12000 Inventory Asset","Total Other Current Assets","Total Fixed Assets","Total ASSETS",
               "Total Accounts Payable","Total Credit Cards","24004 Customer Deposits","Total Liabilities","Total Equity","Total Long Term Liabilities"}

        def search(dict2):
            """Recursively collect the values of every wanted row in the report tree."""
            if len(dict2["Children"]) == 0:
                return
            for i in dict2["Children"]:
                if i["Name"] in wanted:
                    final[i["Name"]] = []
                    for j in i["Values"]:
                        final[i["Name"]].append(j["Value"])
                search(i)

            if ("Total " + dict2["Name"]) in wanted:
                final["Total " + dict2["Name"]] = []
                for j in dict2["TotalRow"]["Values"]:
                    final["Total " + dict2["Name"]].append(j["Value"])
            return

        for total in dict1["ReportModel"]["TopMostRows"]:
            search(total)

        df_final = pd.concat([df_final,pd.DataFrame(final)], sort = False)
    else: print(i)
    count = count + 1

#exporting dataframe to csv
df_final.to_csv('file1.csv')

Thank you.


1 Answer

You should post your code and the full exception to get a more precise answer. That said, it looks to me as if the API is sometimes not returning JSON at all (for instance, you may be sending many requests in a short time, so the API returns a 404).

Try printing or logging the API response before decoding it, to verify this.

Edit: based on your feedback, adding a pause between iterations should solve your problem. You can use time.sleep(0.5) inside the for loop (remember to add import time).

You should also consider wrapping the request in try/except so that exceptions are handled more gracefully.
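Putting those suggestions together, a sketch of what the request step could look like. The function and parameter names here (`safe_json`, `fetch_json`, `retries`, `delay`) are illustrative, not part of the original script:

```python
import time
import requests

def safe_json(response):
    """Decode a response as JSON; return None on HTTP errors or
    non-JSON bodies instead of raising JSONDecodeError."""
    if response.status_code != 200:
        return None
    try:
        return response.json()
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return None

def fetch_json(url, token, retries=3, delay=0.5):
    """Hypothetical helper: retry a GET a few times, pausing between
    attempts, and return the decoded JSON (or None if all attempts fail)."""
    for _ in range(retries):
        result = requests.get(url, headers={'X-apiToken': token})
        data = safe_json(result)
        if data is not None:
            return data
        time.sleep(delay)  # back off before retrying
    return None
```

In the main loop you would then call `fetch_json(url_bs, access_token)` and append the location ID to a list of failures when it returns None, instead of letting json.loads raise.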
