Currently, I'm using the bigrquery package to query tables from BigQuery because of its simple and straightforward nature. It works fine for simple queries such as:
SELECT * FROM project_salsa_101.dashboards.product_20190501
but it struggles when I try to access date-sharded tables through a wildcard: SELECT * FROM project_salsa_101.dashboards.product_*. For example, when I run:
project_id <- "project_salsa_101"
# Query
sql_string <- "SELECT * FROM `project_salsa_101.dashboards.product_*`
LIMIT 10000000"
# Execute the query and store the result
query_results <- query_exec(sql_string, project = project_id, useLegacySql = FALSE)
query_results
it throws the following error:
Error: Invalid table name: `project_salsa_101.dashboards.product_*`
[Try using standard SQL (https://cloud.google.com/bigquery/docs/reference/standard-sql/enabling-standard-sql)]. [invalid]
Traceback:
1. query_exec(sql_string, project = project_id, useLegacySql = FALSE)
2. run_query_job(query = query, project = project, destination_table = destination_table,
. default_dataset = default_dataset, create_disposition = create_disposition,
. write_disposition = write_disposition, use_legacy_sql = use_legacy_sql,
. quiet = quiet, ...)
3. wait_for(job, quiet = quiet)
4. signal_reason(err$reason, err$message)
5. rlang::abort(message, class = paste0("bigrquery_", reason))
6. signal_abort(cnd)
Setting the parameter useLegacySql = TRUE does not help either.
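(For what it's worth, legacy SQL quotes table references with square brackets rather than backticks, so a backtick-quoted path cannot parse under useLegacySql = TRUE in any case. A minimal sketch of the two quoting styles; the helper functions below are made up for illustration, not part of any BigQuery client library:)

```python
# Hypothetical helpers illustrating the two BigQuery quoting styles.

def standard_sql_ref(project, dataset, table):
    # Standard SQL: backticks around the whole dot-separated path.
    return f"`{project}.{dataset}.{table}`"

def legacy_sql_ref(project, dataset, table):
    # Legacy SQL: square brackets, with a colon after the project.
    return f"[{project}:{dataset}.{table}]"

print(standard_sql_ref("project_salsa_101", "dashboards", "product_20190501"))
# `project_salsa_101.dashboards.product_20190501`
print(legacy_sql_ref("project_salsa_101", "dashboards", "product_20190501"))
# [project_salsa_101:dashboards.product_20190501]
```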
The query itself is fine, because when I run it in Python as follows, it runs perfectly from the console:
from datetime import datetime
from google.cloud import bigquery
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file(
    'credentials.json')
project_id = 'project_salsa_101'
client = bigquery.Client(credentials=credentials, project=project_id)
query_job = client.query("""
SELECT * FROM `project_salsa_101.dashboards.product_*`
LIMIT 10000000""")
results = query_job.result()
The query does reach BigQuery, and I can see it submitted there under the query history. It shows that the query failed there as well, with the same error message listed below:
Error: Invalid table name: `project_salsa_101.dashboards.product_*`
[Try using standard SQL (https://cloud.google.com/bigquery/docs/reference/standard-sql/enabling-standard-sql)]. [invalid]
Interestingly, when I copy the query from that same failed job in the job history and paste it into the console, it runs perfectly fine.
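One thing worth double-checking when a query fails on submission but succeeds when re-pasted is that the whole wildcard path, including the trailing *, sits inside a single pair of plain ASCII backticks; a stray smart quote or an extra backtick splits the identifier and produces exactly this "Invalid table name" error. A quick sanity-check sketch (the `build_wildcard_query` helper is hypothetical, for illustration only):

```python
def build_wildcard_query(project, dataset, prefix, limit=10_000_000):
    """Build a standard-SQL wildcard query with the full table path
    (wildcard included) inside one pair of backticks."""
    table_ref = f"`{project}.{dataset}.{prefix}*`"
    return f"SELECT * FROM {table_ref} LIMIT {limit}"

sql_string = build_wildcard_query("project_salsa_101", "dashboards", "product_")

# Sanity checks: exactly one opening and one closing backtick,
# and the wildcard sits inside them.
assert sql_string.count("`") == 2
assert "product_*`" in sql_string
print(sql_string)
# SELECT * FROM `project_salsa_101.dashboards.product_*` LIMIT 10000000
```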