How to set up Splunk with SailPoint Adaptive Response across multiple SailPoint IIQ environments and the Splunk TA configuration

Published 2024-03-29 13:06:15


I noticed that the Splunk documentation on this site says it should support multiple environments; however, looking at the code in the Python script, it does not appear to.

SailPoint IIQ version: 8.1p3
Splunk version: 8.0.9
TA version: 2.0.5

After reviewing the add-on's code (the Python code Splunk uses to read data from SailPoint), I noticed the following:

Splunk/etc/apps/Splunk_TA_sailpoint is the directory the add-on extracts its files into. Splunk/etc/apps/Splunk_TA_sailpoint/bin/input_module_sailpoint_identityq_auditevents.py is the file that caught my attention. The way the code works, it defines a single file to store the epoch date, following the logic outlined below:

1. First, there is a check to see whether the file (audit_events_checkpoint.txt) exists.
2. If Python cannot find it, it attempts to create it.
3. If that fails as well, the folder structure is created and the file is then added.
4. After the first three steps, Python opens the file.
5. Python then reads the file and extracts the first value (a Unix/epoch timestamp).
6. That value is then used as part of its outbound query.

    #Read the timestamp from the checkpoint file, and create the checkpoint file if necessary
    #The checkpoint file contains the epoch datetime of the 'created' date of the last event seen in the previous execution of the script. 
    checkpoint_file = os.path.join(os.environ['SPLUNK_HOME'], 'etc', 'apps', 'Splunk_TA_sailpoint', 'tmp', "audit_events_checkpoint.txt")
    try:
        file = open(checkpoint_file, 'r')
    except IOError:
        try:
            file = open(checkpoint_file, 'w')
        except IOError:
            os.makedirs(os.path.dirname(checkpoint_file))
            file = open(checkpoint_file, 'w')
            
    with open(checkpoint_file, 'r') as f:
         checkpoint_time = f.readlines()
     
    #current epoch time in milliseconds 
    # new_checkpoint_time = int((datetime.datetime.utcnow() - datetime.datetime(1970, 1, 1)).total_seconds() *1000)
    
    if len(checkpoint_time) == 1:
        checkpoint_time =int(checkpoint_time[0])
    else:
        checkpoint_time = 1562055000000
        helper.log_info("No checkpoint time available. Setting it to default value.")
    
    #Standard query params, checkpoint_time value is set from what was saved in the checkpoint file
    queryparams= {
         "startTime" : checkpoint_time,
         "startIndex" : 1,
         "count" : 100
    }
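When no checkpoint value is found, the code above falls back to the hard-coded value 1562055000000. As a quick sanity check (my own snippet, not part of the TA), converting that epoch-millisecond value to a UTC datetime shows which date the default actually rewinds every environment to:

```python
import datetime

# The TA's hard-coded fallback checkpoint, in epoch milliseconds.
default_checkpoint = 1562055000000

# Convert milliseconds to seconds and render as a UTC datetime.
dt = datetime.datetime.fromtimestamp(default_checkpoint / 1000,
                                     tz=datetime.timezone.utc)
print(dt.isoformat())  # 2019-07-02T08:10:00+00:00
```

In other words, a missing or clobbered checkpoint makes the next run re-query audit events all the way back to July 2019.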

1. Jumping to the next reference, we find that the JSON object pulled in is used to create the new timestamp the system will use in its next request.
2. It then takes that value and writes it to the file, which is reused on the next invocation.

    #Iterate the audit events array and create Splunk events for each one
    invalid_response = isListEmpty(auditEvents)
    if not invalid_response:
        for auditEvent in auditEvents:
 
            data = json.dumps(auditEvent)
            event = helper.new_event(data=data, time=None, host=None, index=helper.get_output_index(), source=helper.get_input_type(), sourcetype=helper.get_sourcetype(), done=True, unbroken=True)
            ew.write_event(event)
 
        #Get the created date of the last audit event in the run and save it as a checkpoint key in the checkpoint file
        list_of_created_date = extract_element_from_json(results, ["auditEvents", "created"])
 
        new_checkpoint_created_date = list_of_created_date[-1]
        helper.log_info("DEBUG New checkpoint date \n{}".format(new_checkpoint_created_date))
 
    #Write new checkpoint key to the checkpoint file
        with open(checkpoint_file, 'r+') as f:
            f.seek(0)
            f.write(str(new_checkpoint_created_date))
            f.truncate()
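Because every input instance writes back to the same fixed checkpoint path, the last environment to finish a run silently overwrites the others' state. A minimal standalone sketch of this last-writer-wins behavior (using a temp directory and hypothetical load/save helpers, not the TA's own code):

```python
import os
import tempfile

# A single shared checkpoint path, mirroring the TA's fixed
# audit_events_checkpoint.txt location.
checkpoint_file = os.path.join(tempfile.gettempdir(),
                               "audit_events_checkpoint.txt")

def save_checkpoint(epoch_ms: int) -> None:
    # Overwrites whatever any other input wrote previously.
    with open(checkpoint_file, "w") as f:
        f.write(str(epoch_ms))

def load_checkpoint() -> int:
    with open(checkpoint_file) as f:
        return int(f.read())

# Environment A finishes a run and saves its last-seen event time.
save_checkpoint(1700000000000)
# Environment B finishes later and saves a different time.
save_checkpoint(1700000500000)

# Environment A's next run now starts from B's checkpoint,
# skipping or re-reading events.
print(load_checkpoint())  # 1700000500000
```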

So my thinking is this: when we enter the connector information for each environment in Splunk (we have 6 in total), the checkpoint_file gets overwritten. I also assume that every connected environment reads the same timestamp, since they all appear to pull from the same file. Did we miss some configuration, or is this a coding gap?
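If it is a coding gap, one plausible fix (a sketch only; how to obtain a per-input identifier from the modular-input helper would need to be verified against the Add-on Builder API) would be to key the checkpoint file on the input's name, so each environment keeps its own state:

```python
import os

def checkpoint_path(splunk_home: str, input_name: str) -> str:
    # One checkpoint file per configured input/environment, instead of
    # the single shared audit_events_checkpoint.txt.
    safe_name = "".join(c if c.isalnum() else "_" for c in input_name)
    return os.path.join(
        splunk_home, "etc", "apps", "Splunk_TA_sailpoint", "tmp",
        "audit_events_checkpoint_{}.txt".format(safe_name),
    )

# Each configured input would resolve to its own file, e.g.:
print(checkpoint_path("/opt/splunk", "prod-env-1"))
print(checkpoint_path("/opt/splunk", "dev-env-2"))
```

With distinct paths per input, the six environments would no longer race on a single timestamp.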