Extracting CSV data with a regular expression when some fields are quoted and contain commas

0 votes
3 answers
675 views
Asked 2025-04-18 11:17

I have input data of the following type (for use in Splunk):

svr28pr,Linux File System-ALL,success,32.87,2638.259,26/06/14 19:00,26/06/14 21:03,avamar xxxxx1.network.local,Activity completed with exceptions.,26/06/14 19:00
SVr44PR:Staging_SyncDB,incr,success,1271,1271,27/06/14 11:28,27/06/14 11:28,SQL,,,1/01/70 09:59

I need to split this data into separate fields, and the following expression works reasonably well.

(?<client>[^,]+),(?<backuptype>[^,]+),(?<status>[^,]+),(?<size>[^,]+),(?<dump>[^,]+),(?<start>[^,]+),(?<complete>[^,]+),(?<application>[^,]+),(?<server>[^,]+),(?<comment>[^,]+)

However, because the backup type names have changed, the second field may now be wrapped in quotes and may itself contain commas, for example:

svr08ts,"Windows VSS-ALL,ALL",success,0.067,39.627,26/06/14 21:32,26/06/14 21:38,avamar,xxxxxxx2.network.local,Activity completed with exceptions.,26/06/14 20:00

Is there a way for a regular expression to detect whether a field is quoted and, if so, extract the data between the quotes into a named group?
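For reference, one way to handle an optionally quoted field in a single pattern is to let every field match either a quoted run (which may contain commas) or a plain run of non-comma characters, then strip the quotes afterwards. This is only a sketch using Python's re module; Python does not allow reusing a group name across alternatives, hence the post-processing step:

```python
import re

# Each field is either "..." (commas allowed inside) or a run of non-commas.
FIELD = r'(?:"[^"]*"|[^,]*)'
pattern = re.compile(
    r'(?P<client>{f}),(?P<backuptype>{f}),(?P<status>{f}),(?P<size>{f}),'
    r'(?P<dump>{f}),(?P<start>{f}),(?P<complete>{f}),(?P<application>{f}),'
    r'(?P<server>{f}),(?P<comment>{f})'.format(f=FIELD)
)

line = ('svr08ts,"Windows VSS-ALL,ALL",success,0.067,39.627,26/06/14 21:32,'
        '26/06/14 21:38,avamar,xxxxxxx2.network.local,'
        'Activity completed with exceptions.,26/06/14 20:00')
m = pattern.match(line)
# Strip the surrounding quotes that the quoted alternative captures.
fields = {name: value.strip('"') for name, value in m.groupdict().items()}
print(fields['backuptype'])  # Windows VSS-ALL,ALL
```

This keeps the named groups from the original expression, but as the answers below point out, a real CSV parser is the more robust choice (this pattern does not handle escaped quotes inside quoted fields).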

3 Answers


When working with CSV data you don't need a regular expression; just use the csv module, which handles quoted fields by default. You can use csv.DictReader to produce a sequence of dictionaries, similar in structure to what groupdict() returns from a regex match object.

If your input file contains:

svr28pr,Linux File System-ALL,success,32.87,2638.259,26/06/14 19:00,26/06/14 21:03,avamar,xxxxx1.network.local,Activity completed with exceptions.,26/06/14 19:00
SVr44PR:Staging_SyncDB,incr,success,1271,1271,27/06/14 11:28,27/06/14 11:28,SQL,,,1/01/70 09:59
svr08ts,"Windows VSS-ALL,ALL",success,0.067,39.627,26/06/14 21:32,26/06/14 21:38,avamar,xxxxx1.network.local,Activity completed with exceptions.,26/06/14 20:00

then this script:

import csv
from pprint import pprint

fields = 'client backuptype status size dump start complete application server comment'.split()
with open('input.csv', newline='') as f:
    reader = csv.DictReader(f, fieldnames=fields)
    for row_dict in reader:
        pprint(row_dict)    # process the row here

will output:

{None: ['26/06/14 19:00'],
 'application': 'avamar',
 'backuptype': 'Linux File System-ALL',
 'client': 'svr28pr',
 'comment': 'Activity completed with exceptions.',
 'complete': '26/06/14 21:03',
 'dump': '2638.259',
 'server': 'xxxxx1.network.local',
 'size': '32.87',
 'start': '26/06/14 19:00',
 'status': 'success'}
{None: ['1/01/70 09:59'],
 'application': 'SQL',
 'backuptype': 'incr',
 'client': 'SVr44PR:Staging_SyncDB',
 'comment': '',
 'complete': '27/06/14 11:28',
 'dump': '1271',
 'server': '',
 'size': '1271',
 'start': '27/06/14 11:28',
 'status': 'success'}
{None: ['26/06/14 20:00'],
 'application': 'avamar',
 'backuptype': 'Windows VSS-ALL,ALL',
 'client': 'svr08ts',
 'comment': 'Activity completed with exceptions.',
 'complete': '26/06/14 21:38',
 'dump': '39.627',
 'server': 'xxxxx1.network.local',
 'size': '0.067',
 'start': '26/06/14 21:32',
 'status': 'success'}

And in particular:

>>> print(row_dict['backuptype'])
Windows VSS-ALL,ALL

as required.
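Note the None key in each dictionary above: the data rows have one more column than there are field names, so DictReader collects the overflow values in a list under None. Passing restkey gives that extra column a proper name instead. A small sketch, where the name 'scheduled' is a hypothetical label for the trailing timestamp column:

```python
import csv
from io import StringIO

fields = 'client backuptype status size dump start complete application server comment'.split()
data = ('svr08ts,"Windows VSS-ALL,ALL",success,0.067,39.627,26/06/14 21:32,'
        '26/06/14 21:38,avamar,xxxxx1.network.local,'
        'Activity completed with exceptions.,26/06/14 20:00\n')

# restkey='scheduled' is an assumed name for the unaccounted-for 11th column;
# any values beyond the named fields land there as a list.
reader = csv.DictReader(StringIO(data), fieldnames=fields, restkey='scheduled')
row = next(reader)
print(row['scheduled'])   # ['26/06/14 20:00']
print(row['backuptype'])  # Windows VSS-ALL,ALL
```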


As @thimoty-shields said, use the csv module.

import csv

with open('backups.csv', newline='') as csvfile:
    backups = csv.reader(csvfile)
    for row in backups:
        for cell in row:
            # do what you need with each cell
            print(cell)

In Splunk, you can use multikv to process tabular data.
