Removing consecutive duplicates from a 2D list in Python?
I'd like to know how to remove consecutive duplicate elements from a two-dimensional list, where duplicates are decided by a particular element (here, the second one).
I've tried a few itertools combinations but couldn't get it to work.
Can anyone show me how to solve this?
Input
192.168.1.232 >>>>> 173.194.36.64 , 14 , 15 , 16
192.168.1.232 >>>>> 173.194.36.64 , 14 , 15 , 17
192.168.1.232 >>>>> 173.194.36.119 , 23 , 30 , 31
192.168.1.232 >>>>> 173.194.36.98 , 24 , 40 , 41
192.168.1.232 >>>>> 173.194.36.98 , 24 , 40 , 62
192.168.1.232 >>>>> 173.194.36.74 , 25 , 42 , 43
192.168.1.232 >>>>> 173.194.36.74 , 25 , 42 , 65
192.168.1.232 >>>>> 173.194.36.74 , 26 , 44 , 45
192.168.1.232 >>>>> 173.194.36.74 , 26 , 44 , 66
192.168.1.232 >>>>> 173.194.36.78 , 27 , 46 , 47
Output
192.168.1.232 >>>>> 173.194.36.64 , 14 , 15 , 16
192.168.1.232 >>>>> 173.194.36.119 , 23 , 30 , 31
192.168.1.232 >>>>> 173.194.36.98 , 24 , 40 , 41
192.168.1.232 >>>>> 173.194.36.74 , 25 , 42 , 43
192.168.1.232 >>>>> 173.194.36.78 , 27 , 46 , 47
This is the output I expect.
Update
The list shown above is a prettified form.
The actual list looks like this:
>>> for x in connection_frame:
    print x
['192.168.1.232', '173.194.36.64', 14, 15, 16]
['192.168.1.232', '173.194.36.64', 14, 15, 17]
['192.168.1.232', '173.194.36.119', 23, 30, 31]
['192.168.1.232', '173.194.36.98', 24, 40, 41]
['192.168.1.232', '173.194.36.98', 24, 40, 62]
['192.168.1.232', '173.194.36.74', 25, 42, 43]
['192.168.1.232', '173.194.36.74', 25, 42, 65]
['192.168.1.232', '173.194.36.74', 26, 44, 45]
['192.168.1.232', '173.194.36.74', 26, 44, 66]
['192.168.1.232', '173.194.36.78', 27, 46, 47]
['192.168.1.232', '173.194.36.78', 27, 46, 67]
['192.168.1.232', '173.194.36.78', 28, 48, 49]
['192.168.1.232', '173.194.36.78', 28, 48, 68]
['192.168.1.232', '173.194.36.79', 29, 50, 51]
['192.168.1.232', '173.194.36.79', 29, 50, 69]
['192.168.1.232', '173.194.36.119', 32, 52, 53]
['192.168.1.232', '173.194.36.119', 32, 52, 74]
3 Answers
0
Pandas.groupby is an alternative to itertools.groupby; its advantage is that it keeps track of which elements of the original list are consecutive (or not) by giving you row numbers rather than iterators. Like this:
import pandas

df = pandas.DataFrame(connection_frame)
print df
Out:
0 1 2 3 4
0 '192.168.1.232' '173.194.36.64' 14 15 16
1 '192.168.1.232' '173.194.36.64' 14 15 17
2 '192.168.1.232' '173.194.36.119' 23 30 31
3 '192.168.1.232' '173.194.36.98' 24 40 41
4 '192.168.1.232' '173.194.36.98' 24 40 62
5 '192.168.1.232' '173.194.36.74' 25 42 43
6 '192.168.1.232' '173.194.36.74' 25 42 65
7 '192.168.1.232' '173.194.36.74' 26 44 45
8 '192.168.1.232' '173.194.36.74' 26 44 66
9 '192.168.1.232' '173.194.36.78' 27 46 47
10 '192.168.1.232' '173.194.36.78' 27 46 67
11 '192.168.1.232' '173.194.36.78' 28 48 49
12 '192.168.1.232' '173.194.36.78' 28 48 68
13 '192.168.1.232' '173.194.36.79' 29 50 51
14 '192.168.1.232' '173.194.36.79' 29 50 69
15 '192.168.1.232' '173.194.36.119' 32 52 53
16 '192.168.1.232' '173.194.36.119' 32 52 74
You can then group on column 2 and print out the groups, like this:
gps = df.groupby(2).groups
print gps
Out:
{' 14': [0, 1],
' 23': [2],
' 24': [3, 4],
' 25': [5, 6],
' 26': [7, 8],
' 27': [9, 10],
' 28': [11, 12],
' 29': [13, 14],
' 32': [15, 16]}
See the row numbers for each group? Within each of the gps lists there are many ways to drop the consecutive duplicates. Here's one:
valid_rows = list()
for g in gps.values():
    old_row = g[0]
    valid_rows.append(old_row)
    for row_id in range(1, len(g)):
        new_row = g[row_id]
        if new_row - old_row != 1:
            valid_rows.append(new_row)
        old_row = new_row
print valid_rows
Out: [5, 3, 9, 7, 0, 2, 15, 13, 11]
Finally, use valid_rows (sorted, since the groups dict has no particular order) to index the pandas DataFrame:
print df.ix[sorted(valid_rows)]
Out:
0 '192.168.1.232' '173.194.36.64' 14 15 16
2 '192.168.1.232' '173.194.36.119' 23 30 31
3 '192.168.1.232' '173.194.36.98' 24 40 41
5 '192.168.1.232' '173.194.36.74' 25 42 43
7 '192.168.1.232' '173.194.36.74' 26 44 45
9 '192.168.1.232' '173.194.36.78' 27 46 47
11 '192.168.1.232' '173.194.36.78' 28 48 49
13 '192.168.1.232' '173.194.36.79' 29 50 51
15 '192.168.1.232' '173.194.36.119' 32 52 53
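If all you need is the consecutive-duplicate removal itself (rather than the per-group row numbers), a shift-and-compare on the key column gives the same rows in one step. This is only a minimal sketch, assuming connection_frame as shown in the question and keying on column 1 (the destination IP); in current pandas, .loc replaces the old .ix indexer used above, but this sketch only needs boolean indexing:
import pandas as pd

df = pd.DataFrame(connection_frame)

# Keep a row only when its key column (index 1) differs from the key of
# the row directly above it, i.e. at the start of each consecutive run.
deduped = df[df[1] != df[1].shift()]
print(deduped)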
3
import itertools
data = """192.168.1.232 >>>>> 173.194.36.64 , 14 , 15 , 16
192.168.1.232 >>>>> 173.194.36.64 , 14 , 15 , 17
192.168.1.232 >>>>> 173.194.36.119 , 23 , 30 , 31
192.168.1.232 >>>>> 173.194.36.98 , 24 , 40 , 41
192.168.1.232 >>>>> 173.194.36.98 , 24 , 40 , 62
192.168.1.232 >>>>> 173.194.36.74 , 25 , 42 , 43
192.168.1.232 >>>>> 173.194.36.74 , 25 , 42 , 65
192.168.1.232 >>>>> 173.194.36.74 , 26 , 44 , 45
192.168.1.232 >>>>> 173.194.36.74 , 26 , 44 , 66
192.168.1.232 >>>>> 173.194.36.78 , 27 , 46 , 47""".split("\n")
for k, g in itertools.groupby(data, lambda l:l.split()[2]):
    print next(g)
This code outputs:
192.168.1.232 >>>>> 173.194.36.64 , 14 , 15 , 16
192.168.1.232 >>>>> 173.194.36.119 , 23 , 30 , 31
192.168.1.232 >>>>> 173.194.36.98 , 24 , 40 , 41
192.168.1.232 >>>>> 173.194.36.74 , 25 , 42 , 43
192.168.1.232 >>>>> 173.194.36.78 , 27 , 46 , 47
(This uses a list of strings, but it is easy to adapt to a list of lists; see the sketch below.)
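For example, a minimal sketch of that adaptation, assuming connection_frame is the list of lists from the question and grouping on index 1 (the destination IP):
import itertools

# Consecutive rows sharing the same destination IP fall into one group;
# next(g) takes the first row of each group.
deduped = [next(g) for _, g in itertools.groupby(connection_frame, key=lambda row: row[1])]
for row in deduped:
    print row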
3
Since you want to keep the order and only remove the consecutive entries in a single pass, I don't know of a special built-in for this. So here's a brute-force approach:
>>> remList = []
>>> for i in range(len(connection_frame)):
...     if (i != len(connection_frame)-1) and (connection_frame[i][1] == connection_frame[i+1][1]):
...         remList.append(i)
...
>>> for i in remList[::-1]:
...     connection_frame.pop(i)
...
['192.168.1.232', '173.194.36.119', 32, 52, 53]
['192.168.1.232', '173.194.36.79', 29, 50, 51]
['192.168.1.232', '173.194.36.78', 28, 48, 49]
['192.168.1.232', '173.194.36.78', 27, 46, 67]
['192.168.1.232', '173.194.36.78', 27, 46, 47]
['192.168.1.232', '173.194.36.74', 26, 44, 45]
['192.168.1.232', '173.194.36.74', 25, 42, 65]
['192.168.1.232', '173.194.36.74', 25, 42, 43]
['192.168.1.232', '173.194.36.98', 24, 40, 41]
['192.168.1.232', '173.194.36.64', 14, 15, 16]
>>>
>>> for conn in connection_frame:
...     print conn
...
['192.168.1.232', '173.194.36.64', 14, 15, 17]
['192.168.1.232', '173.194.36.119', 23, 30, 31]
['192.168.1.232', '173.194.36.98', 24, 40, 62]
['192.168.1.232', '173.194.36.74', 26, 44, 66]
['192.168.1.232', '173.194.36.78', 28, 48, 68]
['192.168.1.232', '173.194.36.79', 29, 50, 69]
['192.168.1.232', '173.194.36.119', 32, 52, 74]
>>>
Or, if you want to do it in one go, you can use a list comprehension:
>>> new_frame = [conn for conn in connection_frame if not connection_frame.index(conn) in [i for i in range(len(connection_frame)) if (i != len(connection_frame)-1) and (connection_frame[i][1] == connection_frame[i+1][1])]]
>>>
>>> for conn in new_frame:
...     print conn
...
['192.168.1.232', '173.194.36.64', 14, 15, 17]
['192.168.1.232', '173.194.36.119', 23, 30, 31]
['192.168.1.232', '173.194.36.98', 24, 40, 62]
['192.168.1.232', '173.194.36.74', 26, 44, 66]
['192.168.1.232', '173.194.36.78', 28, 48, 68]
['192.168.1.232', '173.194.36.79', 29, 50, 69]
['192.168.1.232', '173.194.36.119', 32, 52, 74]
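Note that this keeps the last entry of each consecutive run, whereas the expected output in the question keeps the first. If you want the first of each run instead, here is a minimal single-pass sketch, assuming connection_frame is the original, unmodified list:
deduped = []
for row in connection_frame:
    # Keep the row only when its key (index 1, the destination IP)
    # differs from the key of the last row kept, i.e. a new run starts.
    if not deduped or row[1] != deduped[-1][1]:
        deduped.append(row)
for row in deduped:
    print row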