Finding similarities in a dictionary

Published 2024-05-14 17:32:32


I am trying to find similarities between multiple .txt files. I have put all of these files into a dictionary, keyed by filename.

Current code:

import pandas as pd
from os import listdir, chdir

path = r'C:\...path'
chdir(path)
files = listdir(path)
files_dict = {}

for filename in files:
    if filename.lower().endswith('.txt'):
        files_dict[filename] = pd.read_csv(filename).to_dict('split')

for key, value in files_dict.items():
    print(key + str(value) + '\n')

The keys here are the filenames; the values are the headers and the data. I want to find out whether any values are duplicated across the files, so that I can join them in SQL. I don't know how to do this.

Edit: example file:

timestamp,Name,Description,Default Column Layout,Analysis View Name
00000000B42852FA,ADM_EIG,Administratief eigenaar,ADM_EIG,ADM_EIG
000000005880959E,OPZ,Opzeggingen,STANDAARD,

Which the code turns into:

Acc_ Schedule Name.txt{'index': [0, 1], 'columns': ['timestamp', 'Name', 'Description', 'Default Column Layout', 'Analysis View Name'], 'data': [['00000000B42852FA', 'ADM_EIG', 'Administratief eigenaar', 'ADM_EIG', 'ADM_EIG'], ['000000005880959E', 'OPZ', 'Opzeggingen', 'STANDAARD', nan]]}

Edit 2: suggested code

for key, value in files_dict.items():
    data = value['data']
    counter = Counter([item for sublist in data for item in sublist])
    print([value for value, count in counter.items()])

Output: ['00000000B99BD831', 5050, 'CK102', '0,00000000000000000000', 'Thuiswonend', 0, '00000000B99BD832', ........


Tags: file, path, key, code, name, in, txt, for
2 Answers

If all the columns are the same across all files, I think you can use pd.duplicated() as follows:

import pathlib
import pandas as pd


def read_txt_files(dir_path):
    df_list = []
    for filename in pathlib.Path(dir_path).glob('*.txt'):
        # print(filename)
        df = pd.read_csv(filename, index_col=0)
        df['filename'] = filename  # just to save filename as an optional key
        df_list.append(df)

    return pd.concat(df_list)

df = read_txt_files(r'C:\...path')  # probably you should change path in this line     
df.set_index('filename', append=True, inplace=True)
print(df)
                                Name              Description  ...
timestamp        filename                                       
00000000B42852FA first.txt   ADM_EIG  Administratief eigenaar  ...
000000005880959E first.txt       OPZ              Opzeggingen  ... 
00000000B42852FA second.txt  ADM_EIG  Administratief eigenaar  ... 
000000005880959K second.txt      XYZ              Opzeggingen  ... 

So you can use duplicated to flag the repeated rows by index:

df.duplicated(keep='first')

Out:
timestamp         filename  
00000000B42852FA  first.txt     False
000000005880959E  first.txt     False
00000000B42852FA  second.txt     True
000000005880959K  second.txt    False
dtype: bool

and use it to filter the data:

df[~df.duplicated(keep='first')]

Out:
                                Name              Description  ...
timestamp        filename                                       
00000000B42852FA first.txt   ADM_EIG  Administratief eigenaar  ...  
000000005880959E first.txt       OPZ              Opzeggingen  ... 
000000005880959K second.txt      XYZ              Opzeggingen  ...
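Since the end goal is a SQL join, you may also want to know which *values* two files have in common, column by column, as join-key candidates. A minimal sketch, assuming two hypothetical frames standing in for two of the loaded files:

```python
import pandas as pd

# Hypothetical frames standing in for two of the loaded files.
df_a = pd.DataFrame({'Name': ['ADM_EIG', 'OPZ'],
                     'Description': ['Administratief eigenaar', 'Opzeggingen']})
df_b = pd.DataFrame({'Name': ['ADM_EIG', 'XYZ'],
                     'Description': ['Administratief eigenaar', 'Opzeggingen']})

# For every column both files share, collect the values present in both.
shared_by_col = {}
for col in df_a.columns.intersection(df_b.columns):
    shared = set(df_a[col].dropna()) & set(df_b[col].dropna())
    if shared:
        shared_by_col[col] = shared

print(shared_by_col)
```

A column whose shared set is large relative to the files is a reasonable candidate to join on.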

Edit: an example with different columns in the files but the same schema.

first.txt:

timestamp,Name,Descr,Column Layout,Analysis View Name
00000000B42852FA,ADM_EIG,Administratief eigenaar,ADM_EIG,ADM_EIG
000000005880959E,OPZ,Opzeggingen,STANDAARD,

second.txt:

timestamp,Descr,Default Column Layout,Analysis View Name
00000000B42852FA,Administratief,ADM_EIG,ADM_EIG
000000005880959K,Opzeggingen,STANDAARD,

third.txt:

timestamp,Descr,Default Column Layout,Analysis View Name
00000000B42852FA,Administratief eigenaar,ADM_EIG,ADM_EIG
000000005880959K,Opzeggingen,STANDAARD,

The last rows of second.txt and third.txt are duplicates.

Applying the same code:

...
print(df)

Out:  # partial, because it's too wide
                            Analysis View Name Column Layout  ...
timestamp        filename                                      
00000000B42852FA first.txt             ADM_EIG       ADM_EIG  ... 
000000005880959E first.txt                 NaN     STANDAARD  ... 
00000000B42852FA second.txt            ADM_EIG           NaN  ... 
000000005880959K second.txt                NaN           NaN  ... 
00000000B42852FA third.txt             ADM_EIG           NaN  ... 
000000005880959K third.txt                 NaN           NaN  ...

Missing values (where a .txt file has no such column) are filled with NaN. Locating the duplicated rows:

df.duplicated(keep='first')

Out:
timestamp         filename  
00000000B42852FA  first.txt     False
000000005880959E  first.txt     False
00000000B42852FA  second.txt    False
000000005880959K  second.txt    False
00000000B42852FA  third.txt     False
000000005880959K  third.txt      True
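If you want to see *every* copy of a duplicated row rather than only the later ones, `keep=False` marks all members of each duplicate group, so the filenames involved can be read straight off the index. A small sketch with made-up rows and filenames:

```python
import pandas as pd

# A duplicate pair under two different (hypothetical) filenames.
df = pd.DataFrame(
    {'Descr': ['Administratief eigenaar', 'Opzeggingen', 'Opzeggingen'],
     'Layout': ['ADM_EIG', 'STANDAARD', 'STANDAARD']},
    index=pd.MultiIndex.from_tuples(
        [('00000000B42852FA', 'first.txt'),
         ('000000005880959K', 'second.txt'),
         ('000000005880959K', 'third.txt')],
        names=['timestamp', 'filename']))

# keep=False flags all rows that have a duplicate anywhere in the frame.
dupes = df[df.duplicated(keep=False)]
print(dupes.index.get_level_values('filename').tolist())
```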

Counter counts the frequency of items, so it will tell you about anything that occurs more than once. Taking data from your dictionary:

from collections import Counter
from math import nan

data = [
   ['00000000B42852FA', 'ADM_EIG', 'Administratief eigenaar', 'ADM_EIG', 'ADM_EIG'],
   ['000000005880959E', 'OPZ', 'Opzeggingen', 'STANDAARD', nan]
]

You need to flatten the list of lists:

[item for sublist in data for item in sublist]
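Equivalently, the standard library's itertools can do this flattening without the nested comprehension; a minimal sketch using the same two rows (with None standing in for the NaN):

```python
from itertools import chain

data = [
    ['00000000B42852FA', 'ADM_EIG', 'Administratief eigenaar', 'ADM_EIG', 'ADM_EIG'],
    ['000000005880959E', 'OPZ', 'Opzeggingen', 'STANDAARD', None],
]

# chain.from_iterable walks each sublist in turn, yielding one flat stream.
flat = list(chain.from_iterable(data))
print(flat)
```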

Counter then gives you the frequency of each item:

>>> Counter([item for sublist in data for item in sublist])
Counter({'ADM_EIG': 3, '00000000B42852FA': 1, 'Administratief eigenaar': 1, '000000005880959E': 1, 'OPZ': 1, 'Opzeggingen': 1, 'STANDAARD': 1, nan: 1})

Then you can filter as needed:

counter = Counter([item for sublist in data for item in sublist])
[value for value, count in counter.items() if count > 1]

which gives ['ADM_EIG'].


Edit, to match the edit in the question

To look across all the rows, gather the data from every file and then look for duplicates:

data = []
for key, value in files_dict.items():
    data.extend(value['data'])

counter = Counter([item for sublist in data for item in sublist])
print([value for value, count in counter.items() if count > 1])
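Since the goal is to join the files in SQL, it may also help to record *which* files each repeated value occurs in, not just that it repeats. A sketch over the same files_dict shape built in the question (the filenames and rows here are made up):

```python
from collections import defaultdict

# Stand-in for files_dict as built in the question: filename -> split dict.
files_dict = {
    'first.txt':  {'data': [['00000000B42852FA', 'ADM_EIG'],
                            ['000000005880959E', 'OPZ']]},
    'second.txt': {'data': [['00000000B42852FA', 'ADM_EIG'],
                            ['000000005880959K', 'XYZ']]},
}

# Map each value to the set of files it occurs in.
seen_in = defaultdict(set)
for filename, value in files_dict.items():
    for row in value['data']:
        for item in row:
            seen_in[item].add(filename)

# Values present in more than one file are join-key candidates.
cross_file = {v: files for v, files in seen_in.items() if len(files) > 1}
print(cross_file)
```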
