How do I remove duplicates from a CSV with the pandas library in Python?

I have been searching around for examples, but I can't get this to work the way I want.

I want to deduplicate by "Order ID" and extract the duplicated rows to a separate CSV. The main thing is that I need to be able to change which column the deduplication is based on; in this example it is "Order ID".

Sample dataset:

ID    Fruit   Order ID    Quantity    Price
1     apple      1111        11       £2.00
2     banana     2222        22       £3.00
3     orange     3333        33       £5.00
4     mango      4444        44       £7.00
5     Kiwi       3333        55       £5.00

Expected output:

ID    Fruit   Order ID    Quantity    Price
5     Kiwi       3333        55       £5.00

I tried this:

import pandas as pd

df = pd.read_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicate example.csv')

new_df = df[['ID','Fruit','Order ID','Quantity','Price']].drop_duplicates()

new_df.to_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicate test.csv', index=False)

My problem is that it doesn't remove any duplicates.


2 Answers

You can do this by building a new DataFrame from value_counts(), then merging and filtering:

# value_counts() returns a Series; to_frame() turns it into a DataFrame
df_counts = df['Order ID'].value_counts().to_frame()
# rename the column
df_counts.columns = ['order_counts']

# merge the original frame on the "Order ID" column with the counts by their index
df_merged = pd.merge(df, df_counts, left_on='Order ID', right_index=True)

# the duplicates are simply the rows whose count is higher than 1
df_filtered = df_merged[df_merged['order_counts'] > 1]

# everything else, i.e. the rows that are not duplicated
df_not_duplicates = df_merged[df_merged['order_counts'] == 1]

Edit: drop_duplicates() keeps only unique values; when duplicates are found, all but one of them are removed. Which one is kept is controlled by the 'keep' parameter, which can be 'first' or 'last'.
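A small illustration of the 'keep' parameter, assuming df is the question's DataFrame (or the sketch above) and 'Order ID' is the dedup column:

# keep='first' (default): for Order ID 3333, the earlier row (ID 3, orange) survives
kept_first = df.drop_duplicates(subset='Order ID', keep='first')

# keep='last': for Order ID 3333, the later row (ID 5, Kiwi) survives
kept_last = df.drop_duplicates(subset='Order ID', keep='last')

# keep=False: drop every row whose Order ID occurs more than once (both 3333 rows go)
kept_none = df.drop_duplicates(subset='Order ID', keep=False)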

Edit 2: From your comment, you want to export the results to CSV. Remember that, as described above, I split the data into two DataFrames:

a) everything that is not a duplicate (df_not_duplicates)

b) only the rows that have duplicates, duplicates still included (df_filtered)

# Type 1: save all Order IDs that had duplicates, duplicates still included
df_filtered.to_csv("path_to_my_csv//filename.csv", sep=",", encoding="utf-8")

# Type 2: all Order IDs that had duplicates, but only one line per Order ID
df_filtered.drop_duplicates(subset="Order ID", keep='last').to_csv("path_to_my_csv//filename.csv", sep=",", encoding="utf-8")
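As a side note, not part of the answer above: if the count-and-merge route feels heavy, pandas' duplicated() gives the same split more directly. The file names below are placeholders:

# True for every row whose Order ID appears more than once
dup_mask = df.duplicated(subset='Order ID', keep=False)

# duplicated rows go to one CSV, unique rows to another
df[dup_mask].to_csv("path_to_my_csv//duplicates.csv", index=False)
df[~dup_mask].to_csv("path_to_my_csv//uniques.csv", index=False)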

If you want to use the drop_duplicates method, the error is in the second line of your code (you should use pd.DataFrame):

df = pd.read_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicateexample.csv')

# Create a DataFrame that contains a duplicated Order ID (3333), as in the question
raw_data = {'ID': [1, 2, 3, 4, 5],
            'Fruit': ['apple', 'banana', 'orange', 'mango', 'Kiwi'],
            'Order ID': [1111, 2222, 3333, 4444, 3333],
            'Quantity': [11, 22, 33, 44, 55],
            'Price': [2, 3, 5, 7, 5]}

# Deduplicate on "Order ID" only; keep='last' keeps the later (Kiwi) row for 3333
new_df = pd.DataFrame(raw_data, columns=['ID', 'Fruit', 'Order ID', 'Quantity', 'Price']).drop_duplicates(subset='Order ID', keep='last')

new_df.to_csv('C:/Users/shane/PycharmProjects/PythonTut/deduping/duplicate test.csv', index=False)

Hope this helps.
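Since the question also asks for the deduplication column to be easy to change, a minimal sketch: keep the column name in a variable and pass it as subset (the variable name dedup_col is just for illustration):

dedup_col = 'Order ID'   # change this to deduplicate on a different column

# one row per dedup_col value, keeping the last occurrence
deduped = df.drop_duplicates(subset=dedup_col, keep='last')

# every row whose dedup_col value occurs more than once
duplicates = df[df.duplicated(subset=dedup_col, keep=False)]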
