Problem creating a bar chart for a KMeans-based clustering algorithm

Posted 2024-06-16 12:10:50


I am trying to draw a bar chart for a KMeans-based clustering algorithm. The problem is that I want to present the clusters in such a way that a strongly outlying cluster appears at the far end of the x-axis while the remaining clusters sit relatively close together. I think the issue is the placement of the bins, which are spaced evenly along the x-axis:

---|---|---|-----------------> x-axis
0  1   2   3 

In this case I want to show that, for example, the cluster with label 3, whose predicted Score places it somewhat farther away, needs some adjustment of the bin widths, perhaps like this:

---|---|--------------|------> x-axis
0  1   2              3 
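
Roughly what I mean, as a minimal matplotlib sketch with made-up numbers (the positions and heights below are hypothetical, not my real Scores): bars plotted at their actual numeric positions keep close clusters adjacent and push the far-away cluster to the end of the x-axis.

import matplotlib.pyplot as plt

positions = [0, 1, 2, 9]        # hypothetical cluster positions; cluster "3" lies far away
heights   = [4, 8, 6, 1]        # hypothetical bar heights (e.g. cluster sizes)

plt.bar(positions, heights, width=0.4)
plt.xticks(positions, ['0', '1', '2', '3'])   # keep the original cluster labels as ticks
plt.show()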

So far I have obtained the following to demonstrate the results of the KM-based outlier detection algorithm: [image: current output]

from sklearn.cluster import KMeans
import seaborn as sns
import numpy as np
from pandas import DataFrame
import math

class ODKM:
    """Outlier detection via per-column 1-D KMeans cluster scores."""

    def __init__(self, n_clusters=15, effectiveness=500, max_iter=2):
        self.n_clusters = n_clusters
        self.effectiveness = effectiveness
        self.max_iter = max_iter
        self.kmeans = {}          # fitted KMeans model per column
        self.cluster_score = {}   # score of each cluster, per column

    def fit(self, data):
        length = len(data)
        for column in data.columns:
            kmeans = KMeans(n_clusters=self.n_clusters, max_iter=self.max_iter)
            self.kmeans[column] = kmeans
            kmeans.fit(data[column].values.reshape(-1, 1))

            # fraction of points that fall into each cluster
            assign = DataFrame(kmeans.predict(data[column].values.reshape(-1, 1)),
                               columns=['cluster'])
            cluster_score = assign.groupby('cluster').apply(len).apply(lambda x: x / length)
            ratio = cluster_score.copy()

            sorted_centers = sorted(kmeans.cluster_centers_)
            max_distance = (sorted_centers[-1] - sorted_centers[0])[0]

            # each cluster's score is its own ratio plus the other clusters' ratios,
            # damped by effectiveness**(-normalized center distance): nearby dense
            # clusters boost a cluster's score, far-away ones barely contribute
            for i in range(self.n_clusters):
                for k in range(self.n_clusters):
                    if i != k:
                        dist = abs(kmeans.cluster_centers_[i][0]
                                   - kmeans.cluster_centers_[k][0]) / max_distance
                        effect = ratio[k] * (1 / math.pow(self.effectiveness, dist))
                        cluster_score[i] = cluster_score[i] + effect

            self.cluster_score[column] = cluster_score

    def predict(self, data):
        length = len(data)
        score_array = np.zeros(length)
        for column in data.columns:
            kmeans = self.kmeans[column]
            cluster_score = self.cluster_score[column]
            assign = kmeans.predict(data[column].values.reshape(-1, 1))

            # sum log10 of the cluster scores over the columns:
            # lower (more negative) totals mean more outlying points
            for i in range(length):
                score_array[i] = score_array[i] + math.log10(cluster_score[assign[i]])

        return score_array

    def fit_predict(self, data):
        self.fit(data)
        return self.predict(data)

Test results:

import pandas as pd

df = pd.DataFrame(data={'attr1':[1,1,1,1,2,2,2,2,2,2,2,2,3,5,5,6,6,7,7,7,7,7,7,7,15],
                        'attr2':[1,1,1,1,2,2,2,2,2,2,2,2,3,5,5,6,6,7,7,7,13,13,13,14,15]})

#generate score from KM-based algorithm via class ODKM
odkm_model = ODKM(n_clusters=3, max_iter=1)
result = odkm_model.fit_predict(df)

# add the generated scores to the main frame to reach the desired plot
df['Score'] = result   # the rest of the post refers to this column as 'Score'
df

#for i in result:
#    print(round(i,2))

#results
#-0.51, -0.51 , -0.51 , -0.51, -0.51, -0.51, -0.51, -0.51, -0.51, -0.51, -0.51, -0.51, -0.51
#-0.78, -0.78, -0.78, -0.78, -0.78, -0.78, -0.78
#-0.99, -0.99, -0.99, -0.99
#-1.99

You can find my entire code, including this KM-based algorithm, in a Colab notebook for quick debugging. Feel free to implement your solution in the notebook or comment on its cells if needed, or make changes to the ODKM algorithm itself (which performs the KM clustering), accessible via @class ODKM. For easier access when building the bar chart, it would be best to extract the predicted cluster labels and add them as a new column under the header Cluster_label next to the ODKM Score.
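
One lightweight way I could imagine getting such a column (only a sketch on my side: it clusters the 1-D Score column directly, which is an assumption rather than something ODKM currently exposes; I use the column name Cluster_labels to match my later plots):

from sklearn.cluster import KMeans

# assumption: cluster the 1-D ODKM Score itself just to obtain labels for plotting
km_score = KMeans(n_clusters=3).fit(df[['Score']])
df['Cluster_labels'] = km_score.labels_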

The expected output should look like this (bins belonging to the same cluster share the same color, e.g. the first cluster C1):

[image: expected bar chart with same-cluster bins sharing a color]

Update: besides a bar-chart solution, I can also plot a histogram & distribution, but I don't know how to color the bins and pass the cluster labels so that the clustering result is reflected on the histogram bins:

## left output
# just plot the 'Score' column (not all columns in the 1st phase) to simplify the problem
# cols_ = df.columns[-1:]
import matplotlib.pyplot as plt

ax1 = plt.subplot2grid((1, 1), (0, 0))
df['Score'].plot(kind='hist', ax=ax1, color='b', alpha=0.4)
df['Score'].plot(kind='kde', ax=ax1, secondary_y=True, label='distribution', color='b', lw=2)

## right output
sns.distplot(df['Score'], color='b')   # note: distplot is deprecated in newer seaborn (histplot/displot replace it)

Even though the clustering result is reflected on the chart, I noticed, as highlighted in the image below, some differences between the two plots: the scale of the y-axis & the gap between the main bins close to the x-axis origin:

[image: comparison of the two plots]
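
One direction I am considering for coloring the bins (a minimal sketch, assuming the Cluster_labels column from above already exists): split the Score values by label and pass the pieces to plt.hist as a list, so each cluster gets its own color.

import matplotlib.pyplot as plt

# assumption: df already contains 'Score' and 'Cluster_labels'
groups = [grp['Score'].values for _, grp in df.groupby('Cluster_labels')]
labels = ['C{}'.format(lab) for lab in sorted(df['Cluster_labels'].unique())]

plt.hist(groups, bins=20, stacked=True, label=labels)
plt.legend()
plt.xlabel('Score')
plt.show()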

I also found this post, but I could not adapt it to @class ODKM to solve my problem dynamically. Recently I also managed to produce this:

df['Score'] = df['Score'].abs()
sns.displot(df,
            x='Score',
            hue='Cluster_labels',   # assumes the Cluster_labels column has been added (see above)
            palette=["#00f0f0", "#ff0000", "#00ff00"],
            alpha=1)

[image: displot colored by Cluster_labels]


2 Answers
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN

df = pd.DataFrame(data={'attr1':[1,1,1,1,2,2,2,2,2,2,2,2,3,5,5,6,6,7,7,7,7,7,7,7,15],
                        'attr2':[1,1,1,1,2,2,2,2,2,2,2,2,3,5,5,6,6,7,7,7,13,13,13,14,15]})

def kmeans_scatterplot(df):
    # despite the name, this uses DBSCAN: points within eps of each other form a cluster
    column_i = 'attr1'
    column_j = 'attr2'

    df_temp = df[[column_i, column_j]]

    # model
    y_pred = DBSCAN(eps=3, min_samples=1).fit_predict(df_temp)

    # plot, colored by the predicted cluster label
    plt.scatter(df_temp[column_i], df_temp[column_j], c=y_pred, cmap='rainbow', alpha=0.7, edgecolors='b')
    plt.show()

kmeans_scatterplot(df)

This kind of clustering only needs a distance to be specified, and then we can color the points according to their cluster label.

This will help you quickly understand how the algorithm works: https://www.naftaliharris.com/blog/visualizing-dbscan-clustering/

For the 1-D case, you can use the cluster centers as the x positions of the bars:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

n_clusters = 3

# cluster the 1-D Score column; the centers are real values on the Score axis
km = KMeans(init='k-means++', n_clusters=n_clusters).fit(df[['Score']])

# number of points per cluster, indexed by label
counts = np.bincount(km.labels_)

# one bar per cluster, placed at its center, so the spacing mirrors the distances
for center, count, label in zip(km.cluster_centers_, counts, range(n_clusters)):
    print(center, count)
    plt.bar(center, count, width=0.2, label=label)

plt.legend()
plt.show()
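
If the bars overlap or are hard to tell apart, the width can optionally be tied to the smallest gap between the centers (a small tweak on top of the block above, not strictly required):

# continues from the block above (km, counts, n_clusters are already defined)
centers = km.cluster_centers_.ravel()
width = 0.8 * np.diff(np.sort(centers)).min()   # a bit narrower than the smallest gap

plt.bar(centers, counts, width=width,
        tick_label=['C{}'.format(i) for i in range(n_clusters)])
plt.show()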
