<p>I assume you are looking for a way to access the transformer results, which are produced as a NumPy array.</p>
<p>ColumnTransformer has an attribute called <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/compose/_column_transformer.py" rel="nofollow noreferrer"><code>transformers_</code></a>.</p>
<p>From the documentation:</p>
<blockquote>
<pre><code>transformers_ : list
The collection of fitted transformers as tuples of
(name, fitted_transformer, column). `fitted_transformer` can be an
estimator, 'drop', or 'passthrough'. In case there were no columns
selected, this will be the unfitted transformer.
If there are remaining columns, the final element is a tuple of the
form:
('remainder', transformer, remaining_columns) corresponding to the
``remainder`` parameter. If there are remaining columns, then
``len(transformers_)==len(transformers)+1``, otherwise
``len(transformers_)==len(transformers)``.
</code></pre>
</blockquote>
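<p>As a minimal sketch (toy data and column names are illustrative only), you can inspect these tuples on a fitted ColumnTransformer like this:</p>

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({'num': [1.0, 2.0, 3.0], 'cat': ['a', 'b', 'a']})
ct = make_column_transformer(
    (StandardScaler(), ['num']),
    (OneHotEncoder(), ['cat']),
)
ct.fit(df)

# Each entry is a (name, fitted_transformer, columns) tuple;
# a final 'remainder' entry may be appended for leftover columns.
for name, trans, cols in ct.transformers_:
    print(name, trans, cols)
```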
<p>So, unfortunately, this only provides information about the transformers themselves and the columns they were applied to, but not about the location of the resulting data in the output, except for the following note:</p>
<blockquote>
<p>Notes: The order of the columns in the transformed feature matrix follows the
order of how the columns are specified in the <code>transformers</code> list.</p>
</blockquote>
<p>So we know that the order of the output columns is the same as the order in which the columns are specified in the <code>transformers</code> list. In addition, for these transformer steps we also know how many columns they produce: StandardScaler() yields the same number of columns as the original data, and OneHotEncoder() yields a number of columns equal to the number of categories.</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder,StandardScaler
from sklearn.compose import ColumnTransformer, make_column_transformer
df = pd.DataFrame({'brand' : ['aaaa', 'asdfasdf', 'sadfds', 'NaN'],
'category' : ['asdf','asfa','asdfas','asd'],
'num1' : [1, 1, 0, 0] ,
'target' : [0.2,0.11,1.34,1.123]})
train_continuous_cols = df.select_dtypes(include=["int64","float64"]).columns.tolist()
train_categorical_cols = df.select_dtypes(include=["object"]).columns.tolist()
# get n_categories for categorical features
n_categories = [df[x].nunique() for x in train_categorical_cols]
preprocess = make_column_transformer(
(StandardScaler(),train_continuous_cols),
(OneHotEncoder(), train_categorical_cols)
)
preprocessed_df = preprocess.fit_transform(df)
# the scaler yield 1 column each
indexes_scaler = list(range(0,len(train_continuous_cols)))
# the encoder yields a number of columns equal to the number of categories in the data
cum_index_encoder = [0] + list(np.cumsum(n_categories))
# the encoder indexes come after the scaler indexes
start_index_encoder = indexes_scaler[-1]+1
indexes_encoder = [x + start_index_encoder for x in cum_index_encoder]
# get both lower and uper bound of index
index_pairs= zip (indexes_encoder[:-1],indexes_encoder[1:])
</code></pre>
<p>This produces the following output:</p>
<pre><code>print('Transformed {} continuous cols resulting in a df with shape:'.format(len(train_continuous_cols)))
print(preprocessed_df[:, indexes_scaler].shape)
</code></pre>
<blockquote>
<p>Transformed 2 continuous cols resulting in a df with shape:
(4, 2)</p>
</blockquote>
<pre><code>for column, (start_id, end_id) in zip(train_categorical_cols, index_pairs):
    print('Transformed column {} resulted in a df with shape:'.format(column))
    print(preprocessed_df[:, start_id:end_id].shape)
</code></pre>
<blockquote>
<p>Transformed column brand resulted in a df with shape:
(4, 4)</p>
<p>Transformed column category resulted in a df with shape:
(4, 4)</p>
</blockquote>