What is the use of `target` in `ClassificationDataSet`?
I have been trying to figure out what exactly the `target` parameter of `ClassificationDataSet` does, but I still don't understand it.
What I have tried:
>>> from pybrain.datasets import ClassificationDataSet
>>> help(ClassificationDataSet)
Help on class ClassificationDataSet in module pybrain.datasets.classification:
class ClassificationDataSet(pybrain.datasets.supervised.SupervisedDataSet)
| Specialized data set for classification data. Classes are to be numbered from 0 to nb_classes-1.
|
| Method resolution order:
| ClassificationDataSet
| pybrain.datasets.supervised.SupervisedDataSet
| pybrain.datasets.dataset.DataSet
| pybrain.utilities.Serializable
| __builtin__.object
|
| Methods defined here:
|
| __add__(self, other)
| Adds the patterns of two datasets, if dimensions and type match.
|
| __init__(self, inp, target=1, nb_classes=0, class_labels=None)
| Initialize an empty dataset.
|
| `inp` is used to specify the dimensionality of the input. While the
| number of targets is given by implicitly by the training samples, it can
| also be set explicity by `nb_classes`. To give the classes names, supply
| an iterable of strings as `class_labels`.
|
| __reduce__(self)
The parameter description says nothing about `target` (apart from its default value of 1), so I looked at the source code of `ClassificationDataSet`:
class ClassificationDataSet(SupervisedDataSet):
""" Specialized data set for classification data. Classes are to be numbered from 0 to nb_classes-1. """
def __init__(self, inp, target=1, nb_classes=0, class_labels=None):
"""Initialize an empty dataset.
`inp` is used to specify the dimensionality of the input. While the
number of targets is given by implicitly by the training samples, it can
also be set explicity by `nb_classes`. To give the classes names, supply
an iterable of strings as `class_labels`."""
# FIXME: hard to keep nClasses synchronized if appendLinked() etc. is used.
SupervisedDataSet.__init__(self, inp, target)
self.addField('class', 1)
self.nClasses = nb_classes
if len(self) > 0:
# calculate class histogram, if we already have data
self.calculateStatistics()
self.convertField('target', int)
if class_labels is None:
self.class_labels = list(set(self.getField('target').flatten()))
else:
self.class_labels = class_labels
# copy classes (may be changed into other representation)
self.setField('class', self.getField('target'))
That still wasn't clear, so I also looked at the source code of `SupervisedDataSet`:
class SupervisedDataSet(DataSet):
"""SupervisedDataSets have two fields, one for input and one for the target.
"""
def __init__(self, inp, target):
"""Initialize an empty supervised dataset.
Pass `inp` and `target` to specify the dimensions of the input and
target vectors."""
DataSet.__init__(self)
if isscalar(inp):
# add input and target fields and link them
self.addField('input', inp)
self.addField('target', target)
else:
self.setField('input', inp)
self.setField('target', target)
self.linkFields(['input', 'target'])
# reset the index marker
self.index = 0
# the input and target dimensions
self.indim = self.getDimension('input')
self.outdim = self.getDimension('target')
It looks like this parameter is related to the dimension of the output. But shouldn't `target` then be `nb_classes`?
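For instance, a quick check in the interpreter (this is just my own experiment, so I may be misreading it) suggests that `target` only sets the width of the target field:
>>> from pybrain.datasets import SupervisedDataSet
>>> ds = SupervisedDataSet(2, 1)
>>> ds.indim, ds.outdim
(2, 1)
>>> ds = SupervisedDataSet(2, 3)   # three-dimensional target vectors
>>> ds.outdim
3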
1 Answer
The `target` argument is the dimension of the output of a training sample. To better understand the difference between it and `nb_classes`, let's look at the `_convertToOneOfMany` method:
def _convertToOneOfMany(self, bounds=(0, 1)):
"""Converts the target classes to a 1-of-k representation, retaining the
old targets as a field `class`.
To supply specific bounds, set the `bounds` parameter, which consists of
target values for non-membership and membership."""
if self.outdim != 1:
# we already have the correct representation (hopefully...)
return
if self.nClasses <= 0:
self.calculateStatistics()
oldtarg = self.getField('target')
newtarg = zeros([len(self), self.nClasses], dtype='Int32') + bounds[0]
for i in range(len(self)):
newtarg[i, int(oldtarg[i])] = bounds[1]
self.setField('target', newtarg)
self.setField('class', oldtarg)
In theory, `target` is the dimension of the output, while `nb_classes` is the number of classes. This distinction is useful for data conversion.
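As a quick sketch (assuming a reasonably recent PyBrain; `outdim` and `nClasses` are the attributes set in the source quoted above, and the `class_labels` chosen here are just for illustration), the two values sit side by side on the dataset and are independent of each other:
>>> from pybrain.datasets import ClassificationDataSet
>>> ds = ClassificationDataSet(2, target=1, nb_classes=2, class_labels=['False', 'True'])
>>> ds.outdim     # dimension of each target vector, set by `target`
1
>>> ds.nClasses   # number of classes, set by `nb_classes`
2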
For example, suppose we have data for the xor function to train a network with, like this:
IN OUT
[0,0],0
[0,1],1
[1,0],1
[1,1],0
In this case the dimension of the output is 1, but there are two output classes: 0 and 1. So we could change the data to:
IN OUT
[0,0],(0,1)
[0,1],(1,0)
[1,0],(1,0)
[1,1],(0,1)
Now the first element of the output is the value for True and the second is the value for False.
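This re-coding is exactly what `_convertToOneOfMany` does for you. Here is a minimal sketch of it on the xor data above (I only check the field shapes; the exact column order produced by the method may differ from my table):
>>> from pybrain.datasets import ClassificationDataSet
>>> ds = ClassificationDataSet(2, target=1, nb_classes=2)
>>> for inp, cls in [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]:
...     ds.addSample(inp, [cls])
...
>>> ds.getField('target').shape   # one column holding the class number
(4, 1)
>>> ds._convertToOneOfMany(bounds=(0, 1))
>>> ds.getField('target').shape   # now one column per class
(4, 2)
>>> ds.getField('class').shape    # the original class numbers are kept here
(4, 1)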
This kind of representation is common when there are more classes, for example in handwriting recognition.
I hope this makes it a bit clearer.