IndexError: index 6 is out of bounds for axis 1 with size 1

Posted 2024-05-16 08:47:50


I am currently running PySwarms to train a neural network, following the example code from https://pyswarms.readthedocs.io/en/development/examples/custom_objective_function.html#constructing-a-custom-objective-function

My data is continuous-valued with shape (5035, 10), but when I try to run this program it fails with the error below.

import numpy as np
import pyswarms as ps

# Initialize swarm hyperparameters (cognitive, social, inertia)
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.8}

# Call instance of PSO
dimensions = (10 * 20) + (20 * 1) + 20 + 1  # W1 + W2 + b1 + b2 = 241 parameters
optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options)

# Perform optimization
cost, pos = optimizer.optimize(f, iters=100, verbose=3)

The traceback looks like this:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-65-55d754bbaf44> in <module>
      7 
      8 # Perform optimization
----> 9 cost, pos = optimizer.optimize(f, iters=1000, verbose=3)

C:\ProgramData\Anaconda3\lib\site-packages\pyswarms\single\global_best.py in optimize(self, objective_func, iters, n_processes, verbose, **kwargs)
    207             # Compute cost for current position and personal best
    208             # fmt: off
--> 209             self.swarm.current_cost = compute_objective_function(self.swarm, objective_func, pool=pool, **kwargs)
    210             self.swarm.pbest_pos, self.swarm.pbest_cost = compute_pbest(self.swarm)
    211             # Set best_cost_yet_found for ftol

C:\ProgramData\Anaconda3\lib\site-packages\pyswarms\backend\operators.py in compute_objective_function(swarm, objective_func, pool, **kwargs)
    237     """
    238     if pool is None:
--> 239         return objective_func(swarm.position, **kwargs)
    240     else:
    241         results = pool.map(

<ipython-input-63-2ff909935664> in f(x)
     14     """
     15     n_particles = x.shape[0]
---> 16     j = [forward_prop(x[i]) for i in range(n_particles)]
     17     return np.array(j)

<ipython-input-63-2ff909935664> in <listcomp>(.0)
     14     """
     15     n_particles = x.shape[0]
---> 16     j = [forward_prop(x[i]) for i in range(n_particles)]
     17     return np.array(j)

<ipython-input-23-899810f06ffb> in forward_prop(params)
     41     # Compute for the negative log likelihood
     42     N = 5035 # Number of samples
---> 43     corect_logprobs = -np.log(probs[range(N), y])
     44     loss = np.sum(corect_logprobs) / N
     45 

IndexError: index 6 is out of bounds for axis 1 with size 1

The f function:

def f(x):
    """Higher-level objective: evaluate forward_prop for every particle."""
    n_particles = x.shape[0]
    j = [forward_prop(x[i]) for i in range(n_particles)]
    return np.array(j)

The forward_prop function:

def forward_prop(params):
    # Neural network architecture (X and y are taken from the enclosing scope)
    n_inputs = 10
    n_hidden = 20
    n_classes = 1

    # Roll-back the weights and biases
    W1 = params[0:200].reshape((n_inputs,n_hidden))
    b1 = params[200:220].reshape((n_hidden,))
    W2 = params[220:240].reshape((n_hidden,n_classes))
    b2 = params[240:241].reshape((n_classes,))

    # Perform forward propagation
    z1 = X.dot(W1) + b1  # Pre-activation in Layer 1
    a1 = np.tanh(z1)     # Activation in Layer 1
    z2 = a1.dot(W2) + b2 # Pre-activation in Layer 2
    logits = z2          # Logits for Layer 2

    # Compute for the softmax of the logits
    exp_scores = np.exp(logits)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

    # Compute for the negative log likelihood
    N = 5035 # Number of samples
    corect_logprobs = -np.log(probs[range(N), y])
    loss = np.sum(corect_logprobs) / N

    return loss
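A shape check of the forward pass (a sketch with random stand-in data, since the real `X` is not shown) makes the single-column output visible: with `n_classes = 1`, `z2` has shape `(N, 1)`, and the softmax of a single column is always 1.0, so any label other than 0 in `y` is out of bounds along axis 1.

```python
import numpy as np

rng = np.random.default_rng(0)
X_demo = rng.standard_normal((5035, 10))  # hypothetical stand-in for the real data
W1 = rng.standard_normal((10, 20))
b1 = np.zeros(20)
W2 = rng.standard_normal((20, 1))         # n_classes = 1 -> one output column
b2 = np.zeros(1)

z2 = np.tanh(X_demo.dot(W1) + b1).dot(W2) + b2
probs = np.exp(z2) / np.sum(np.exp(z2), axis=1, keepdims=True)
print(probs.shape)  # (5035, 1): only column 0 can be indexed
```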

I tried switching to LocalBestPSO, but it did not work well either. In the end I would like to see the final cost and the best position. Thanks.

