Keras neural network for predicting the change in a particle's angle does not predict correctly


I have built a Keras regression model that, when given data about a single particle, predicts the change in that particle's angle. To generate the data, I wrote a program that simulates Brownian motion of n particles, plus random angular noise; depending on the distance between them, the particles induce changes in each other's angles.

How my code works is not important here, but essentially it outputs an array containing the x, y coordinates of all particles relative to one particular particle, the theta values of all particles, and the distance between each particle and that particular particle. All of these parameters are available at every time step. Each "image" I train the network on is the full set of these parameters at one point in time. In short, the input variables are x, y, angle and distance, and the output variable is the change in the target particle's theta.
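To make the layout concrete, here is a minimal sketch of the shapes involved, filled with dummy values (the names m and rng and the exact ordering of the four parameters are assumptions for illustration; the (m, n-1, 4) shape matches the 4*(n-1) flatten further down, and the [-0.02, 0.02] label range matches what I describe below):

import numpy as np

m = 1000   # number of time steps (illustrative value)
n = 10     # number of particles
L = 5      # half-width of the box

# One "image" per time step: for each of the n-1 other particles we store
# [x, y, theta, distance-to-target], so the array has shape (m, n-1, 4).
rng = np.random.default_rng(0)
train_images = np.zeros((m, n - 1, 4))
train_images[:, :, 0] = rng.uniform(-L, L, (m, n - 1))                   # x relative to target particle
train_images[:, :, 1] = rng.uniform(-L, L, (m, n - 1))                   # y relative to target particle
train_images[:, :, 2] = rng.uniform(-2 * np.pi, 2 * np.pi, (m, n - 1))   # theta of each particle
train_images[:, :, 3] = np.hypot(train_images[:, :, 0], train_images[:, :, 1])  # distance to target

# One label per time step: the change in theta of the target particle.
train_labels = rng.uniform(-0.02, 0.02, m)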

For my neural network, I first normalize all of the data to between -1 and 1 and then reshape it so it can be fed into the NN:

import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout

## NORMALIZE IMAGES ##########################################################

# all images and labels are imported beforehand, so this obviously won't run
# without data. It is designed for data with m iterations, n particles and
# 4 parameters (size of test data array is [m,n,4]).

L = 5
# length of 'box' that houses particles
n = 10
# number of particles

train_images[:,:,0:2] = train_images[:,:,0:2]/L
# normalise [x,y] from -L:L to -1:1.
train_images[:,:,2:3] = train_images[:,:,2:3]/(2*np.pi)
# normalise theta value from -2pi:2pi to -1:1
train_images[:,:,3:4] = (train_images[:,:,3:4]/(L*np.sqrt(2))*2)-1
# normalise distance value from 0:sqrt(2)L to -1:1

test_images[:,:,0:2] = test_images[:,:,0:2]/L
test_images[:,:,2:3] = test_images[:,:,2:3]/(2*np.pi)
test_images[:,:,3:4] = (test_images[:,:,3:4]/(L*np.sqrt(2))*2)-1

## FLATTEN IMAGES ############################################################

train_images = train_images.reshape((-1, 4*(n-1))) 
# reshape so each input is a single dimension
# 4*(n-1) due to 4 parameters, and n-1 particles (since one is redundant info)
test_images = test_images.reshape((-1, 4*(n-1)))

## BUILDING THE MODEL ########################################################

model = Sequential([
  Dense(64, activation='tanh', input_shape=(4*(n-1),)),
  Dense(16, activation='tanh'),
  Dropout(0.25),
  Dense(1, activation='tanh'),
])

## COMPILING THE MODEL #######################################################

model.compile(
  optimizer='adam',
  loss='mean_squared_error',
  #metrics=['mean_squared_error'],
)

## TRAINING THE MODEL ########################################################

history = model.fit(
  train_images, # training data
  train_labels, # training targets
  epochs=10,
  batch_size=32,
  #validation_data=(test_images, test_labels),
  shuffle=True,
  validation_split=0.2,
)

I have tried several activation types for the different layers (relu, sigmoid, tanh, ...), but none of them seem to give correct results. The true values in my data (the change in a particle's angle) lie between -0.02 and 0.02, but the values I get out are much smaller and tend to be mostly of one sign (positive or negative).
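The kind of check that shows this is roughly the following (test_labels here is assumed to be the array of true changes in theta corresponding to test_images):

preds = model.predict(test_images).flatten()

print('predictions: min', preds.min(), 'max', preds.max(),
      'fraction > 0', np.mean(preds > 0))
print('labels:      min', test_labels.min(), 'max', test_labels.max(),
      'fraction > 0', np.mean(test_labels > 0))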

I am currently using the 'mean squared error' loss function, since I want to minimize the difference between the actual and predicted values. I have noticed that, when doing this, the loss is already very small after just a single epoch:

Epoch 1/10
12495/12495 [==============================] - 13s 1ms/step - loss: 0.0010 - val_loss: 3.3794e-05
Epoch 2/10
12495/12495 [==============================] - 13s 1ms/step - loss: 3.4491e-05 - val_loss: 3.3769e-05
Epoch 3/10
12495/12495 [==============================] - 13s 1ms/step - loss: 3.4391e-05 - val_loss: 3.3883e-05
Epoch 4/10
12495/12495 [==============================] - 13s 1ms/step - loss: 3.4251e-05 - val_loss: 3.4755e-05
Epoch 5/10
12495/12495 [==============================] - 13s 1ms/step - loss: 3.4183e-05 - val_loss: 3.4273e-05
Epoch 6/10
12495/12495 [==============================] - 13s 1ms/step - loss: 3.4175e-05 - val_loss: 3.3770e-05
Epoch 7/10
12495/12495 [==============================] - 13s 1ms/step - loss: 3.4160e-05 - val_loss: 3.3646e-05
Epoch 8/10
12495/12495 [==============================] - 13s 1ms/step - loss: 3.4131e-05 - val_loss: 3.3629e-05
Epoch 9/10
12495/12495 [==============================] - 14s 1ms/step - loss: 3.4145e-05 - val_loss: 3.3581e-05
Epoch 10/10
12495/12495 [==============================] - 13s 1ms/step - loss: 3.4148e-05 - val_loss: 3.4647e-05
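For reference, this loss history can be plotted directly with the matplotlib import at the top (history is the object returned by model.fit above):

plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('mean squared error loss')
plt.legend()
plt.show()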

Here is an example of the results I get from it:

Prediction:  4.8542774e-05
Actual:  0.006994473448353978
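A pair like this comes from comparing a single prediction against the corresponding label, roughly as follows (the index 0 is just an example):

pred = model.predict(test_images[0:1])[0, 0]
print('Prediction: ', pred)
print('Actual: ', test_labels[0])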

Is there anything obviously wrong with what I am doing that leads to these results? Sorry if I have not provided enough information.

