Fitting data with a predefined function in Python

Published 2024-05-14 22:38:47


Can you help me with this task? I have some data, and I know it has to be fitted with a predefined equation such as y = (1 - x)/(1 + x). I know it is possible to do this from scratch, but can it be done with scikit-learn or any other package? If so, please tell me the package and the exact function. Many thanks.


Tags: data, function, scikit-learn, equation
1 Answer
User
#1 · Posted 2024-05-14 22:38:47

You can do this easily with the scikit-learn package, and you can try out various models. Have a look at these examples, which show how to fit curves with different models (a short sketch follows the list below):

Curve Fitting with Bayesian Ridge Regression

Polynomial interpolation

Validation curves: plotting scores to evaluate models
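As a minimal sketch of that approach (assuming scikit-learn is installed; the degree-5 polynomial and the sample data are illustrative choices, not taken from the linked examples):

import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Illustrative data: noisy samples of the curve we want to fit
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = (1 - x) / (1 + x) + 0.05 * rng.standard_normal(x.size)

# Polynomial features followed by Bayesian ridge regression,
# in the spirit of the curve-fitting example linked above
model = make_pipeline(PolynomialFeatures(degree=5), BayesianRidge())
model.fit(x.reshape(-1, 1), y)

y_fit = model.predict(x.reshape(-1, 1))
print("max abs error:", np.abs(y_fit - y).max())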

You can also fit a curve very easily with a neural network, using a framework like TensorFlow or PyTorch. Below is an example of curve fitting with PyTorch.

Curve fitting with PyTorch (function: sin(2πx))

import random
import torch
from torch import nn, optim
import numpy as np
from IPython import display  # display.clear_output is used in the training loop below

# some hyperparameters and global vars
seed = 1
random.seed(seed)
torch.manual_seed(seed)  # seed before generating data so the noisy samples are reproducible
N = 100  # number of samples
D = 1  # input dimension
C = 1  # output dimension
H = 200  # number of hidden units
learning_rate = 1e-2  # step size for parameter updates; a very important hyperparameter for neural networks
lambda_l2 = 1e-5  # L2 regularization strength (weight decay)

# Generate data, i.e. X and y values for the model to train on
X = torch.unsqueeze(torch.linspace(-1, 1, N), dim=1)  # x-axis values
y = torch.sin(2 * np.pi * X) + torch.rand(X.size())  # add some noise to the true curve sin(2πx); real-world data is noisy and we need the model to generalize


# Building the model
# use the nn package to create a small feed-forward network;
# each Linear module has a weight and a bias
model = nn.Sequential(
    nn.Linear(D, H),
    nn.ReLU(),
    nn.Linear(H, C)
)

# nn package also has different loss functions.
# we use MSE loss for our regression task
criterion = torch.nn.MSELoss()

# we use the optim package to apply
# the Adam optimizer for our parameter updates
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, weight_decay=lambda_l2)  # weight_decay gives built-in L2 regularization

# Training
for t in range(2000):
    
    # Forward pass to get the predictions
    y_pred = model(X)
    
    # Compute the loss (MSE)
    loss = criterion(y_pred, y)
    print("[EPOCH]: %i, [LOSS or MSE]: %.6f" % (t, loss.item()))
    display.clear_output(wait=True)
    
    # zero the gradients before running
    # the backward pass.
    optimizer.zero_grad()
    
    # Backward pass to compute the gradient
    # of loss w.r.t our learnable params. 
    loss.backward()
    
    # Update params
    optimizer.step()

# After 2000 epochs/iterations of training
# Output: [EPOCH]: 1999, [LOSS or MSE]: 0.062801
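To visualize the result (as in the plots referenced below), here is a minimal plotting sketch, assuming matplotlib is installed; this plotting code is not part of the original answer:

import matplotlib.pyplot as plt

# Predictions from the trained model (no gradients needed for plotting)
with torch.no_grad():
    y_fit = model(X)

plt.scatter(X.numpy(), y.numpy(), s=10, label="noisy training data")
plt.plot(X.numpy(), y_fit.numpy(), color="red", label="fitted curve")
plt.legend()
plt.show()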

Data generated from the function sin(2πx), with some noise added:

Generated data

Curve fitted by our neural network model, with roughly 6% error:

Fitted Curve


Note: if you are not familiar with PyTorch, just change the X and y values in the code above to match your data and it will work fine. You can also try different learning rates, activation functions (such as tanh), and so on to get a better fit.
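For example, to fit the equation from the question, y = (1 - x)/(1 + x), only the data-generation lines need to change. A sketch (the x-range starting at 0 is an illustrative choice that avoids the division by zero at x = -1):

# Replace the data-generation lines above with, for example:
X = torch.unsqueeze(torch.linspace(0, 1, N), dim=1)   # x-axis values (avoid x = -1, where 1 + x = 0)
y = (1 - X) / (1 + X) + 0.1 * torch.rand(X.size())    # noisy samples of y = (1 - x)/(1 + x)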
