How can I fix the various "overflow encountered in ..." RuntimeWarnings when using least squares in SciPy?

Posted 2024-06-11 09:37:49


I am using the least_squares() function from the scipy.optimize module to calibrate a canopy structural dynamic model (CSDM). The calibrated model is then used to predict leaf area index (LAI) from thermal time (tt) data. I tried two variants: the first does not use the 'loss' parameter of least_squares(), while the second sets it to obtain a robust model. The first model fits noticeably worse than the second. However, with the second model I get the following warnings:

res_robust = least_squares(fun, x0, loss='soft_l1', f_scale=0.1, args=(tt_train, lai_train))
__main__:2: RuntimeWarning: overflow encountered in power
__main__:2: RuntimeWarning: overflow encountered in exp
C:\Anaconda3\envs\geo\lib\site-packages\scipy\optimize\_lsq\least_squares.py:220: RuntimeWarning: overflow encountered in square
  z = (f / f_scale) ** 2 

However, the fit produced by the second model (green) looks good when plotted. [plot of the training data and the two fitted curves]

My dataset has more digits after the decimal point than I actually need. Could the warnings be related to that, and can they be ignored if this precision is not required? Or is the problem more serious, with the computation itself being affected?
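For context (this is not from the original post): the warnings are not about decimal precision in the data — they occur because float64 arithmetic overflows to inf when the optimizer probes parameter values that make the exponents in the model huge. A tiny sketch reproducing both warnings:

```python
import numpy as np

# float64 overflows above ~1.8e308, and np.exp overflows for
# arguments larger than ~709.78 -- both yield inf plus a RuntimeWarning.
with np.errstate(over='ignore'):        # silence the warning locally
    e = np.exp(np.array([710.0]))       # "overflow encountered in exp"
    z = (np.array([1e200]) / 0.1) ** 2  # "overflow encountered in square"

print(np.isinf(e), np.isinf(z))
```

Both checks print `[ True]`. Whether the final fit is affected can be verified by inspecting `np.isfinite(res_robust.x)` and `res_robust.cost` after the solve.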

Here is my code. I apologize for hard-coding such a long dataset.

# Canopy structural dynamic model (CSDM)

import numpy as np
from scipy.optimize import least_squares
import matplotlib.pyplot as plt

tt_train = np.array([394.926, 629.43017, 681.39683, 921.36142, 979.08705, 1042.42455, 1109.76622, 1191.00372, 1348.94747, 1445.08913, 1631.68705, 1986.46622])
lai_train = np.array([0.35391, 0.77602, 0.78485, 1.11895, 3.12987, 3.21052, 4.85756, 5.1311, 6.22953, 7.33323, 7.38312, 3.86341])

# The CSDM formula (Duveiller et al. 2011. Retrieving wheat Green Area Index during the growing season...)
# LAI = k * (1 / ((1 + Exp(-a * (tt - T0 - Ta))) ^ c) - Exp(b * (tt - T0 - Tb)))

# initial estimates of parameters
To = 50      # plant emergence (x[0])
Ta = 1000    # midgrowth (x[1])
Tb = 2000    # end of senescence (x[2])
k = 6        # scaling factor (approx. max LAI) (x[3])
a = 0.01     # rate of growth (x[4])
b = 0.01     # rate of senescence (x[5])
c = 1        # parameter allowing some plasticity in the shape of the curve (x[6])
x0 = np.array([To, Ta, Tb, k, a, b, c])

def model(x, tt):
    return x[3] * (1 / ((1 + np.exp(-x[4] * (tt - x[0] - x[1]))) ** x[6]) - np.exp(x[5] * (tt - x[0] - x[2])))

#Define the function computing residuals for least-squares minimization
def fun(x, tt, lai):
    return model(x, tt) - lai

# calibrate two models
# the first is the simpler one - no warnings, but a poor fit
res_lsq = least_squares(fun, x0, args=(tt_train, lai_train))
# then the robust model - raises overflow RuntimeWarnings, but the result looks good when plotted
res_robust = least_squares(fun, x0, loss='soft_l1', f_scale=0.1, args=(tt_train, lai_train))

# thermal time data for the full season
tt_test = np.array([11.79375, 22.98125, 34.47708333, 45.46875, 56.95416667, 69.475, 84.39583333, 98.66875, 107.0416667, 116.7875, 129.7458333, 
141.04375, 152.9333333, 165.0791667, 180.425, 195.0395833, 209.71875, 224.3958333, 238.4020833, 252.1166667, 266.0625, 
281.0541667, 295.6270833, 310.4291667, 322.2916667, 331.6375, 338.11875, 346.7729167, 358.5770833, 369.5375, 380.3135, 
388.6364167, 394.926, 401.8093333, 409.1926667, 418.4239167, 425.3176667, 430.351, 436.0093333, 443.4780833, 451.61975,
460.3905833, 468.851, 475.3926667, 484.2051667, 497.26975, 506.9989167, 513.2426667, 519.7780833, 525.5176667, 531.9343333, 
539.2551667, 544.1426667, 549.8780833, 558.7655833, 565.49475, 568.7426667, 572.1030833, 575.55725, 578.0864167, 580.3155833,
583.0426667, 586.9280833, 592.651, 598.5155833, 602.5218333, 604.65725, 606.4968333, 610.776, 615.4135, 620.0718333, 627.8051667,
629.4301667, 629.776, 631.9176667, 635.83225, 643.8489167, 652.3989167, 662.1030833, 664.5718333, 666.3676667, 666.476, 
666.476, 666.476, 666.476, 667.5801667, 673.551, 681.3968333, 683.98225, 690.0614167, 697.301, 702.6093333, 704.9905833, 706.326,
710.39475, 713.3218333, 718.3093333, 721.90725, 724.0989167, 726.5364167, 729.7905833, 732.7635, 736.4780833, 739.0676667, 
742.4551667, 746.3114167, 746.5655833, 746.5655833, 746.5655833, 746.5655833, 746.5655833, 748.7864167, 751.9280833, 754.0614167,
754.0614167, 754.6801667, 754.6801667, 754.6801667, 754.6801667, 754.6801667, 754.6801667, 754.6801667, 760.0489167, 766.0593333,
771.8551667, 775.7968333, 782.6843333, 795.9135, 801.8239167, 804.5530833, 807.4280833, 808.3218333, 811.5530833, 816.31975,
817.4114167, 818.1405833, 820.3364167, 823.2301667, 825.2468333, 827.9676667, 831.8093333, 835.48225, 838.2280833, 840.1135,
840.8405833, 842.8385, 843.66975, 844.0385, 844.9551667, 844.9551667, 844.9551667, 844.9551667, 844.9551667, 844.9551667,
844.9551667, 845.8093333, 845.8093333, 845.8093333, 846.6739167, 849.451, 852.89475, 860.151, 867.6780833, 878.5593333,
889.301, 900.2155833, 911.3155833, 921.3614167, 931.2989167, 944.3405833, 947.4405833, 947.4405833, 947.4405833, 948.4426667,
948.4426667, 948.4426667, 948.4426667, 948.6078841, 949.8453841, 954.3495507, 960.8703841, 967.9453841, 979.0870507, 996.1058007,
1009.132884, 1019.322467, 1029.478717, 1042.424551, 1057.395384, 1069.930801, 1082.962051, 1095.920384, 1109.766217, 1124.191217,
1140.557884, 1156.843301, 1173.526634, 1191.003717, 1206.739134, 1221.112051, 1233.226634, 1247.184967, 1263.372467, 1278.449551,
1293.843301, 1311.147467, 1329.207884, 1348.947467, 1368.468301, 1388.553717, 1407.682884, 1426.270384, 1445.089134, 1461.795384,
1481.364134, 1499.868301, 1518.032884, 1538.305801, 1559.057884, 1578.743301, 1596.380801, 1614.578717, 1631.687051, 1648.178717,
1665.380801, 1682.168301, 1699.143301, 1713.870384, 1731.584967, 1749.303717, 1766.366217, 1784.191217, 1801.568301, 1818.239134,
1835.541217, 1853.730801, 1872.880801, 1891.470384, 1910.580801, 1929.641217, 1948.868301, 1967.932884, 1986.466217, 2005.532884,
2024.989134])

# apply the two models to the full-season data
lai_lsq = model(res_lsq.x, tt_test)
lai_robust = model(res_robust.x, tt_test)

plt.plot(tt_train, lai_train, 'o', markersize=4, label='training data')
plt.plot(tt_test, lai_lsq, label='fitted lsq model')
plt.plot(tt_test, lai_robust, label='fitted robust model')
plt.xlabel("tt")
plt.ylabel("LAI")
plt.legend(loc='upper left')
plt.show()
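If you want to silence the warnings at the source rather than suppress them, one common approach (a sketch, not part of the original post; `model_stable` is a hypothetical name) is to rewrite the sigmoid term using np.logaddexp, which evaluates log(1 + exp(g)) without forming exp(g), and to clip the senescence exponent below the float64 overflow threshold:

```python
import numpy as np

def model_stable(x, tt):
    # 1 / (1 + exp(g))**c  ==  exp(-c * log(1 + exp(g)))  ==  exp(-c * logaddexp(0, g)),
    # which cannot overflow even for very large g
    g = -x[4] * (tt - x[0] - x[1])
    sig = np.exp(-x[6] * np.logaddexp(0.0, g))
    # np.exp overflows float64 above ~709.78; clip the exponent so extreme
    # trial parameters give a large finite value instead of inf
    sen = np.exp(np.clip(x[5] * (tt - x[0] - x[2]), None, 700.0))
    return x[3] * (sig - sen)
```

For moderate parameter values this agrees with the original model(); for extreme trial parameters it stays finite, so least_squares can still compute a usable residual.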

1 Answer

[image: the loss-function definitions from the scipy least_squares documentation, where f_scale appears as C]

Based on the above, setting a small value for f_scale (denoted C in the formula) increases the argument passed to rho, which can produce the overflow. Increasing f_scale helps with the warnings, but the resulting fit is less satisfactory. Perhaps a different loss function, e.g. cauchy, would help.
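A quick sketch of that suggestion, using the training arrays and model from the question (whether cauchy actually gives a better fit here is untested):

```python
import numpy as np
from scipy.optimize import least_squares

tt_train = np.array([394.926, 629.43017, 681.39683, 921.36142, 979.08705,
                     1042.42455, 1109.76622, 1191.00372, 1348.94747,
                     1445.08913, 1631.68705, 1986.46622])
lai_train = np.array([0.35391, 0.77602, 0.78485, 1.11895, 3.12987, 3.21052,
                      4.85756, 5.1311, 6.22953, 7.33323, 7.38312, 3.86341])

def model(x, tt):
    return x[3] * (1 / (1 + np.exp(-x[4] * (tt - x[0] - x[1]))) ** x[6]
                   - np.exp(x[5] * (tt - x[0] - x[2])))

def fun(x, tt, lai):
    return model(x, tt) - lai

x0 = np.array([50, 1000, 2000, 6, 0.01, 0.01, 1])

# same call as in the question, but with the cauchy loss
res_cauchy = least_squares(fun, x0, loss='cauchy', f_scale=0.1,
                           args=(tt_train, lai_train))
print(res_cauchy.x)
```

Comparing res_cauchy.cost and the plotted curve against the soft_l1 result would show whether the change is an improvement for this dataset.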
