Differences in results between basic optimization with numpy and sympy lambdify

Posted on 2024-03-29 13:05:42


I wrote the following code for basic optimization based on Newton's method, once with the derivatives written out explicitly and once with them computed via sympy. Why do the results differ?

Writing the derivatives explicitly:

import numpy as np
def g(x):
    return 1.95 - np.exp(-2/x) - 2*np.exp(-np.power(x,4))
# gd: Derivative of g

def gd(x):
    return -2*np.power(x,-2)*np.exp(-2/x) + 8*np.power(x,3)*np.exp(-np.power(x,4))

# gdd: Second derivative of g
def gdd(x):
    return -4*np.power(x,-3)*np.exp(-2/x)-4*np.power(x,-4)*np.exp(-2/x)+24*np.power(x,2)*np.exp(-np.power(x,4))-32*np.power(x,6)*np.exp(-np.power(x,4))

# Newton's
def newton_update(x0,g,gd):
    return x0 - g(x0)/gd(x0)
# Main func
x0 = 1.00
condition = True
loops = 1
max_iter = 20
while condition and loops<max_iter:   
    x1 = newton_update(x0,gd,gdd)
    loops += 1
    condition = np.abs(x0-x1) >= 0.001 
    x0 = x1
    print('x =',x0)


if loops == max_iter:
    print('Solution failed to converge. Try another starting value!')

Output:

x = 1.66382322329
x = 1.38056881356
x = 1.43948432592
x = 1.46207570893
x = 1.46847791968
x = 1.46995571549
x = 1.47027303095

Using sympy and lambdify:

import sympy as sp
x = sp.symbols('x',real=True)
f_expr = 1.95 - sp.exp(-2/x) - 2*sp.exp(-x**4)
dfdx_expr = sp.diff(f_expr, x)
ddfdx_expr = sp.diff(dfdx_expr, x)

# lambdify
f = sp.lambdify([x],f_expr,"numpy")
dfdx = sp.lambdify([x], dfdx_expr,"numpy")
ddfdx = sp.lambdify([x], ddfdx_expr,"numpy")

# Newton's
x0 = 1.0
condition = True
loops = 1
max_iter = 20
while condition and loops<max_iter:   
    x1 = newton_update(x0,dfdx,ddfdx)
    loops += 1
    condition = np.abs(x0-x1) >= 0.001 
    x0 = x1
    print('x =',x0)


if loops == max_iter:
    print('Solution failed to converge. Try another starting value!')

Output:

x = 1.90803013971
x = 3.96640484492
x = 6.6181614689
x = 10.5162392894
x = 16.3269006983
x = 25.0229734288
x = 38.0552735534
x = 57.5964036862
x = 86.9034400129
x = 130.860980508
x = 196.795321033
x = 295.695535237
x = 444.044999522
x = 666.568627836
x = 1000.35369299
x = 1501.03103981
x = 2252.04689304
x = 3378.57056168
x = 5068.35599056
Solution failed to converge. Try another starting value!

I got it to work by halving the step in the Newton update function whenever the sign of the derivative changes across the update. But I don't understand why the results are so different from the same starting point. Is it also possible to get the same result from both?
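For reference, here is a minimal sketch of the step-halving idea described above; the name newton_update_damped and the max_halvings parameter are illustrative, not taken from the original post. The Newton step is halved as long as the derivative changes sign between the current point and the proposed update.

def newton_update_damped(x0, gd, gdd, max_halvings=50):
    # Full Newton step for minimizing g, i.e. for solving gd(x) = 0
    step = gd(x0) / gdd(x0)
    for _ in range(max_halvings):
        x1 = x0 - step
        # Accept the step only if the derivative keeps the same sign
        # between x0 and x1; otherwise halve the step and retry.
        if gd(x0) * gd(x1) > 0:
            return x1
        step *= 0.5
    return x0 - step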


Tags: numpy, return, def, np, condition, sp, max, power
1 Answer

Posted on 2024-03-29 13:05:42

The second-derivative formula in the function gdd is wrong. Changing

def gdd(x):
    return -4*np.power(x,-3)*np.exp(-2/x)-4*np.power(x,-4)*np.exp(-2/x)+24*np.power(x,2)*np.exp(-np.power(x,4))-32*np.power(x,6)*np.exp(-np.power(x,4))

to

def gdd(x):
    return 4*np.power(x,-3)*np.exp(-2/x)-4*np.power(x,-4)*np.exp(-2/x)+24*np.power(x,2)*np.exp(-np.power(x,4))-32*np.power(x,6)*np.exp(-np.power(x,4))

should fix the problem and produce the same result in both cases, which will be

x = 1.90803013971
x = 3.96640484492
x = 6.6181614689
x = 10.5162392894
x = 16.3269006983
x = 25.0229734288
x = 38.0552735534
x = 57.5964036862
x = 86.9034400129
x = 130.860980508
x = 196.795321033
x = 295.695535237
x = 444.044999522
x = 666.568627836
x = 1000.35369299
x = 1501.03103981
x = 2252.04689304
x = 3378.57056168
x = 5068.35599056
Solution failed to converge. Try another starting value!

This points to the issue of choosing the step size, as noted in the comments.
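As a quick sanity check (a sketch reusing the names already defined in the question, not part of the original answer), one can compare the hand-written derivatives against the sympy-lambdified ones at a few sample points; the first derivatives agree, while the second derivatives only agree once the sign of the first term in gdd has been corrected:

import numpy as np

for xv in [0.5, 1.0, 1.5, 2.0]:
    print(xv,
          np.isclose(gd(xv), dfdx(xv)),     # hand-written vs sympy first derivative
          np.isclose(gdd(xv), ddfdx(xv)))   # agrees only after the sign fix in gdd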
