Q-learning for optimal order placement

So my last thread about reinforcement learning was flagged as too broad, which I completely understand. I've never used RL before, so I'm trying to teach it to myself - not an easy task so far. I've now read a few papers and tried to base my approach on what I learned from them, but I'm not sure whether what I'm doing makes sense, so I'd appreciate some help!

Basically, I want to use Q-learning to decide how much to order each day. Below is the relevant part of the code. Ip and Im are computed earlier in the code by simulating the daily orders without reinforcement learning, so that I can feed them into the algorithm and train it.
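For context, here is a minimal sketch of how that pre-training simulation could look (it mirrors the testing loop further down; d, I0, LT and the fixed order quantity are placeholders here - see the full code linked at the end for the real version):

# Hypothetical sketch: simulate T days with a fixed order quantity,
# recording on-hand inventory (Ip) and lost orders (Im).
# d (demand), I0 (initial stock), LT (lead time) are assumed inputs.
It = [0] * T    # net inventory (can go negative)
Ip = [0] * T    # on-hand inventory
Im = [0] * T    # lost orders
Qt = [100] * T  # fixed daily order quantity for this run

It[0] = I0 - d[0]
Ip[0], Im[0] = max(It[0], 0), max(-It[0], 0)

for i in range(1, T):
    arrived = Qt[i-LT] if i - LT >= 0 else 0
    It[i] = round(Ip[i-1] - d[i] + arrived, 0)
    Ip[i], Im[i] = max(It[i], 0), max(-It[i], 0)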

I discretized my state into 9 states (depending on how much inventory I have on hand) and my actions into 9 actions (each one meaning I should order a specific quantity). My reward function is my objective function, i.e. the total cost I want to minimize (so it's really a "loss" function). However, it isn't really being optimized, and I think that's because the Q matrix isn't being trained properly - it looks too random. Any ideas on how to improve/fix this code?
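For reference, the textbook tabular Q-learning update I'm trying to follow is sketched below (gamma is the discount factor, which my code implicitly sets to 1, and the reward enters negated because it is a cost):

# Standard tabular Q-learning update (a sketch, not my exact code):
# Q[s, a] <- Q[s, a] + alpha * (r + gamma * max_a' Q[s', a'] - Q[s, a])
def q_update(Q, s, a, r, s_next, alpha=0.2, gamma=1.0):
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])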

# Training

# Ip - on-hand inventory per day
# Im - lost orders per day
# T  - no. of days (360)
# Qt - order quantity
# h, b - cost coefficients for inventory and lost orders (defined earlier)

import random
import numpy as np

def reward(t):
    # Daily cost: holding cost on inventory plus penalty on lost orders.
    # This is a cost to minimize, hence the minus sign in the Q update below.
    return h*Ip[t] + b*Im[t]

def get_state(inv):
    # Discretize on-hand inventory into one of 9 states.
    bounds = [8, 14, 20, 26, 32, 38, 44, 50]
    for s, ub in enumerate(bounds):
        if inv <= ub:
            return s
    return 8

Q = np.zeros((9, 9))  # Q[state, action]

MAX_ITERATION = 500
alp = 0.2              # learning rate (between 0 and 1)
exploitation_p = 0.15  # exploitation probability (increased each iteration until it reaches 1)

for iteration in range(MAX_ITERATION + 1):
    for t in range(T - 1):
        state = get_state(Ip[t])

        # Epsilon-greedy selection: exploit the best known action with
        # probability exploitation_p, otherwise explore a random action.
        if random.random() < exploitation_p:
            best = np.where(Q[state] == np.max(Q[state]))[0]
            action = int(np.random.choice(best))
        else:
            action = int(np.random.choice(9))

        rew = reward(t+1)
        next_state = get_state(Ip[t+1])

        # Q-learning update; the reward is negated because it is a cost.
        Q[state, action] += alp*(-rew + np.max(Q[next_state]) - Q[state, action])

    if exploitation_p < 1:
        exploitation_p += 0.05
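After training, a quick sanity check I can do is to print the greedy action per state - if this still looks random across runs, the Q matrix probably hasn't converged:

# Inspect the learned policy: highest-Q action for each inventory state
for s in range(9):
    print("state", s, "-> action", int(np.argmax(Q[s])))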

# Testing

# It - net inventory (can go negative); I0 - initial inventory;
# d  - daily demand; LT - lead time (all defined earlier in the full code)

Ip = [0] * T
Im = [0] * T

It[0] = I0 - d[0]

if It[0] >= 0:
    Ip[0] = It[0]
else:
    Im[0] = -It[0]

Qt[0] = 0
Qbase = 100

sumIp = Ip[0]
sumIm = Im[0]

for i in range(1, T):
    # Net inventory: yesterday's stock minus today's demand, plus the
    # order placed LT days ago (once the lead time has elapsed).
    if i - LT >= 0:
        It[i] = Ip[i-1] - d[i] + Qt[i-LT]
    else:
        It[i] = Ip[i-1] - d[i]
    It[i] = round(It[i], 0)
    if It[i] >= 0:
        Ip[i] = It[i]
    else:
        Im[i] = -It[i]

    # Greedy policy: always take the best action from the learned Q matrix.
    state = get_state(Ip[i])
    best = np.where(Q[state] == np.max(Q[state]))[0]
    action = int(np.random.choice(best))

    # Action k orders Qbase scaled down by 5% per step: 100%, 95%, ..., 60%.
    Qt[i] = Qbase * (1 - 0.05*action)

    sumIp += Ip[i]
    sumIm += Im[i]

objfunc = h*sumIp + b*sumIm

print(objfunc)

If you want/need to run it, here is my full code: https://pastebin.com/vU5V0ehg

Thanks in advance!

P.S. I suppose my MAX_ITERATION should be higher (most papers seem to use around 10,000), but then the program takes far too long to run on my computer, which is why I'm using 500.
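One speed-up idea (untested) would be to precompute the state and reward for every day once, since Ip and Im are fixed during training, instead of re-binning them inside every iteration:

# Possible speed-up (untested): precompute per-day states and rewards,
# since Ip and Im don't change while the Q matrix is being trained.
states = [get_state(x) for x in Ip]
rewards = [h*Ip[t] + b*Im[t] for t in range(T)]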

