How do I pass arguments into a thread?

Published: 2024-05-28 19:13:47


I want to use 5 threads from the threading module to add 5 to each element in range(1, 100), and see which result comes from which thread. I have most of the code written, but how do I pass the arguments to threading.Thread?

import threading,queue
x=range(1,100)
y=queue.Queue()
for i in x:
    y.put(i)

def myadd(x):
    print(x+5)


for i in range(5):
    print(threading.Thread.getName())
    threading.Thread(target=myadd,args=x).start() #it is wrong here
    y.join()

To dano: it works now. To run it interactively, I rewrote it as:

Method 1: run interactively.

from concurrent.futures import ThreadPoolExecutor
import threading
x = range(1, 100)

def myadd(x):
    print("Current thread: {}. Result: {}.".format(threading.current_thread(), x+5))

def run():
    t = ThreadPoolExecutor(max_workers=5)
    t.map(myadd, x)
    t.shutdown()
run()

Method 2:

from concurrent.futures import ThreadPoolExecutor
import threading
x = range(1, 100)
def myadd(x):
    print("Current thread: {}. Result: {}.".format(threading.current_thread(), x+5))
def run():
    t = ThreadPoolExecutor(max_workers=5)
    t.map(myadd, x)
    t.shutdown()
if __name__=="__main__":
    run()
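Either method can also be written with the executor as a context manager, which calls shutdown() for you when the block exits. A sketch along the same lines as Method 2:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

x = range(1, 100)

def myadd(x):
    print("Current thread: {}. Result: {}.".format(threading.current_thread(), x + 5))

def run():
    # The executor is shut down automatically when the with-block exits.
    with ThreadPoolExecutor(max_workers=5) as t:
        t.map(myadd, x)

if __name__ == "__main__":
    run()
```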

What if there are more arguments to pass to the ThreadPoolExecutor? I want to calculate 1+3, 2+4, 3+5, and so on up to 99+101 with the multiprocessing module. And what about 20+1, 20+2, 20+3 up to 20+100?

from multiprocessing.pool import ThreadPool
do = ThreadPool(5)
def myadd(x,y):
    print(x+y)

do.apply(myadd,range(3,102),range(1,100))

How can I fix this?


3 Answers

From:

import threading,queue
x=range(1,100)
y=queue.Queue()
for i in x:
    y.put(i)

def myadd(x):
    print(x+5)


for i in range(5):
    print(threading.Thread.getName())
    threading.Thread(target=myadd,args=x).start() #it is wrong here
    y.join()

To:

import threading
import queue

# So print() in various threads doesn't garble text; 
# I hear it is better to use RLock() instead of Lock().
screen_lock = threading.RLock() 

# range() in Python 3 is a lazy, immutable sequence; reading it
# from multiple threads is safe.
argument1 = range(1, 100)
argument2 = [100,] * 100 # will add 100 to each item in argument1

# zip() returns an iterator of (immutable) tuples; it is consumed
# once, below, when filling the queue.
data = zip(argument1, argument2)

# object where multiple threads can grab data while avoiding deadlocks.
q = queue.Queue()

# Fill the thread-safe queue with mock data
for item in data:
    q.put(item)

# It could be wiser to use one queue for each inbound data stream.
# For example one queue for file reads, one queue for console input,
# one queue for each network socket. Remembering that rates of 
# reading files and console input and receiving network traffic all
# differ and you don't want one I/O operation to block another.
# inbound_file_data = queue.Queue()
# inbound_console_data = queue.Queue() # etc.

# This function is a thread target
def myadd(thread_name, a_queue):

    # This thread-targeted function blocks only within each thread,
    # at a_queue.get() and at a_queue.put() (if the queue is full).
    #
    # Each thread targeting this function has its own copy of the
    # function's local namespace, so each thread will pause when the
    # queue is empty (on queue.get()) or when the queue is full
    # (on queue.put()). With one shared queue this means all threads
    # block at the same time, when the single queue is full or empty,
    # unless we check the number of remaining items before calling
    # queue.get() and simply return when none remain. This presumes
    # the data is bounded, like a closed file, rather than a slow
    # continuous stream such as a network connection or a rotating
    # log file.

    # Let each thread continue to read from the global 
    # queue until it is empty. 
    # 
    # This is a bad use-case for using threading. 
    # 
    # If each thread had a separate queue it would be 
    # a better use-case. You don't want one slow stream of 
    # data blocking the processing of a fast stream of data.
    #
    # For a single stream of data it is likely better just not 
    # to use threads. However here is a single "global" queue 
    # example...

    # presumes a_queue starts off not empty
    while a_queue.qsize():
        arg1, arg2 = a_queue.get() # blocking call

        # prevent console/tty text garble
        # prevent console/tty text garble; a blocking acquire()
        # always succeeds, so a with-block is the simplest form
        with screen_lock:
            print('{}: {}'.format(thread_name, arg1 + arg2))
            print('{}: {}'.format(thread_name, arg1 + 5))
            print()

        # allows .join() to keep track of when queue finished
        a_queue.task_done()


# create threads and pass in thread name and queue to thread-target function
threads = []
for i in range(5):
    thread_name = 'Thread-{}'.format(i)
    thread = threading.Thread(
        name=thread_name, 
        target=myadd, 
        args=(thread_name, q))

    # Recommended:
    # queues = [queue.Queue() for index in range(5)] # one per thread; put at top of file
    # thread = threading.Thread(
    #   target=myadd, 
    #   name=thread_name, 
    #   args=(thread_name, queues[i],))
    threads.append(thread)

# some applications should start threads after all threads are created.
for thread in threads:
    thread.start()

# Each thread will pull items off the queue. Because the while loop in 
# myadd() ends with the queue.qsize() == 0 each thread will terminate 
# when there is nothing left in the queue.
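One caveat with the loop above: checking qsize() and then calling get() is racy — another thread can drain the last item in between, leaving get() blocked forever. A common alternative (a sketch, using the same mock data) is to let daemon workers loop on get() forever and have the main thread wait on q.join():

```python
import queue
import threading

q = queue.Queue()
for item in zip(range(1, 100), [100] * 99):
    q.put(item)

def worker(name, a_queue):
    # Loop forever; get() blocks until an item arrives, and the
    # daemon flag lets the interpreter exit while workers sleep here.
    while True:
        arg1, arg2 = a_queue.get()
        print('{}: {}'.format(name, arg1 + arg2))
        a_queue.task_done()

for i in range(5):
    threading.Thread(target=worker,
                     args=('Thread-{}'.format(i), q),
                     daemon=True).start()

q.join()  # returns once every queued item has been task_done()'d
```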

It looks like you are trying to hand-roll a thread pool so that five threads add up all 100 results. If that is the case, I would recommend using multiprocessing.pool.ThreadPool for this:

from multiprocessing.pool import ThreadPool
import threading
import queue

x = range(1, 100)

def myadd(x):
    print("Current thread: {}. Result: {}.".format(
               threading.current_thread(), x+5))

t = ThreadPool(5)
t.map(myadd, x)
t.close()
t.join()

If you are using Python 3.x, you can use ThreadPoolExecutor instead:

from concurrent.futures import ThreadPoolExecutor
import threading

x = range(1, 100)

def myadd(x):
    print("Current thread: {}. Result: {}.".format(threading.current_thread(), x+5))

t = ThreadPoolExecutor(max_workers=5)
t.map(myadd, x)
t.shutdown()
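If you want to collect results instead of printing inside the worker, submit() together with as_completed() is a common pattern — a sketch along the same lines:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import threading

x = range(1, 100)

def myadd(x):
    # Return the result rather than printing inside the worker.
    return threading.current_thread().name, x + 5

with ThreadPoolExecutor(max_workers=5) as t:
    futures = [t.submit(myadd, i) for i in x]
    for f in as_completed(futures):  # yields futures as they finish
        name, result = f.result()
        print("{}: {}".format(name, result))
```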

I think there are two issues with your original code. First, you need to pass a tuple to the args keyword argument, not a single element:

threading.Thread(target=myadd,args=(x,))

However, you are also trying to pass the entire list returned by range(1, 100) (or the range object, if you are using Python 3.x) to myadd, which is not really what you want to do. It is also not clear what you are using the queue for. Maybe you meant to pass that to myadd?

One final note: Python uses a Global Interpreter Lock (GIL), which prevents more than one thread from using the CPU at a time. This means that doing CPU-bound operations (like addition) in threads will not improve performance in Python, because only one of the threads can run at a time. Therefore, in Python it is preferred to use multiple processes to parallelize CPU-bound operations. You could make the code above use multiple processes by replacing ThreadPool in the first example with Pool, imported via from multiprocessing import Pool. In the second example, you would use ProcessPoolExecutor instead of ThreadPoolExecutor. You would probably also want to replace threading.current_thread() with os.getpid().
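A process-based version of the second example might look like this (a sketch, using ProcessPoolExecutor and os.getpid as suggested):

```python
from concurrent.futures import ProcessPoolExecutor
import os

x = range(1, 100)

def myadd(x):
    # Report the worker process id instead of the thread name.
    print("Current pid: {}. Result: {}.".format(os.getpid(), x + 5))

def run():
    with ProcessPoolExecutor(max_workers=5) as p:
        p.map(myadd, x)

if __name__ == "__main__":  # required on platforms that spawn processes
    run()
```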

Edit:

Here is how to handle the case where you want to pass two different arguments:

from multiprocessing.pool import ThreadPool

def myadd(x,y):
    print(x+y)

def do_myadd(x_and_y):
    return myadd(*x_and_y)

do = ThreadPool(5)    
do.map(do_myadd, zip(range(3, 102), range(1, 100)))

We use zip to create a list that pairs up the elements of the two ranges:

[(3, 1), (4, 2), (5, 3), ...]

We use map to call do_myadd with each tuple in that list, and do_myadd uses tuple expansion (*x_and_y) to expand the tuple into two separate arguments, which get passed to myadd.
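On Python 3.3 and later, the do_myadd wrapper can be avoided: Pool.starmap (also available on ThreadPool) unpacks each tuple for you. A sketch:

```python
from multiprocessing.pool import ThreadPool

def myadd(x, y):
    print(x + y)

do = ThreadPool(5)
# starmap unpacks each (x, y) tuple into two arguments for myadd
do.starmap(myadd, zip(range(3, 102), range(1, 100)))
do.close()
do.join()
```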

You need to pass a tuple here, rather than a single element.

The code to build the tuple would be:

dRecieved = connFile.readline()
processThread = threading.Thread(target=processLine, args=(dRecieved,))
processThread.start()
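Why the trailing comma matters: Thread calls target(*args), and a bare string is itself iterable, so it would be spread into one argument per character. A quick demonstration:

```python
def show(*args):
    # Echo back exactly the positional arguments received.
    return args

s = "hi"
print(show(*s))     # what args=s would do: one argument per character
print(show(*(s,)))  # what args=(s,) does: the whole string as one argument
```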

See here for more explanation.
