Accessing a MySQL connection pool from Python multiprocessing
I'm trying to set up a MySQL connection pool so that my worker processes can use already-established connections instead of opening a new one each time.

I'm confused: should I pass the database cursor to each process, or is there some other way to do this? Shouldn't MySql.connector handle the pooling automatically? When I check my log file, I see a lot of connections being opened and closed... one per process.

My code looks something like this:
import multiprocessing
import os
import sys
from time import sleep

import mysql.connector
import mysql.connector.pooling
from mysql.connector.cursor import MySQLCursorDict

import config  # my own module holding dbconfig

PATH = "/tmp"

class DB(object):
    def __init__(self):
        connected = False
        while not connected:
            try:
                cnxpool = mysql.connector.pooling.MySQLConnectionPool(pool_name="pool1",
                                                                      **config.dbconfig)
                self.__cnx = cnxpool.get_connection()
                connected = True
            except mysql.connector.errors.PoolError:
                print("Sleeping.. (Pool Error)")
                sleep(5)
            except mysql.connector.errors.DatabaseError:
                print("Sleeping.. (Database Error)")
                sleep(5)
        self.__cur = self.__cnx.cursor(cursor_class=MySQLCursorDict)

    def execute(self, query):
        return self.__cur.execute(query)

def isValidFile(name):
    return True

def readFile(fname):
    d = DB()
    d.execute("""INSERT INTO users (first_name) VALUES ('michael')""")

def main():
    queue = multiprocessing.Queue()
    pool = multiprocessing.Pool(None, init, [queue])  # 'init' is defined elsewhere in my code
    for dirpath, dirnames, filenames in os.walk(PATH):
        full_path_fnames = map(lambda fn: os.path.join(dirpath, fn),
                               filenames)
        full_path_fnames = filter(isValidFile, full_path_fnames)
        pool.map(readFile, full_path_fnames)

if __name__ == '__main__':
    sys.exit(main())
4 Answers
If you plan to reuse MySQLConnection instances maintained by a pool, you may run into synchronization issues. However, simply sharing one MySQLConnectionPool instance between the worker processes and obtaining connections by calling its get_connection() method is fine, because a dedicated connection is created for each MySQLConnection instance.
import multiprocessing
from mysql.connector import pooling

def f(cnxpool: pooling.MySQLConnectionPool) -> None:
    # Dedicate connection instance for each worker process.
    cnx = cnxpool.get_connection()
    ...

if __name__ == '__main__':
    cnxpool = pooling.MySQLConnectionPool(
        pool_name='pool',
        pool_size=2,
        # connection arguments (host, user, password, ...) are omitted in this sketch
    )
    p0 = multiprocessing.Process(target=f, args=(cnxpool,))
    p1 = multiprocessing.Process(target=f, args=(cnxpool,))
    p0.start()
    p1.start()
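For completeness, here is a hedged sketch of what the elided worker body above might do with its connection; the users table and the query are purely hypothetical:

from mysql.connector import pooling

def f(cnxpool: pooling.MySQLConnectionPool) -> None:
    # Dedicated connection for this worker, taken from the shared pool.
    cnx = cnxpool.get_connection()
    try:
        cur = cnx.cursor()
        # 'users' is a hypothetical table, used only for illustration.
        cur.execute("SELECT COUNT(*) FROM users")
        print(cur.fetchone())
        cur.close()
    finally:
        # Closing a pooled connection returns it to the pool.
        cnx.close()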
You created several instances of your DB object. In mysql.connector.pooling.py, pool_name is only an attribute that lets you tell which pool it is; there is no special pool-name mapping inside the connector's pooling code.

So if you create several DB instances in def readfile(), you end up with several connection pools.

A singleton pattern is very helpful in this situation.

(It took me several hours to figure this out. In the Tornado framework, every HTTP request creates a new handler, which meant a new connection was being established every time.)
#!/usr/bin/python
# -*- coding: utf-8 -*-

import time
from multiprocessing import Pool

import mysql.connector.pooling


dbconfig = {
    "host": "127.0.0.1",
    "port": "3306",
    "user": "root",
    "password": "123456",
    "database": "test",
}


class MySQLPool(object):
    """
    Create a pool when connecting to MySQL, which will decrease the time spent
    requesting, creating and closing connections.
    """
    def __init__(self, host="127.0.0.1", port="3306", user="root",
                 password="123456", database="test", pool_name="mypool",
                 pool_size=3):
        res = {}
        self._host = host
        self._port = port
        self._user = user
        self._password = password
        self._database = database

        res["host"] = self._host
        res["port"] = self._port
        res["user"] = self._user
        res["password"] = self._password
        res["database"] = self._database
        self.dbconfig = res
        self.pool = self.create_pool(pool_name=pool_name, pool_size=pool_size)

    def create_pool(self, pool_name="mypool", pool_size=3):
        """
        Create a connection pool. Once it exists, a request to connect to
        MySQL can get a connection from this pool instead of creating one.
        :param pool_name: the name of the pool, default is "mypool"
        :param pool_size: the size of the pool, default is 3
        :return: connection pool
        """
        pool = mysql.connector.pooling.MySQLConnectionPool(
            pool_name=pool_name,
            pool_size=pool_size,
            pool_reset_session=True,
            **self.dbconfig)
        return pool

    def close(self, conn, cursor):
        """
        Close a MySQL connection and its cursor.
        :param conn:
        :param cursor:
        :return:
        """
        cursor.close()
        conn.close()

    def execute(self, sql, args=None, commit=False):
        """
        Execute an SQL statement, with or without args. The usage is similar
        to the execute() function in the pymysql module.
        :param sql: sql clause
        :param args: args needed by the sql clause
        :param commit: whether to commit
        :return: if commit, return None, else return the result
        """
        # Get a connection from the connection pool instead of creating one.
        conn = self.pool.get_connection()
        cursor = conn.cursor()
        if args:
            cursor.execute(sql, args)
        else:
            cursor.execute(sql)
        if commit is True:
            conn.commit()
            self.close(conn, cursor)
            return None
        else:
            res = cursor.fetchall()
            self.close(conn, cursor)
            return res

    def executemany(self, sql, args, commit=False):
        """
        Execute with many args. Similar to the executemany() function in pymysql.
        args should be a sequence.
        :param sql: sql clause
        :param args: args
        :param commit: commit or not.
        :return: if commit, return None, else return the result
        """
        # Get a connection from the connection pool instead of creating one.
        conn = self.pool.get_connection()
        cursor = conn.cursor()
        cursor.executemany(sql, args)
        if commit is True:
            conn.commit()
            self.close(conn, cursor)
            return None
        else:
            res = cursor.fetchall()
            self.close(conn, cursor)
            return res


if __name__ == "__main__":
    mysql_pool = MySQLPool(**dbconfig)
    sql = "select * from store WHERE create_time < '2017-06-02'"

    p = Pool()
    for i in range(5):
        p.apply_async(mysql_pool.execute, args=(sql,))
    p.close()
    p.join()
The code above creates a connection pool at the start and then gets connections from that pool inside execute(). Once the pool has been created, the job is simply to keep it alive: the pool only needs to be created once, so you no longer spend time requesting a new connection every time you want to talk to MySQL. Hope it helps!
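The answer recommends the singleton pattern, but the MySQLPool class above does not enforce one, so here is a minimal module-level sketch, assuming that same MySQLPool class is in scope (get_mysql_pool is just an illustrative name):

_pool_instance = None

def get_mysql_pool(**kwargs):
    # Return the one shared MySQLPool, creating it on first use.
    global _pool_instance
    if _pool_instance is None:
        _pool_instance = MySQLPool(**kwargs)
    return _pool_instance

If several threads could call get_mysql_pool() at the same time, the first-use creation would also need a lock.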
First of all, you create a different connection pool for every instance of your DB class. The pools having the same name does not make them a single pool.

From the documentation:

It is not an error for multiple pools to have the same name. An application that must distinguish pools by their pool_name property should create each pool with a distinct name.

Apart from that, sharing a database connection (or a connection pool) between different processes is a bad idea (and I highly doubt it would even work correctly), so each process using its own connections is actually the better option.

You can initialize the pool in the init initializer of each worker as a global variable and use it from there.

Here is a very simple example:
from multiprocessing import Pool
from mysql.connector.pooling import MySQLConnectionPool
from mysql.connector import connect
import os

pool = None

def init():
    global pool
    print("PID %d: initializing pool..." % os.getpid())
    pool = MySQLConnectionPool(...)

def do_work(q):
    con = pool.get_connection()
    print("PID %d: using connection %s" % (os.getpid(), con))
    c = con.cursor()
    c.execute(q)
    res = c.fetchall()
    con.close()
    return res

def main():
    p = Pool(initializer=init)
    for res in p.map(do_work, ['select * from test']*8):
        print(res)
    p.close()
    p.join()

if __name__ == '__main__':
    main()
Or just use a plain connection instead of a connection pool, since only one connection will be active in each process at a time anyway.

The number of concurrently used connections is implicitly limited by the size of the multiprocessing.Pool.
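A hedged sketch of that simpler variant, keeping the structure of the example above but opening one plain connection per worker process in the initializer (connection parameters elided the same way as above):

from multiprocessing import Pool
from mysql.connector import connect
import os

con = None

def init():
    global con
    print("PID %d: opening connection..." % os.getpid())
    # One plain connection per worker process; parameters elided as above.
    con = connect(...)

def do_work(q):
    c = con.cursor()
    c.execute(q)
    res = c.fetchall()
    c.close()
    return res

def main():
    p = Pool(initializer=init)
    for res in p.map(do_work, ['select * from test'] * 8):
        print(res)
    p.close()
    p.join()

if __name__ == '__main__':
    main()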