What is the best way to train a PyTorch model on a large dataset stored in a database?
I have a large dataset which, for ease of management, I keep in a sqlite database. It has about 270,000 rows, and each row is a DNA sequence up to 10,000 base pairs long. Loading the whole dataset at once is practically impossible, let alone training a model on it (I tried, and my laptop's GPU showed no reaction whatsoever, not even a sound).
So right now I run a loop. In each iteration I select a slice of the data from the database using an offset and a limit (say 500 sequences at a time), and then train the model on those 500 sequences for, say, 10 epochs.
The relevant part of my code is:
import sqlite3
import pandas as pd
import torch
from torch.utils.data import DataLoader, TensorDataset

conn = sqlite3.connect("my_data.db")  # path to the sqlite database
my_offset = 0
my_limit = 500
my_batch_size = 100
some_model = MyModel()
db_page_number = 0

# pagination
while db_page_number < 100:
    db_page_number += 1
    query = f"SELECT sequences, classification FROM `table` ORDER BY id LIMIT {my_limit} OFFSET {my_offset}"
    paged_data = pd.read_sql_query(query, conn)
    my_offset += my_limit
    x, y = get_x_y_from(paged_data)
    train_dataset = TensorDataset(x, y)
    train_loader = DataLoader(train_dataset, batch_size=my_batch_size)
    for epoch in range(10):
        # ... typical pytorch code ...
        for data in train_loader:
            x1, y1 = data
            optimizer.zero_grad()                               # reset gradients from the previous batch
            predicted_outputs = some_model(x1)                  # predict output from the model
            train_loss = loss_function(predicted_outputs, y1)   # calculate loss for the predicted output
            train_loss.backward()                               # back propagate the loss
            optimizer.step()                                    # adjust params based on the calculated gradients
        # ... typical pytorch code ...
Here I am doing the pagination manually (the outermost while loop). My code runs successfully, but I am wondering whether there is a better way to do this. Could pandas or some other library handle the pagination part, perhaps? I'd be glad to hear any suggestions.
2 Answers
One option is to create a table called pagination_control from which the limit and the offset can be retrieved.
This table has two core columns, say p_limit and p_offset.
That makes it possible to adjust the limit and the offset flexibly as needed. After a block of data has been extracted, you update the pagination_control table so it is ready for the next block, by adding the limit to the offset.
Example
Consider the following example:
/* Cleanup demo environment just in case */
DROP TABLE IF EXISTS `table`;
DROP TABLE IF EXISTS pagination_control;
/* Create the core table */
CREATE TABLE IF NOT EXISTS `table` (id INTEGER PRIMARY KEY, sequences TEXT, classification TEXT, other_if_any TEXT DEFAULT 'ooops');
/* load the core table with some 300000 rows (after a little playing around) */
WITH
/* Create a CTE (temp table) with the core sequence identifiers */
/* note very little knowledge of DNA */
sequences(seq) AS (
SELECT 'A' UNION ALL SELECT 'B' UNION ALL SELECT 'C' UNION ALL SELECT 'G' UNION ALL SELECT 'T'
),
/* create CTE with groups of identifiers (which could potentially compress the stored sequences; intended just as a hint) */
/* i.e. all permutations (3125) of 5 identifiers */
grouped_sequences_by_5 AS (
SELECT DISTINCT
groupof5.seq ||s2.seq||s3.seq||s4.seq||s5.seq AS groupof5
FROM sequences AS groupof5
JOIN sequences AS s2
JOIN sequences AS s3
JOIN sequences AS s4
JOIN sequences AS s5
)
,
/* Create another CTE of random rows (300000 rows) */
ready_to_insert(sequences,classification) AS (
SELECT
(SELECT groupof5 FROM grouped_sequences_by_5 ORDER BY random() LIMIT 1 ),
'CAT'||(abs(random()) % 10)
UNION ALL SELECT
(SELECT groupof5 FROM grouped_sequences_by_5 ORDER BY random() LIMIT 1 ),
'CAT'||(abs(random()) % 10)
FROM ready_to_insert LIMIT 300000
)
INSERT INTO `table` (sequences,classification) SELECT * FROM ready_to_insert;
SELECT * FROM `table`;
/*----------------------------------------*/
/* Now demonstrate the pagination_control */
CREATE TABLE IF NOT EXISTS pagination_control (id INTEGER PRIMARY KEY,p_limit,p_offset);
/* initialise pagination table */
INSERT OR REPLACE INTO pagination_control VALUES(1,1000,0);
SELECT * FROM `table` ORDER BY id LIMIT (SELECT p_limit FROM pagination_control) OFFSET (SELECT p_offset FROM pagination_control);
/* Always update after selection to ready for next block */
UPDATE pagination_control SET p_offset = p_offset + p_limit;
SELECT * FROM `table` ORDER BY id LIMIT (SELECT p_limit FROM pagination_control) OFFSET (SELECT p_offset FROM pagination_control);
/* Always update after selection to ready for next block (again) */
UPDATE pagination_control SET p_offset = p_offset + p_limit;
/* optional to alter e.g. set blocks to 500 rows per block */
UPDATE pagination_control SET p_limit = 500;
/* and so on */
SELECT * FROM `table` ORDER BY id LIMIT (SELECT p_limit FROM pagination_control) OFFSET (SELECT p_offset FROM pagination_control);
UPDATE pagination_control SET p_offset = p_offset + p_limit;
SELECT * FROM `table` ORDER BY id LIMIT (SELECT p_limit FROM pagination_control) OFFSET (SELECT p_offset FROM pagination_control);
UPDATE pagination_control SET p_offset = p_offset + p_limit;
/* Cleanup the demo environment */
DROP TABLE IF EXISTS pagination_control;
DROP TABLE IF EXISTS `table`;
First, some 300000 rows are inserted (only to demonstrate the use of pagination_control, of course).
After that, the use of the pagination_control table is demonstrated.
The table is first created with the two core columns (p_limit and p_offset), plus an id column used to maintain the single control row.
The next SELECT (result 2) shows pagination_control being used to determine which rows are selected.
The UPDATE that follows, which would normally be issued immediately after the SELECT, shows how the pagination table is readied for the next SELECT (result 3).
After the second SELECT, an UPDATE changes the LIMIT from 1000 to 500. The third SELECT (result 4) and the fourth SELECT (result 5) then each fetch the next block of 500 rows.
Of course, you can easily manipulate the pagination_control table for flexible control; for example, to reset it you could update it so that p_limit is 1000 and p_offset is 0 and then page through again in blocks of 1000.
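For completeness, here is a minimal sketch (not part of the answer itself) of how the pagination_control table could be driven from the Python side of the training loop. The database file name my_data.db and the helper fetch_next_block are placeholder names; the table and column names are the ones from the demo above, and sqlite3/pandas are assumed to be available as in the question:

import sqlite3
import pandas as pd

# Sketch only: assumes the demo schema above (`table`, pagination_control)
# already exists in my_data.db (placeholder file name).
conn = sqlite3.connect("my_data.db")

def fetch_next_block(conn):
    """Fetch the current block of rows, then advance pagination_control."""
    block = pd.read_sql_query(
        "SELECT sequences, classification FROM `table` ORDER BY id"
        " LIMIT (SELECT p_limit FROM pagination_control)"
        " OFFSET (SELECT p_offset FROM pagination_control)",
        conn,
    )
    # Always update after the selection, readying the next block.
    conn.execute("UPDATE pagination_control SET p_offset = p_offset + p_limit")
    conn.commit()
    return block

while True:
    paged_data = fetch_next_block(conn)
    if paged_data.empty:
        break
    # x, y = get_x_y_from(paged_data)   # hand the block to the training loop as in the question

Each call returns one block as a DataFrame and advances p_offset, so the training code from the question can consume blocks until an empty one comes back.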