Keras progress bar for PyTorch
Detailed description of the pkbar Python project
pkbar
Keras style progress bar for PyTorch (pkbar)
1. Show
pkbar.Pbar
(progress bar)
loading and processing dataset
10/10 [==============================] - 1.0s
pkbar.Kbar
(Keras bar)
Epoch: 1/3
100/100 [========] - 10s 102ms/step - loss: 3.7782 - rmse: 1.1650 - val_loss: 0.1823 - val_rmse: 0.4269
Epoch: 2/3
100/100 [========] - 10s 101ms/step - loss: 0.1819 - rmse: 0.4265 - val_loss: 0.1816 - val_rmse: 0.4261
Epoch: 3/3
100/100 [========] - 10s 101ms/step - loss: 0.1813 - rmse: 0.4258 - val_loss: 0.1810 - val_rmse: 0.4254
2. Installation
pip install pkbar
3. Usage
pkbar.Pbar
(progress bar)
import pkbar
import time

pbar = pkbar.Pbar(name='loading and processing dataset', target=10)

for i in range(10):
    time.sleep(0.1)
    pbar.update(i)
loading and processing dataset
10/10 [==============================] - 1.0s
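A bar like the one above is redrawn in place by printing a carriage return (`\r`) before each frame, so every update overwrites the previous line. A minimal, self-contained sketch of that rendering idea (not pkbar's actual implementation; `render_bar` and `draw` are hypothetical helpers):

```python
import sys

def render_bar(current, target, width=30):
    # Build a Keras-style frame such as "10/10 [==============================]".
    filled = int(width * current / target)
    return '%d/%d [%s]' % (current, target, '=' * filled + '.' * (width - filled))

def draw(current, target, width=30):
    # '\r' returns the cursor to the start of the line, so each frame
    # overwrites the previous one instead of printing on a new line.
    sys.stdout.write('\r' + render_bar(current, target, width))
    sys.stdout.flush()
```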
pkbar.Kbar
(Keras bar), a concrete example:
import pkbar
import torch

# training loop
train_per_epoch = num_of_batches_per_epoch

for epoch in range(num_epochs):
    print('Epoch: %d/%d' % (epoch + 1, num_epochs))
    kbar = pkbar.Kbar(target=train_per_epoch, width=8)

    # training
    for i in range(train_per_epoch):
        outputs = model(inputs)
        train_loss = criterion(outputs, targets)
        train_rmse = torch.sqrt(train_loss).detach().cpu().numpy()
        optimizer.zero_grad()
        train_loss.backward()
        optimizer.step()
        kbar.update(i, values=[("loss", train_loss.detach().cpu().numpy()),
                               ("rmse", train_rmse)])

    # validation
    outputs = model(inputs)
    val_loss = criterion(outputs, targets)
    val_rmse = torch.sqrt(val_loss).detach().cpu().numpy()
    kbar.add(1, values=[("loss", train_loss.detach().cpu().numpy()),
                        ("rmse", train_rmse),
                        ("val_loss", val_loss.detach().cpu().numpy()),
                        ("val_rmse", val_rmse)])
Epoch: 1/3
100/100 [========] - 10s 102ms/step - loss: 3.7782 - rmse: 1.1650 - val_loss: 0.1823 - val_rmse: 0.4269
Epoch: 2/3
100/100 [========] - 10s 101ms/step - loss: 0.1819 - rmse: 0.4265 - val_loss: 0.1816 - val_rmse: 0.4261
Epoch: 3/3
100/100 [========] - 10s 101ms/step - loss: 0.1813 - rmse: 0.4258 - val_loss: 0.1810 - val_rmse: 0.4254
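The per-step loss/rmse figures in the output above are smoothed over the epoch rather than being the last batch's raw value. A simplified sketch of that kind of running average (an assumption about the display logic, not pkbar's exact code):

```python
class RunningAverage:
    """Accumulate a metric and report its mean over all updates so far."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value):
        self.total += float(value)
        self.count += 1

    @property
    def value(self):
        return self.total / self.count

# Example: a loss that starts high and settles produces a smoothed display value.
loss_avg = RunningAverage()
for step_loss in (3.0, 1.0, 0.5, 0.5):
    loss_avg.update(step_loss)
```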
4. Acknowledgement
The code of the Keras progbar comes from the Keras project.