Error when training a transformer with QLoRA and Peft
I'm trying to fine-tune Google's Gemma model with Peft and QLoRA. Yesterday I successfully fine-tuned it for one epoch as a test. However, when I opened the notebook today and ran the code that loads the model, I got a large error:
Code:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-7b"

# 4-bit NF4 quantization config for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map={"": 0},  # map the entire model onto GPU 0
)
# model.gradient_checkpointing_enable()
train_dataset, val_dataset, data_collator = load_dataset(train_data_path, val_data_path, tokenizer)
Error message (simplified):
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=
.....
DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=
.....
RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):
CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "../aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=1, num_gpus=
.....
I simplified the error message for readability. Has anyone run into something similar? I can't seem to solve this. Any help is much appreciated.
1 Answer
It looks like the memory was reset when you opened the notebook, so the previous CUDA initialization state was gone, which is what caused the error above. The only way to resolve it is to re-run the entire notebook, not just that one particular cell.
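As a quick sanity check after restarting (a minimal sketch, not part of the original answer; it uses only the standard torch.cuda API and assumes a single-GPU setup), you can confirm the kernel actually sees the GPU before re-running the model-loading cell:

import torch

# Both calls query the (lazily initialized) CUDA runtime.
# If is_available() is False or device_count() is 0, this kernel has no
# usable CUDA context, and loading the model with device_map={"": 0}
# will fail at initialization, as in the traceback above.
print(torch.cuda.is_available())   # expect: True
print(torch.cuda.device_count())   # expect: >= 1

If device_count() comes back as 0, restart the kernel (or the whole runtime) so the CUDA context is re-initialized, then run every cell from the top.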