LLaMA-Factory Source Code Walkthrough (Part 1)

1. Installing LLaMA Factory

First, install LLaMA Factory: clone the repository from GitHub, change into the project directory, and run the pip command below. The -e flag installs the current project into the active Python environment in editable mode via a symlink (the command executes the setup.py in the current directory), so changes to the source take effect without reinstalling. After installation, you can verify it by running pip list.

git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
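
To confirm the editable install is visible to the current environment, besides pip list you can also query the package metadata from Python (the distribution name llamafactory is an assumption here; adjust it if your version registers a different name):

from importlib.metadata import version

print(version("llamafactory"))  # prints the installed version string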

2. Fine-tuning, Inference, and Merging

With the installation done, we can fine-tune, run inference, and merge the LoRA adapter into the base model with the following three commands, respectively:

llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
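
If you prefer to drive these stages from a script (for example, to queue several configs), the CLI can be invoked with nothing beyond the standard library; a minimal sketch:

import subprocess

# run the training stage; export works the same way with its own YAML
# (chat is interactive, so it is less suited to scripting)
subprocess.run(
    ["llamafactory-cli", "train", "examples/train_lora/llama3_lora_sft.yaml"],
    check=True,  # raise CalledProcessError on a non-zero exit status
)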

Next, let's look at the contents of these three files in detail, focusing on model_name_or_path (the model name on the Hugging Face Hub or a local path), stage (the training stage, sft here), do_train, finetuning_type (lora here), and template (the chat template, which must match the model).

# examples/train_lora/llama3_lora_sft.yaml

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
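
One detail worth spelling out in this config: with per_device_train_batch_size: 1 and gradient_accumulation_steps: 8, the optimizer only steps once every 8 forward/backward passes, so the effective batch size is 8 per device (times the number of GPUs under DDP). A quick sanity check in Python:

# effective batch size implied by the config above; num_devices is an assumption
per_device_train_batch_size = 1
gradient_accumulation_steps = 8
num_devices = 1  # single-GPU run assumed; scale by the GPU count under DDP
print(per_device_train_batch_size * gradient_accumulation_steps * num_devices)  # 8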

# examples/inference/llama3_lora_sft.yaml

model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora
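
The same inference config can also be used programmatically. Below is a hedged sketch assuming the ChatModel class that recent LLaMA-Factory versions expose under llamafactory.chat (verify against your installed version):

from llamafactory.chat import ChatModel

# this dict mirrors examples/inference/llama3_lora_sft.yaml above
chat_model = ChatModel(dict(
    model_name_or_path="meta-llama/Meta-Llama-3-8B-Instruct",
    adapter_name_or_path="saves/llama3-8b/lora/sft",
    template="llama3",
    finetuning_type="lora",
))
messages = [{"role": "user", "content": "Who are you?"}]
for response in chat_model.chat(messages):  # returns a list of response objects
    print(response.response_text)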

# examples/merge_lora/llama3_lora_sft.yaml

### Note: DO NOT use quantized model or quantization_bit when merging lora adapters

### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora

### export
export_dir: models/llama3_lora_sft
export_size: 2                # max shard size of the exported weights, in GB
export_device: cpu            # device used while merging the weights
export_legacy_format: false   # false: save .safetensors rather than legacy .bin
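
After export, models/llama3_lora_sft is a self-contained checkpoint with the LoRA weights already merged, so it loads like any ordinary Hugging Face model. A minimal sketch, assuming transformers is installed:

from transformers import AutoModelForCausalLM, AutoTokenizer

# the merged checkpoint no longer needs adapter_name_or_path
tokenizer = AutoTokenizer.from_pretrained("models/llama3_lora_sft")
model = AutoModelForCausalLM.from_pretrained("models/llama3_lora_sft")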

3. Fine-tuning