# Language-only Parameter-Efficient Finetuning

## Bias&Norm Tuning of LLaMA2-7B on Alpaca
Script:
Data:
Model Release:
Host Local Demo:
```bash
torchrun --nproc-per-node=1 demos/single_turn.py \
--llama_type llama_peft \
--llama_config /path/to/params.json --tokenizer_path /path/to/tokenizer.model \
--pretrained_path /path/to/alpaca_finetuned
```
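For intuition, bias & norm tuning freezes every large weight matrix and updates only the bias vectors and normalization parameters, a tiny fraction of the model. A minimal PyTorch sketch (the module and function names here are illustrative, not the repository's actual code):

```python
import torch.nn as nn

class Block(nn.Module):
    """A toy transformer-style block used only to illustrate the idea."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)   # large weight matrix -> frozen
        self.norm = nn.LayerNorm(dim)     # norm parameters -> trainable

def mark_bias_norm_trainable(model: nn.Module) -> tuple[int, int]:
    """Freeze everything except biases and norm parameters.

    Returns (trainable, total) parameter counts.
    """
    for name, p in model.named_parameters():
        p.requires_grad = name.endswith("bias") or "norm" in name
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total
```

In this toy block only `proj.bias` and the LayerNorm weight and bias remain trainable; for a 7B model the trainable share is similarly small.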
## Bias&Norm&LoRA Tuning of LLaMA2-7B on Alpaca
Script:
Explanation: This experiment passes two filenames to `--llama_config` simultaneously. The first, as in most other experiments, points to the `params.json` file released by Meta, which distinguishes model sizes. The second defines the inner dimension (rank) of LoRA. With this separated design, one can switch to another model size by changing only the first filename, without creating new model configuration files.
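The two-file behavior can be pictured as a simple ordered merge, where keys from later files override or extend earlier ones. The function and the config keys below are illustrative assumptions, not the repository's actual loader:

```python
import json

def merge_configs(*paths: str) -> dict:
    """Load JSON config files in order; later files override earlier keys (sketch)."""
    merged: dict = {}
    for path in paths:
        with open(path) as f:
            merged.update(json.load(f))
    return merged
```

Under this picture, `params.json` contributes the size-dependent fields while the second file adds only the PEFT-specific fields, such as the LoRA rank.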
Data:
Host Local Demo:
```bash
torchrun --nproc-per-node=1 demos/single_turn.py \
--llama_type llama_peft \
--llama_config /path/to/params.json configs/model/finetune/sg/llamaPeft_normBiasLora.json \
--tokenizer_path /path/to/tokenizer.model \
--pretrained_path /path/to/alpaca_finetuned
```
Note that `--llama_config` should be consistent with training, i.e., it should include both configuration files.
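As a refresher on the LoRA part: a frozen linear layer is augmented with a low-rank update B·A whose inner dimension r is exactly what the second configuration file controls. A schematic PyTorch module (a generic LoRA sketch, not this repository's implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = Wx + b + (alpha/r) * B(A(x)), with W and b frozen."""
    def __init__(self, in_features: int, out_features: int,
                 r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        self.lora_a = nn.Linear(in_features, r, bias=False)
        self.lora_b = nn.Linear(r, out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # low-rank update starts at zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```

Because B is zero-initialized, the layer reproduces the frozen base layer exactly at the start of finetuning.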
## LLaMA-Adapter of LLaMA2-7B on Alpaca
Script:
Data:
Host Local Demo:
```bash
torchrun --nproc-per-node=1 demos/single_turn.py \
--llama_type llama_adapter \
--llama_config /path/to/params.json --tokenizer_path /path/to/tokenizer.model \
--pretrained_path /path/to/alpaca_finetuned
```
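Conceptually, LLaMA-Adapter prepends a small set of learnable prompt tokens to the attention of selected layers and blends them in through a zero-initialized gate, so the pretrained model is undisturbed at the start of training. A simplified single-layer sketch (the real method applies gating inside the top transformer layers; this is an illustration only):

```python
import torch
import torch.nn as nn

class ZeroInitAdapterAttention(nn.Module):
    """Self-attention plus gated attention over learnable adapter prompts."""
    def __init__(self, dim: int, n_heads: int = 4, n_prompts: int = 10):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.prompts = nn.Parameter(torch.randn(1, n_prompts, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: adapter is a no-op at start

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x)                        # ordinary self-attention
        prompts = self.prompts.expand(x.size(0), -1, -1)
        adapter_out, _ = self.attn(x, prompts, prompts)    # attend to adapter prompts
        return out + torch.tanh(self.gate) * adapter_out
```

Since tanh(0) = 0, the adapter branch contributes nothing until training moves the gate away from zero, which keeps early finetuning stable.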