# Finetune

- Prerequisites
- Language-only Full-Parameter Finetuning
  - Single-turn LLaMA2-7B on Alpaca
  - Single-turn InternLM-7B on Alpaca
  - Single-turn LLaMA2-7B on Gorilla
  - Multi-turn LLaMA2-7B on ShareGPT
  - Multi-turn LLaMA2-70B on ShareGPT
  - Multi-turn LLaMA2-7B on LIMA
  - Single-turn LLaMA2-7B on WizardLM
  - Single-turn CodeLLaMA-7B on WizardCode
- Language-only Parameter-Efficient Finetuning
  - Bias&Norm Tuning of LLaMA2-7B on Alpaca
  - Bias&Norm&LoRA Tuning of LLaMA2-7B on Alpaca
  - LLaMA-Adapter of LLaMA2-7B on Alpaca
  - Bias&Norm&LoRA Tuning of LLaMA2-7B on Multi-turn ShareGPT
- Multi-Modal Full-Parameter Finetuning
  - Two-Stage Training of Multi-Modal LLaMA2
    - Stage1
    - Stage2
- Finetuning with Quantization
  - Best Practice
  - Comparison