
How to Fine-tune ChatGLM with MindSpore

2023-10-16


This article is shared from the Huawei Cloud community post "ChatGLM Fine-tuning Based on MindSpore", by JeffDing.

ChatGLM Fine-tuning Based on MindSpore

Clone the Hugging Face Model

Clone the chatglm-6b repository and download the sharded model files:


git lfs install
git clone https://huggingface.co/THUDM/chatglm-6b
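If git-lfs is not available, the same sharded weights can also be fetched from the Python side with the huggingface_hub package. This is an alternative sketch rather than part of the original walkthrough, and it assumes huggingface_hub has been pip-installed:

# Alternative sketch (assumption): download the chatglm-6b snapshot via huggingface_hub.
from huggingface_hub import snapshot_download

# Downloads all model shards and returns the local cache directory.
local_dir = snapshot_download(repo_id="THUDM/chatglm-6b")
print("Model files downloaded to:", local_dir)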

Prepare the Environment


Install Transformers


pip install transformers


Run the following Python script to merge the model weights:


from transformers import AutoModel
import torch as pt

pt_ckpt_path = "./models/chatglm-6b"
model = AutoModel.from_pretrained(pt_ckpt_path, trust_remote_code=True).half()
pt_pth_path = "models/mindspore/pt_glm_6b.pth"
pt.save(model.state_dict(), pt_pth_path)


Run the conversion script to produce the converted output file ms_glm_6b.ckpt:


python mindformers/models/glm/convert_weight.py --pt_ckpt_path /home/ma-user/work/models/mindspore/pt_glm_6b.pth --ms_ckpt_path ../models/mindspore/ms_glm_6b.ckpt
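If the conversion completes successfully, a quick sanity check is to load the converted checkpoint and list a few parameters. This is a sketch only, assuming MindSpore is installed and the output path matches the command above:

# Sketch: load the converted checkpoint and print the first few parameter names and shapes.
import mindspore as ms

param_dict = ms.load_checkpoint("../models/mindspore/ms_glm_6b.ckpt")
for name in list(param_dict)[:5]:
    print(name, param_dict[name].shape)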


Note: running the conversion script may fail with an error related to the libgomp shared library.


Solution:


export LD_PRELOAD=$LD_PRELOAD:/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/torch/lib/libgomp-d22c30c5.so.1 


Explanation: locate the libgomp-d22c30c5.so.1 file shipped with torch and add its path to the LD_PRELOAD environment variable. This error seems to occur only on the ARM platform.
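If you are not sure where that file lives in your environment, the following sketch (an addition to the original text; it assumes torch is importable, and the exact file name may differ from libgomp-d22c30c5.so.1) lists the libgomp copies bundled with torch:

# Sketch: locate the libgomp shared library shipped inside the torch package,
# so its path can be appended to LD_PRELOAD.
import glob
import os
import torch

torch_lib_dir = os.path.join(os.path.dirname(torch.__file__), "lib")
for candidate in glob.glob(os.path.join(torch_lib_dir, "libgomp*")):
    print(candidate)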

Fine-tuning

Data Processing


The ADGEN dataset task is to generate a piece of advertising copy (summary) from an input product description (content). The dataset can be prepared either by generating MindRecord files offline or by generating samples on the fly; choose either one of the two approaches.


Download link: https://cloud.tsinghua.edu.cn/f/b3f119a008264b1cabd1/?dl=1


In the task configuration file configs/glm/run_glm_6b_*.yaml, under the ==== dataset config ==== section, point dataset_dir to the *.json file and vocab_file to the vocabulary file.
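Before wiring the paths into the config, it can help to inspect one training sample. The sketch below is an addition to the original article and assumes each line of train.json is a standalone JSON object with content and summary fields, as described above:

# Sketch: peek at the first ADGEN training sample (assumed format: one JSON object per line).
import json

with open("/home/ma-user/work/data/AdvertiseGen/train.json", encoding="utf-8") as f:
    sample = json.loads(f.readline())

print("content:", sample["content"])
print("summary:", sample["summary"])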

LoRA Parameter-Efficient Fine-tuning


Launching LoRA parameter-efficient fine-tuning with the run_mindformers script


When fine-tuning with the LoRA algorithm, use the configs/glm/run_glm_6b_lora.yaml configuration file, which contains all the configuration items required by LoRA parameter-efficient fine-tuning.


Modify the dataset and model weight paths in the configuration


Dataset: in mindformers/configs/glm/run_glm_6b_lora.yaml, set dataset_dir under train_dataset to the dataset path produced in the previous section.


Pre-trained model weights: in mindformers/configs/glm/run_glm_6b_lora.yaml, set load_checkpoint to the path of the pre-trained model weights.
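To double-check that the edits point where you expect, a small verification sketch (not part of the original walkthrough; it assumes PyYAML is installed) can load the config and print the relevant paths:

# Sketch: print the checkpoint and dataset paths from run_glm_6b_lora.yaml for verification.
import yaml

with open("configs/glm/run_glm_6b_lora.yaml", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

print("load_checkpoint:", cfg["load_checkpoint"])
print("train dataset_dir:", cfg["train_dataset"]["data_loader"]["dataset_dir"])
print("train vocab_file:", cfg["train_dataset"]["tokenizer"]["vocab_file"])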


Install jieba


pip install -r requirements.txt


Start the LoRA parameter-efficient fine-tuning script (single card):


python run_mindformer.py --config=./configs/glm/run_glm_6b_lora.yaml --use_parallel=False --run_mode=finetune

Appendix: run_glm_6b_lora.yaml


seed: 0
run_mode: 'finetune'
load_checkpoint: "/home/ma-user/work/models/mindspore/ms_glm_6b.ckpt"
src_strategy_path_or_dir: ''
auto_trans_ckpt: False  # If true, auto transform load_checkpoint to load in distributed model
only_save_strategy: False
resume_training: False
output_dir: './output'  # Custom values are not currently supported; do not change this default

# ==== context config ====
context:
  mode: 0  # 0--Graph Mode; 1--Pynative Mode
  device_target: "Ascend"
  enable_graph_kernel: False
  graph_kernel_flags: "--disable_expand_ops=Softmax,Dropout --enable_parallel_fusion=true --reduce_fuse_depth=8 --enable_auto_tensor_inplace=true"
  max_call_depth: 10000
  max_device_memory: "30GB"
  save_graphs: False
  device_id: 0

# aicc
remote_save_url: "Please input obs url on AICC platform."

# ==== model config ====
model:
  model_config:
    type: GLMConfig
    vocab_size: 130528
    hidden_size: 4096
    num_layers: 28
    num_heads: 32
    inner_hidden_size: 16384
    seq_length: 512  # length that inputs are padded to at inference time; the model's maximum sequence length
    embedding_dropout_prob: 0.0
    attention_dropout_rate: 0.0
    hidden_dropout_rate: 0.0
    hidden_size_per_attention_head:  # default "None" means hidden-size/num-attention-heads.
    layernorm_order: "post"
    layernorm_epsilon: 1.0e-5
    use_final_layernorm: True
    use_past: False
    activation_func: 'GELU'
    position_encoding_2d: True
    param_init_type: "float16"
    layernorm_compute_type: "float32"
    softmax_compute_type: "float32"
    compute_dtype: "float16"
    bos_token_id: 130004
    eos_token_id: 130005
    mask_token_id: 130000
    gmask_token_id: 130001
    pad_token_id: 3
    max_decode_length: 2048  # The maximum length of the generated words.
    is_enhanced_encoder: True
    is_sample_acceleration: False
    checkpoint_name_or_path: "glm_6b_lora"
    top_k: 1
    top_p: 1
    repetition_penalty: 1
    do_sample: True
    pet_config:
      pet_type: lora
      lora_rank: 8
      lora_alpha: 32
      lora_dropout: 0.1
  arch:
    type: GLMForPreTrainingWithLora

trainer:
  type: CausalLanguageModelingTrainer
  model_name: 'glm_6b_lora'
# if True, do evaluate during the training process. if false, do nothing.
# note that the task trainer should support _evaluate_in_training function.
do_eval: False

metric:
  type: ADGENMetric

processor:
  return_tensors: ms
  tokenizer:
    type: ChatGLMTokenizer
    bos_token: '<sop>'
    eos_token: '<eop>'
    end_token: '</s>'
    mask_token: '[MASK]'
    gmask_token: '[gMASK]'
    pad_token: '<pad>'
    unk_token: '<unk>'
  type: GLMProcessor

# ==== dataset config ====
train_dataset: &train_dataset
  data_loader:
    type: ADGenDataLoader
    dataset_dir: "/home/ma-user/work/data/AdvertiseGen/train.json"
    shuffle: True
    phase: "train"
    origin_columns: ["content", "summary"]
  tokenizer:
    type: ChatGLMTokenizer
    vocab_file: "/home/ma-user/work/data/AdvertiseGen/ice_text.model"
  input_columns: ["input_ids", "labels", "position_ids", "attention_mask"]
  max_source_length: 64
  max_target_length: 64
  ignore_pad_token_for_loss: True
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 1
  repeat: 1
  numa_enable: False
  prefetch_size: 1
  seed: 0
train_dataset_task:
  type: KeyWordGenDataset
  dataset_config: *train_dataset

eval_dataset: &eval_dataset
  data_loader:
    type: ADGenDataLoader
    dataset_dir: "/home/ma-usr/work/data/AdvertiseGen/dev.json"
    shuffle: False
    phase: "eval"
    origin_columns: ["content", "summary"]
  tokenizer:
    type: ChatGLMTokenizer
    vocab_file: "/home/ma-usr/work/data/AdvertiseGen/ice_text.model"
  max_source_length: 256
  max_target_length: 256
  ignore_pad_token_for_loss: True
  input_columns: ["input_ids", "labels"]
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 1
  repeat: 1
  numa_enable: False
  prefetch_size: 1
  seed: 0
eval_dataset_task:
  type: KeyWordGenDataset
  dataset_config: *eval_dataset

# ==== runner config ====
runner_config:
  epochs: 1
  batch_size: 8
  sink_mode: True
  sink_size: 4
runner_wrapper:
  type: MFTrainOneStepCell
  scale_sense:
    type: DynamicLossScaleUpdateCell
    loss_scale_value: 4294967296
    scale_factor: 2
    scale_window: 1000
  use_clip_grad: True

# lr schedule
lr_schedule:
  type: polynomial
  learning_rate: 5.e-5
  lr_end: 1.e-6
  warmup_steps: 2000
  total_steps: -1  # -1 means it will load the total steps of the dataset

# optimizer
optimizer:
  type: FusedAdamWeightDecay
  beta1: 0.9
  beta2: 0.95
  eps: 1.e-8
  weight_decay: 0.1
layer_scale: False
lr_scale: False

# parallel config
use_parallel: False
parallel:
  parallel_mode: 0  # 0-data parallel, 1-semi-auto parallel, 2-auto parallel, 3-hybrid parallel
  gradients_mean: False
  loss_repeated_mean: True
  enable_alltoall: False
  full_batch: True
  search_mode: "sharding_propagation"
  enable_parallel_optimizer: False  # optimizer shard
  strategy_ckpt_save_file: "./ckpt_strategy.ckpt"
parallel_config:
  data_parallel: 1
  model_parallel: 1
  pipeline_stage: 1
  expert_parallel: 1
  optimizer_shard: False  # optimizer shard
  micro_batch_num: 1
  vocab_emb_dp: True
  gradient_aggregation_group: 4
micro_batch_interleave_num: 1

# moe
moe_config:
  expert_num: 1
  capacity_factor: 1.05
  aux_loss_factor: 0.05
  num_experts_chosen: 1

# recompute
recompute_config:
  recompute: False
  parallel_optimizer_comm_recompute: False
  mp_comm_recompute: True
  recompute_slice_activation: False

# autotune
auto_tune: False
filepath_prefix: './autotune'
autotune_per_step: 10

# profile
profile: False
profile_start_step: 1
profile_stop_step: 10
init_start_profile: True
profile_communication: True
profile_memory: True

# callbacks
callbacks:
  - type: MFLossMonitor
  - type: CheckpointMointor
    prefix: "glm-6b-lora"
    save_checkpoint_steps: 500
    keep_checkpoint_max: 2
    integrated_save: False
    async_save: False
  - type: ObsMonitor
    keep_last: False
eval_callbacks:
  - type: ObsMonitor
    keep_last: False
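For context on the pet_config section above: LoRA freezes the pretrained weight and trains only a low-rank update scaled by lora_alpha / lora_rank. The following is a conceptual NumPy sketch of that computation, not MindFormers code, using the hidden_size, lora_rank, and lora_alpha values from the config:

# Conceptual sketch of the LoRA forward pass configured by pet_config above.
import numpy as np

hidden = 4096        # hidden_size
rank, alpha = 8, 32  # lora_rank, lora_alpha

W = np.random.randn(hidden, hidden) * 0.02  # frozen pretrained weight (not updated)
A = np.random.randn(rank, hidden) * 0.02    # trainable down-projection
B = np.zeros((hidden, rank))                # trainable up-projection, zero-initialised

x = np.random.randn(1, hidden)
y = x @ W.T + (alpha / rank) * ((x @ A.T) @ B.T)  # base output + scaled low-rank update
print(y.shape)  # (1, 4096)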

Click follow to be the first to learn about the latest Huawei Cloud technologies.
