MindIE recently added support for the DeepSeek MTP (multi-token prediction) feature for inference acceleration. However, some developers see no obvious performance gain after turning the MTP switch on. This article describes a strategy for locating the cause.
The idea is simple: check whether the model emits one token or several tokens after each MTP decode step. With one draft token per step, a decode step emits two tokens when the draft is accepted and only one when it is rejected, so an average near 1.0 means MTP is not helping. Since MTP's token handling is implemented in Python, you can add logging to the Python code inside the image; there are two places where a log reveals the MTP acceptance rate (i.e., the fraction of draft tokens that pass verification).
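To make "acceptance" concrete before diving into the MindIE code, here is a minimal, self-contained sketch of the greedy-verification arithmetic. This is illustrative only, not MindIE source; the token values are made up:

```python
# Illustrative only: greedy speculative verification.
# With k draft tokens, one decode step emits between 1 and k+1 tokens.
draft = [5, 9]       # tokens proposed by the MTP head (hypothetical values)
target = [5, 7, 3]   # tokens the main model actually predicts at those positions

accepted = 0
for d, t in zip(draft, target):
    if d != t:       # the first mismatch ends acceptance
        break
    accepted += 1

tokens_this_step = accepted + 1  # accepted drafts plus one token from the main model
print(tokens_this_step)          # 2 -> one draft accepted, so this step emits 2 tokens
```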
First, in the MindIE image, open /usr/local/lib/python3.11/site-packages/mindie_llm/text_generator/plugins/mtp/mtp_plugin.py, locate the verify_greedy_one_batch() function, and print its relevant variables:
```python
@staticmethod
def verify_greedy_one_batch(verify_guess_tokens, next_guess_tokens):
    # gg counts how many draft tokens match the main model's outputs;
    # greedy verification stops at the first mismatch.
    gg = 0
    for eg, guess_tokens in enumerate(verify_guess_tokens):
        correct = next_guess_tokens[eg]
        guess = guess_tokens
        if guess != correct:
            break
        gg += 1
```
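"Print the relevant variables" could look like the line below, appended after the loop. This is a sketch, not MindIE source: the log level and the availability of `logger` in this module are assumptions, mirroring the plugin_manager patch further down.

```python
# Hypothetical log line appended at the end of verify_greedy_one_batch():
# gg = number of draft tokens accepted this step,
# len(verify_guess_tokens) = number of draft tokens the MTP head proposed.
logger.error(f"MTP verify: accepted {gg}/{len(verify_guess_tokens)} draft tokens")
```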
Alternatively, in /usr/local/lib/python3.11/site-packages/mindie_llm/text_generator/plugins/plugin_manager.py, add the following code to the __init__ and generate_token functions:
```python
from ...utils.log.logging import logger  # added

class PluginManager:
    def __init__(...):
        # Added counters: total tokens emitted, decode steps, prefill steps.
        self.all_token_num = 0
        self.all_decode_count = 0
        self.all_prefill_count = 0
        ...

    @timer.track_time_async('generate_token')
    def generate_token(self, input_metadata: InputMetadata):
        ...
        span_end(prof)
        if not input_metadata.is_dummy_batch:
            if not input_metadata.is_prefill:
                for i in range(input_metadata.batch_size):
                    next_tokens = generation_output.token_ids[i]
                    # token_ids is padded with -1; keep only the real tokens.
                    if -1 in next_tokens:
                        first_neg_one_index = np.argmax(next_tokens == -1)
                        next_tokens = next_tokens[:first_neg_one_index]
                    self.all_token_num += len(next_tokens)
                    self.all_decode_count += 1
                logger.error(
                    f"self.all_token_num is {self.all_token_num}, "
                    f"self.all_decode_count is {self.all_decode_count}, "
                    f"self.all_prefill_count is {self.all_prefill_count}. "
                    f"Ratio is {self.all_token_num / self.all_decode_count}"
                )
            if input_metadata.is_prefill:
                for _ in range(input_metadata.batch_size):
                    self.all_prefill_count += 1
        generation_output.trace_ids = trace_ids
```
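Once the service has handled some traffic, the running ratio can be pulled back out of the log. The throwaway sketch below is not part of MindIE; the log file name is a placeholder for wherever your deployment writes logger output, and the regex matches the exact f-string in the patch above:

```python
# Extract the running tokens-per-decode-step ratio from the patched log output.
import re

pattern = re.compile(r"Ratio is ([0-9.]+)")

last_ratio = None
with open("mindie_server.log", encoding="utf-8") as f:  # placeholder path
    for line in f:
        m = pattern.search(line)
        if m:
            last_ratio = float(m.group(1))

if last_ratio is not None:
    print(f"average tokens per decode step: {last_ratio:.3f}")
```

With one MTP draft token per step the ratio lies between 1.0 and 2.0; a value stuck near 1.0 means the drafts are almost never accepted, which is where to dig further.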