LLaVA: A Multimodal GPT-4-Level Assistant Fusing Language and Vision to Meet Complex User Needs
LLaVA is an assistant built toward multimodal GPT-4-level capabilities. It combines natural language processing and computer vision to give users powerful multimodal interaction and understanding. LLaVA aims to understand and process language and visual information more deeply, enabling more complex tasks and conversations. The project represents the direction of next-generation intelligent assistants: better understanding of, and response to, user needs.
Showcase
Demo link: https://llava.hliu.cc/
What should I be cautious about when I visit here?
When visiting the pier on this calm lake, there are a few things you should be careful about. First, pay attention to the weather conditions, as the image shows a cloudy sky. This could indicate a chance of sudden weather changes, such as rain or strong winds, which could make the pier slippery or unsafe. Second, be mindful of the water depth and the possibility of sudden changes in water level, since the pier is located in the lake. Finally, watch out for any wildlife or insects that may be present in the area, as they could pose a threat to your safety or health. To ensure a safe and enjoyable experience, it is essential to stay alert and respect the environment.
Comparison with mainstream large models
1. Installation
Clone this repository and navigate to the LLaVA folder
Install Package
Install additional packages for training cases
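A minimal sketch of these steps, following the commands documented in the LLaVA repository (the conda environment name and Python version are the upstream suggestions and may change between releases):

```bash
# Clone the repository and enter the folder
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA

# Create an isolated environment and install the package
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip
pip install -e .

# Extra dependencies needed only for training
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
```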
1.1 Upgrade to the latest codebase
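Roughly, upgrading amounts to pulling the latest code and reinstalling the package in editable mode, for example:

```bash
git pull
pip install -e .
```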
2. LLaVA Weights
Please check out our Model Zoo for all public LLaVA checkpoints, and the instructions of how to use the weights.
2.1 Demo
To run our demo, you need to prepare LLaVA checkpoints locally. Please follow the instructions here to download the checkpoints.
2.2 Gradio Web UI
To launch the Gradio demo locally, run the following commands one by one. If you plan to launch multiple model workers to compare different checkpoints, you only need to launch the controller and the web server once.
Launch a controller
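For example (the host and port follow the upstream README; adjust them to your setup):

```bash
python -m llava.serve.controller --host 0.0.0.0 --port 10000
```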
Launch a gradio web server.
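For example, pointing the web server at the controller started above (the `--model-list-mode reload` option follows the upstream README):

```bash
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload
```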
You have just launched the Gradio web interface. Now you can open the web UI at the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. The list will be updated automatically when you launch a model worker.
Launch a model worker
This is the actual worker that performs the inference on the GPU. Each worker is responsible for a single model specified in `--model-path`.
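A sketch of launching a single worker; the checkpoint name `liuhaotian/llava-v1.5-13b` is one of the public checkpoints, substitute your own:

```bash
python -m llava.serve.model_worker --host 0.0.0.0 \
    --controller http://localhost:10000 \
    --port 40000 --worker http://localhost:40000 \
    --model-path liuhaotian/llava-v1.5-13b
```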
Wait until the process finishes loading the model and you see "Uvicorn running on ...". Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.
You can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the `--controller` the same, and modify the `--port` and `--worker` to a different port number for each worker.
If you are using an Apple device with an M1 or M2 chip, you can specify the mps device by using the `--device` flag: `--device mps`.
Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)
If the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it on multiple GPUs. Our latest codebase will automatically try to use multiple GPUs if more than one is available. You can use `CUDA_VISIBLE_DEVICES` to specify which GPUs to use. Below is an example of running with the first two GPUs.
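The command is the same single-worker command as above, prefixed with `CUDA_VISIBLE_DEVICES`:

```bash
CUDA_VISIBLE_DEVICES=0,1 python -m llava.serve.model_worker --host 0.0.0.0 \
    --controller http://localhost:10000 \
    --port 40000 --worker http://localhost:40000 \
    --model-path liuhaotian/llava-v1.5-13b
```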
Launch a model worker (4-bit, 8-bit inference, quantized)
You can launch the model worker with quantized bits (4-bit, 8-bit), which lets you run inference with a reduced GPU memory footprint, potentially allowing you to run on a GPU with as little as 12GB of VRAM. Note that inference with quantized bits may not be as accurate as the full-precision model. Simply append `--load-4bit` or `--load-8bit` to the model worker command you are executing. Below is an example of running with 4-bit quantization.
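For example, appending `--load-4bit` to the worker command shown earlier:

```bash
python -m llava.serve.model_worker --host 0.0.0.0 \
    --controller http://localhost:10000 \
    --port 40000 --worker http://localhost:40000 \
    --model-path liuhaotian/llava-v1.5-13b \
    --load-4bit
```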
Launch a model worker (LoRA weights, unmerged)
You can launch the model worker with LoRA weights without merging them into the base checkpoint, to save disk space. There is additional loading time, while inference speed is the same as with merged checkpoints. Unmerged LoRA checkpoints do not have `lora-merge` in the model name, and they are usually much smaller (less than 1GB) than merged checkpoints (13G for 7B, and 25G for 13B).
To load unmerged LoRA weights, you simply need to pass an additional argument `--model-base`, which is the base LLM that was used to train the LoRA weights. You can check the base LLM of each LoRA weight in the Model Zoo.
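A sketch, assuming a public LoRA checkpoint such as `liuhaotian/llava-v1.5-13b-lora` trained on the base `lmsys/vicuna-13b-v1.5` (check the Model Zoo for the actual pairing of LoRA weights and base LLM):

```bash
python -m llava.serve.model_worker --host 0.0.0.0 \
    --controller http://localhost:10000 \
    --port 40000 --worker http://localhost:40000 \
    --model-path liuhaotian/llava-v1.5-13b-lora \
    --model-base lmsys/vicuna-13b-v1.5
```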
3. CLI Inference
Chat about images using LLaVA without the need of a Gradio interface. It also supports multi-GPU, 4-bit and 8-bit quantized inference. With 4-bit quantization, our LLaVA-1.5-7B uses less than 8GB of VRAM on a single GPU.
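For example (the image URL is the sample view.jpg used in the upstream examples; any local path or URL works):

```bash
python -m llava.serve.cli \
    --model-path liuhaotian/llava-v1.5-7b \
    --image-file "https://llava-vl.github.io/static/images/view.jpg" \
    --load-4bit
```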
<img src="images/demo_cli.gif" width="70%">
4. Model Training
Below are the latest training configurations for LLaVA v1.5. For legacy models, please refer to the README of that version. We will add them to a separate document later.
LLaVA training consists of two stages: (1) feature alignment stage: use a 558K subset of the LAION-CC-SBU dataset to connect a frozen pretrained vision encoder to a frozen LLM; (2) visual instruction tuning stage: use 150K GPT-generated multimodal instruction-following data, plus around 515K VQA data from academic tasks, to teach the model to follow multimodal instructions.
LLaVA is trained on 8 A100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the `per_device_train_batch_size` and increase the `gradient_accumulation_steps` accordingly. Always keep the global batch size the same: `per_device_train_batch_size` x `gradient_accumulation_steps` x `num_gpus`.
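For example, if a reference setup uses `per_device_train_batch_size` 16 on 8 GPUs with `gradient_accumulation_steps` 1 (global batch size 16 × 1 × 8 = 128), then training on 4 GPUs would use `gradient_accumulation_steps` 2, so that 16 × 2 × 4 still equals 128. (These numbers are only illustrative; take the actual batch size from the script you are running.)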
4.1 Hyperparameters
We use a similar set of hyperparameters as Vicuna in finetuning. Both hyperparameters used in pretraining and finetuning are provided below.
Pretraining
Finetuning
4.2 Download Vicuna checkpoints (automatically)
Our base model Vicuna v1.5, which is an instruction-tuned chatbot, will be downloaded automatically when you run our provided training scripts. No action is needed.
4.3 Pretraining (feature alignment)
Please download the 558K subset of the LAION-CC-SBU dataset with BLIP captions that we use in the paper here.
Pretraining takes around 5.5 hours for LLaVA-v1.5-13B on 8x A100 (80G), due to the increased resolution of 336px. It takes around 3.5 hours for LLaVA-v1.5-7B.
Training script with DeepSpeed ZeRO-2: `pretrain.sh`.
- `--mm_projector_type mlp2x_gelu`: the two-layer MLP vision-language connector.
- `--vision_tower openai/clip-vit-large-patch14-336`: CLIP ViT-L/14 336px.
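Assuming the script is located at `scripts/v1_5/pretrain.sh`, as in recent versions of the repository (verify the path in your checkout), this stage can be launched as:

```bash
bash scripts/v1_5/pretrain.sh
```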
4.4 Visual Instruction Tuning
Prepare data
Please download the annotation of the final mixture of our instruction tuning data, llava_v1_5_mix665k.json, and download the images from the constituting datasets:
COCO: train2017
GQA: images
OCR-VQA: download script
TextVQA: train_val_images
After downloading all of them, organize the data as follows in `./playground/data`:
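An illustrative layout (the folder names below reflect how the archives above typically extract and are an assumption; adjust them to match the paths expected by your training config):

```
./playground/data
├── llava_v1_5_mix665k.json
├── coco
│   └── train2017
├── gqa
│   └── images
├── ocr_vqa
│   └── images
└── textvqa
    └── train_images
```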
Start training!
You may download our pretrained projectors in Model Zoo. It is not recommended to use legacy projectors, as they may be trained with a different version of the codebase, and if any option is off, the model will not function/train as we expected.
Visual instruction tuning takes around 20 hours for LLaVA-v1.5-13B on 8x A100 (80G), due to the increased resolution to 336px. It takes around 10 hours for LLaVA-v1.5-7B on 8x A100 (40G).
Training script with DeepSpeed ZeRO-3: `finetune.sh`.
New options to note:
- `--mm_projector_type mlp2x_gelu`: the two-layer MLP vision-language connector.
- `--vision_tower openai/clip-vit-large-patch14-336`: CLIP ViT-L/14 336px.
- `--image_aspect_ratio pad`: this pads the non-square images to square, instead of cropping them; it slightly reduces hallucination.
- `--group_by_modality_length True`: this should only be used when your instruction tuning dataset contains both language data (e.g. ShareGPT) and multimodal data (e.g. LLaVA-Instruct). It makes the training sampler only sample a single modality (either image or language) during training, which we observe speeds up training by ~25% and does not affect the final outcome.
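With those options in place, the stage can be launched via the script, assuming the path `scripts/v1_5/finetune.sh` as in recent versions of the repository:

```bash
bash scripts/v1_5/finetune.sh
```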
5. Model Evaluation
In LLaVA-1.5, we evaluate models on a diverse set of 12 benchmarks. To ensure reproducibility, we evaluate the models with greedy decoding. We do not evaluate using beam search, so that the inference process is consistent with the real-time-output chat demo.
See Evaluation.md.
5.1 GPT-assisted Evaluation
Our GPT-assisted evaluation pipeline for multimodal modeling provides a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.
Generate LLaVA responses
Evaluate the generated responses. In our case, `answer-file-ref.jsonl` is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.
Summarize the evaluation results
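A rough sketch of the three steps, based on the evaluation scripts shipped in the repository's `llava/eval` folder; the script names and flags below are an assumption and may differ between versions, so treat them as a pointer rather than an exact recipe:

```bash
# 1. Generate LLaVA responses for the evaluation questions (script name assumed)
python -m llava.eval.model_vqa \
    --model-path liuhaotian/llava-v1.5-13b \
    --question-file path/to/questions.jsonl \
    --image-folder path/to/images \
    --answers-file answer-file-our.jsonl

# 2. Ask GPT-4 to score our answers against the reference answers (flags assumed)
OPENAI_API_KEY="sk-..." python llava/eval/eval_gpt_review_visual.py \
    --question path/to/questions.jsonl \
    --answer-list answer-file-ref.jsonl answer-file-our.jsonl \
    --output review.jsonl

# 3. Summarize the evaluation results
python llava/eval/summarize_gpt_review.py -f review.jsonl
```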
6. Model Zoo
To use LLaVA-1.5 checkpoints, your llava package version must be newer than 1.1.0. See the upgrade instructions above.
If you are interested in including any other details in the Model Zoo, please open an issue :)
The model weights below are merged weights. You do not need to apply delta. The usage of LLaVA checkpoints should comply with the base LLM's model license: Llama 2.
LLaVA-v1.5
LLaVA-v1
Note: We recommend using the most capable LLaVA-v1.5 series above for the best performance.
Projector weights
These are projector weights we have pretrained. You can use these projector weights for visual instruction tuning. They are only pretrained on image-text pairs and are NOT instruction tuned, which means they do NOT follow instructions as well as our official models and can produce repetitive, lengthy, and garbled outputs. If you want to have nice conversations with LLaVA, use the checkpoints above (LLaVA v1.5).
NOTE: These projector weights are only compatible with `llava>=1.0.0`. Please check out the latest codebase if your local code version is below v1.0.0.
NOTE: When you use our pretrained projector for visual instruction tuning, it is very important to use the same base LLM and vision encoder as the one we used for pretraining the projector. Otherwise, the performance will be very bad.
When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,
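The concrete option values are listed per projector in the Model Zoo. As a hedged example only (these two flags exist in the training scripts, but the correct values depend on which projector you downloaded, so verify them against the Model Zoo entry):

```bash
# Flags to match the projector's pretraining setup (values shown are an example;
# take the actual values from the Model Zoo entry for your projector)
--mm_use_im_start_end False \
--mm_use_im_patch_token False
```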
Science QA Checkpoints
Legacy Models (merged weights)
The model weights below are merged weights. You do not need to apply delta. The usage of LLaVA checkpoints should comply with the base LLM's model license.
Legacy Models (delta weights)
The model weights below are delta weights. The usage of LLaVA checkpoints should comply with the base LLM's model license: LLaMA.
You can add our delta to the original LLaMA weights to obtain the LLaVA weights.
Instructions:
Get the original LLaMA weights in the huggingface format by following the instructions here.
Use the following scripts to get LLaVA weights by applying our delta. It will automatically download delta weights from our Hugging Face account. In the script below, we use the delta weights of `liuhaotian/LLaVA-7b-delta-v0` as an example. It can be adapted for other delta weights by changing the `--delta` argument (and base/target accordingly).
Legacy Projector weights
The following projector weights are deprecated, and the support for them may be removed in the future. They do not support zero-shot inference. Please use the projector weights in the table above if possible.
NOTE: When you use our pretrained projector for visual instruction tuning, it is very important to use the same base LLM and vision encoder as the one we used for pretraining the projector. Otherwise, the performance will be very bad.
When using these projector weights to instruction tune your LMM, please make sure that these options are correctly set as follows,
7. Dataset Overview
7.1 Pretraining Dataset
The pretraining dataset used in this release is a subset of CC-3M dataset, filtered with a more balanced concept coverage distribution. Please see here for a detailed description of the dataset structure and how to download the images.
If you already have the CC-3M dataset on your disk, the image names follow this format: `GCC_train_000000000.jpg`. You may edit the `image` field correspondingly if necessary.
Important notice: Upon the request from the community, as ~15% of images in the original CC-3M dataset are no longer accessible, we upload `images.zip` for better reproducing our work in the research community. It must not be used for any other purposes. The use of these images must comply with the CC-3M license. This may be taken down at any time when requested by the original CC-3M dataset owner or the owners of the referenced images.
7.2 GPT-4 Prompts
We provide our prompts and few-shot samples for GPT-4 queries, to better facilitate research in this domain. Please check out the `prompts` folder for three kinds of questions: conversation, detail description, and complex reasoning.
They are organized in the format of `system_message.txt` for the system message, `abc_caps.txt` for the few-shot sample user input, and `abc_conv.txt` for the few-shot sample reference output.
Note that they may be in different formats. For example, conversation is in `json`, while detail description is answer-only. The format we selected in our preliminary experiments works slightly better than the limited set of alternatives we tried: `json`, a more natural format, and answer-only. If interested, you may try other variants or conduct a more careful study of this. Contributions are welcome!
Copyright notice: This article is an original work by InfoQ author 汀丶人工智能.
Original link: http://xie.infoq.cn/article/573ee1b46e67b540c82d4d572.
This article is licensed under CC-BY 4.0. Please retain the link to the original source and this copyright notice when reposting.