Large Model Systems Organization 07月17日 00:49
How to support new VLMs into SGLang: A Case Study with NVILA
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

本文介绍了如何将新的视觉语言模型(VLM)集成到 SGLang 生态系统中,重点关注了 NVILA 模型的案例研究。通过 SGLang 的 RadixAttention 和内存优化,NVILA 在吞吐量和首次标记时间方面实现了显著的性能提升。文章详细阐述了 VLM 的工作原理,并提供了分步指南,包括模型注册、聊天模板匹配、多模态数据处理器构建以及核心模型定义,使 VLM 能够在 SGLang 中高效运行。

👁️ NVILA 是一种高效的视觉语言模型(VLM),它通过“先扩展后压缩”的策略来提高效率。它使用高分辨率的 SigLIP 视觉编码器来捕捉图像细节,并通过空间和时间池化来减少计算量,从而实现快速处理。

⚙️ 在 SGLang 中支持新模型需要几个关键步骤。首先,需要在配置文件中将模型注册为多模态模型。其次,为模型注册一个合适的聊天模板,以便正确处理包含图像和文本的提示。

💻 多模态数据处理器是关键,它负责将用户输入(文本和图像)转化为模型可处理的格式。这个处理器包括加载和处理数据、标记文本、转换图像以及组装最终数据包。

🚀 SGLang 提供了 RadixAttention 等优化,极大地提高了 VLM 的性能。例如,在并发为 8 的情况下,SGLang 的吞吐量提高了 4.4 倍以上,首次标记时间缩短了 2.2 倍。

The world of LLMs is evolving at a remarkable pace, with Visual Language Models (VLMs) at the forefront of this revolution. These models power applications that can understand and reason about both images and text. There are tons of new VLM models emerging daily, and we want to integrate them into SGLang to leverage its high-speed throughput. Today, we’ll provide a step-by-step walkthrough for integrating new VLMs into the SGLang ecosystem, using the recent NVILA model as a real-world case study.

Accelerating the NVILA Visual Language Model with SGLang

The benchmarks below compare the original VILA implementation against SGLang with different levels of concurrency

In real world VLM development, we focus on two important metrics to evaluate a serving systems’ performance: Throughput (Token per Second, TPS) and and Time to First Token (TTFT).

    For TPS, higher throughput means the system can generate more tokens simultaneously . SGLang's RadixAttention allows for efficient batching of requests, dramatically increasing the number of tokens generated per second. With a concurrency of 8, SGLang achieves over 4.4x higher throughput.For TTFT, a lower value means users get a faster response to receive the first token. SGLang's memory optimizations and efficient kernel implementations significantly reduce prefill latency. The benchmark shows SGLang responses up to 2.2x faster when concurrency is 8.

These performance gains make SGLang an excellent choice for deploying demanding VLMs like NVILA in production environments, and now can be easily deployed for sglang version ≥ 0.4.8

python3 -m sglang.launch_server \  --model-path Efficient-Large-Model/NVILA-Lite-2B-hf-0626 \  --host 0.0.0.0 --port 30000 --trust-remote-code \  --attention-backend fa3

The Big Picture: How VLMs like NVILA Work

Before diving into code, it helps to understand what is happening under the hood. Most Vision–Language Models (llava-based) share a three-component architecture:

    Vision encoder – a convolutional network or, more commonly, a Vision Transformer (ViT) that converts pixels into a sequence of visual tokens.Projector – a lightweight adapter (often an MLP) that aligns those tokens with the embedding space of a pretrained language model.Token processor – the language model itself, which mixes visual and textual tokens and autoregressively generates the answer.

NVILA follows the paradigm while pushing the efficiency frontier through its scale-then-compress strategy. It first scales the spatial and temporal resolution so the vision tower sees high-fidelity images or long videos, then compresses the resulting token stream so the language model only attends to a handful of high-information tokens:

    High-resolution SigLIP vision tower captures fine-grained details from images and multi-frame clips.Spatial token pooling reduces dense patches to a much smaller grid, preserving text and structure while cutting quadratic attention cost.Temporal pooling keeps just the most informative frames, so video inputs add only a few extra tokens.Two-layer MLP projector maps the compressed visual embeddings into LLM’s embedding space.FP8 training, dataset pruning, and memory-efficient fine-tuning trim training cost by up to 5× and deliver sub-second prefilling on a single 4090 GPU.

NVILA Architecture

This blueprint is generic enough to cover most modern VLMs, yet the compression blocks highlighted in green are what make NVILA both accurate and fast at scale.

Supporting New Models in SGLang

SGLang has a streamlined process for adding new models, which centers on creating a single model definition file that utilizes SGLang’s core components. The key is to replace standard Hugging Face components with their SGLang-optimized counterparts. For VLMs, this involves a few steps.

Step 1: Register the Model as Multimodal

First, SGLang needs to identify your model as multimodal. This is handled in sglang/srt/configs/model_config.py. SGLang determines this by checking if the model’s architecture class name is present in the multimodal_model_archs list.

For NVILA, we add "VILAForConditionalGeneration" to this list. This tells SGLang that any model with this architecture should be treated as a multimodal model.

multimodal_model_archs = [        "VILAForConditionalGeneration",]

Step 2: Register a New Chat Template

VLMs often require specific chat templates to handle prompts containing both images and text. SGLang allows you to either define a new template or, more conveniently, match your model to an existing one.

For NVILA, which uses a format similar to ChatML, we can reuse the existing chatml conversation template. To associate our model with this template, we register a matching function in python/sglang/srt/conversation.py. This function, match_vila, inspects the model path and returns "chatml" if it finds a match, telling SGLang to apply the ChatML format for NVILA models.

@register_conv_template_matching_functiondef match_vila(model_path: str):        if re.search(r"vila", model_path, re.IGNORECASE):        return "chatml"

This registration instructs SGLang on how to format the input string and where to expect image data.

To make this concrete, let’s examine the template structure that chatml provides and how it aligns with NVILA’s needs.

Understanding the ChatML Template for NVILA

The ChatML format, which NVILA uses, employs special tokens to structure the conversation. For a multimodal prompt, the template ensures that both text and images are correctly placed.

A typical conversation is a sequence of messages, each with a role (system, user, or assistant) and content. Here’s how a user’s query with an image is formatted:

Example User Message:

{  "role": "user",  "content": [    { "type": "text", "text": "What is in this image?" },    { "type": "image" }  ]}

Resulting Prompt String:

The template processor converts this into a single string for the model:

<|im_start|>systemYou are a helpful assistant.<|im_end|><|im_start|>userWhat is in this image?). In this case, MultiModalityDataPaddingPatternTokenPairs is the correct choice. This pattern identifies the content between the start and end tokens and replaces it with the image’s unique hash.

If a model used this pattern, the implementation would look like this:

def pad_input_ids(    self,    input_ids: List[int],    image_inputs: MultimodalInputs,) -> List[int]:        pattern = MultiModalityDataPaddingPatternTokenPairs(        data_token_pairs=[            (self.config.image_start_token_id, self.config.image_end_token_id)        ],    )    return pattern.pad_input_tokens(input_ids, image_inputs)

By selecting the appropriate padding pattern, you ensure that SGLang’s engine can correctly interpret the multimodal structure of your model’s input.

Handling Image Features

Once the input sequence is padded, SGLang’s engine must fetch the actual image features to substitute for the placeholder hashes. This is handled by the get_image_feature method. Its job is to take the raw image data (pixel_values) and generate embeddings that can be combined with the text embeddings.

This process for VILA involves a few steps:

    The pixel_values are sent to the vision_tower, which is a pre-trained vision encoder (e.g., a Vision Transformer). This extracts a rich feature representation from the image.The features are then passed through the mm_projector, a small network that aligns the vision features with the language model’s embedding space. The resulting image_embedding is then ready to be used by the model.

This function is called by mm_utils.general_mm_embed_routine, a utility in SGLang that manages the process of replacing placeholder hashes with these computed image embeddings before feeding them to the main language model.

Here is the implementation in python/sglang/srt/models/vila.py:

def get_image_feature(self, mm_input: List[MultimodalDataItem]) -> Tensor:    pixel_values = cast(Tensor, mm_input[0].pixel_values)    vision_tower_output: BaseModelOutputWithPooling = self.vision_tower.__call__(        pixel_values.to(            device=self.vision_tower.device, dtype=self.vision_tower.dtype        ),        output_hidden_states=True,    )    mm_projector_input = self._vision_tower_output_to_mm_projector_input(        vision_tower_output    )    image_embedding: Tensor = self.mm_projector.__call__(        mm_projector_input.to(            device=self.mm_projector.device, dtype=self.mm_projector.dtype        )    )    return image_embedding

This modular design ensures that image features are computed and seamlessly integrated into the language model’s input stream.

Defining the forward pass

The forward method is adapted to work with SGLang’s batching strategy. It takes the combined text and images and processes them through the decoder layers using RadixAttention.

def forward(    self,    input_ids: Tensor,    positions: Tensor,    forward_batch: ForwardBatch,    get_embedding: bool = False,) -> LogitsProcessorOutput:    output = mm_utils.general_mm_embed_routine(        input_ids=input_ids,        forward_batch=forward_batch,        language_model=self.llm,        image_data_embedding_func=self.get_image_feature,        get_embedding=get_embedding,        positions=positions,    )    return cast(LogitsProcessorOutput, output)

Implementing load_weights

Because SGLang uses custom-optimized layers, the load_weights function is responsible for carefully mapping and sometimes transforming weights from a Hugging Face checkpoint to fit the new model structure. This process is highly dependent on the model’s implementation in Transformers.

To load weights incrementally, we recommend using the load_weights method on the submodules. For other weights, weight_utils.default_weight_loader can be used.

def load_weights(self, weights: Iterable[Tuple[str, Tensor]]) -> None:    params_dict = dict(self.named_parameters())    for name, loaded_weight in weights:        if name.startswith("llm."):            self.llm.load_weights([(name[len("llm.") :], loaded_weight)])        else:            param = params_dict[name]            weight_loader = getattr(                param, "weight_loader", weight_utils.default_weight_loader            )            weight_loader(param, loaded_weight)

Finally, an EntryClass is added at the end of the file to tell the SGLang server which class is the main entry point for the model.

EntryClass = [VILAForConditionalGeneration]

Step 5: Add Integration Tests

No integration is complete without thorough testing. It is a best practice to validate the new model implementation in two key ways. For NVILA, we added a test case in test/srt/test_vision_openai_server_b.py.

Conclusion

Integrating a cutting-edge VLM like NVILA into a high-performance serving engine like SGLang is a detailed yet well-defined process. By replacing key components with SGLang’s optimized versions like RadixAttention, you can serve these powerful models with maximum efficiency and unlock advanced features like multi-user batching.

The key steps are:

    Configuration: Registering the model’s multimodal nature and its chat template.Data Handling: Creating a dedicated processor to manage image and text inputs.Model Definition: Porting the architecture, replacing standard layers with SGLang’s optimized versions, and correctly handling multimodal inputs.Testing: Rigorously verifying the implementation against reference outputs and adding integration tests.

We hope this detailed walkthrough has demystified the process and encourages you to contribute to the exciting open-source development happening at SGLang.

Acknowledgements

We thank all contributors for their efforts in developing and integrating NVILA into SGLang.

NVILA Team: Zijian Zhang, Ligeng Zhu

SGLang Team: Mick Qian, Xinyuan Tong, Qiujiang Chen, Xinpeng Wei, Chenyang Zhao


Further Reading:

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

SGLang NVILA 视觉语言模型 VLM 模型部署
相关文章