The world of LLMs is evolving at a remarkable pace, with Visual Language Models (VLMs) at the forefront of this revolution. These models power applications that can understand and reason about both images and text. New VLMs emerge almost daily, and we want to integrate them into SGLang to take advantage of its high-throughput serving. Today, we'll provide a step-by-step walkthrough for integrating new VLMs into the SGLang ecosystem, using the recent NVILA model as a real-world case study.
Accelerating the NVILA Visual Language Model with SGLang
The benchmarks below compare the original VILA implementation against SGLang at different levels of concurrency.
In real-world VLM development, we focus on two key metrics to evaluate a serving system's performance: throughput (tokens per second, TPS) and time to first token (TTFT).
- For TPS, higher throughput means the system can generate more tokens simultaneously. SGLang's `RadixAttention` allows for efficient batching of requests, dramatically increasing the number of tokens generated per second. With a concurrency of 8, SGLang achieves over 4.4x higher throughput.
- For TTFT, a lower value means users receive the first token faster. SGLang's memory optimizations and efficient kernel implementations significantly reduce prefill latency. The benchmark shows SGLang responding up to 2.2x faster at a concurrency of 8.

These performance gains make SGLang an excellent choice for deploying demanding VLMs like NVILA in production environments. NVILA can now be deployed easily with SGLang version ≥ 0.4.8:
```bash
python3 -m sglang.launch_server \
  --model-path Efficient-Large-Model/NVILA-Lite-2B-hf-0626 \
  --host 0.0.0.0 --port 30000 --trust-remote-code \
  --attention-backend fa3
```
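Once the server is up, it exposes an OpenAI-compatible API. Here is a minimal sketch of a multimodal request using the official `openai` Python client; the image URL is a placeholder you would replace with your own:

```python
from openai import OpenAI

# SGLang serves an OpenAI-compatible API; any api_key value is accepted locally.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="none")

response = client.chat.completions.create(
    model="Efficient-Large-Model/NVILA-Lite-2B-hf-0626",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                # Placeholder URL: point this at a real image.
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```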
The Big Picture: How VLMs like NVILA Work
Before diving into the code, it helps to understand what is happening under the hood. Most Vision–Language Models (LLaVA-style) share a three-component architecture:
- Vision encoder – a convolutional network or, more commonly, a Vision Transformer (ViT) that converts pixels into a sequence of visual tokens.
- Projector – a lightweight adapter (often an MLP) that aligns those tokens with the embedding space of a pretrained language model.
- Token processor – the language model itself, which mixes visual and textual tokens and autoregressively generates the answer.
NVILA follows this paradigm while pushing the efficiency frontier through its scale-then-compress strategy. It first scales up spatial and temporal resolution so the vision tower sees high-fidelity images or long videos, then compresses the resulting token stream so the language model only attends to a handful of high-information tokens:
- A high-resolution SigLIP vision tower captures fine-grained details from images and multi-frame clips.
- Spatial token pooling reduces dense patches to a much smaller grid, preserving text and structure while cutting the quadratic attention cost (a short schematic sketch follows below).
- Temporal pooling keeps just the most informative frames, so video inputs add only a few extra tokens.
- A two-layer MLP projector maps the compressed visual embeddings into the LLM's embedding space.
- FP8 training, dataset pruning, and memory-efficient fine-tuning trim training cost by up to 5× and deliver sub-second prefilling on a single 4090 GPU.
NVILA Architecture
This blueprint is generic enough to cover most modern VLMs, yet the compression blocks highlighted in green are what make NVILA both accurate and fast at scale.
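To make the spatial token pooling idea concrete, here is a schematic sketch (not NVILA's actual code): average-pooling the ViT patch grid with a 2×2 kernel cuts the visual token count by 4× before the tokens reach the projector. The grid size and hidden dimension below are illustrative.

```python
import torch
import torch.nn.functional as F

def spatial_pool(visual_tokens: torch.Tensor, grid: int, kernel: int = 2) -> torch.Tensor:
    """Average-pool a (batch, grid*grid, dim) visual token sequence over
    kernel x kernel windows, returning (batch, (grid/kernel)**2, dim)."""
    b, n, d = visual_tokens.shape
    assert n == grid * grid, "tokens must form a square patch grid"
    x = visual_tokens.reshape(b, grid, grid, d).permute(0, 3, 1, 2)  # (b, d, grid, grid)
    x = F.avg_pool2d(x, kernel)                                      # (b, d, grid/k, grid/k)
    return x.flatten(2).transpose(1, 2)                              # (b, (grid/k)^2, d)

# Illustrative numbers: a 32x32 patch grid (1024 tokens) pools down to 256 tokens.
tokens = torch.randn(1, 32 * 32, 1152)
print(spatial_pool(tokens, grid=32).shape)  # torch.Size([1, 256, 1152])
```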
Supporting New Models in SGLang
SGLang has a streamlined process for adding new models, which centers on creating a single model definition file that utilizes SGLang’s core components. The key is to replace standard Hugging Face components with their SGLang-optimized counterparts. For VLMs, this involves a few steps.
Step 1: Register the Model as Multimodal
First, SGLang needs to identify your model as multimodal. This is handled in `sglang/srt/configs/model_config.py`. SGLang determines this by checking whether the model's architecture class name is present in the `multimodal_model_archs` list.
For NVILA, we add `"VILAForConditionalGeneration"` to this list. This tells SGLang that any model with this architecture should be treated as a multimodal model.
```python
multimodal_model_archs = [
    "VILAForConditionalGeneration",
]
```
Step 2: Register a New Chat Template
VLMs often require specific chat templates to handle prompts containing both images and text. SGLang allows you to either define a new template or, more conveniently, match your model to an existing one.
For NVILA, which uses a format similar to ChatML, we can reuse the existing `chatml` conversation template. To associate our model with this template, we register a matching function in `python/sglang/srt/conversation.py`. This function, `match_vila`, inspects the model path and returns `"chatml"` if it finds a match, telling SGLang to apply the ChatML format for NVILA models.
```python
@register_conv_template_matching_function
def match_vila(model_path: str):
    if re.search(r"vila", model_path, re.IGNORECASE):
        return "chatml"
```
This registration instructs SGLang on how to format the input string and where to expect image data.
To make this concrete, let's examine the template structure that `chatml` provides and how it aligns with NVILA's needs.
Understanding the ChatML Template for NVILA
The ChatML format, which NVILA uses, employs special tokens to structure the conversation. For a multimodal prompt, the template ensures that both text and images are correctly placed.
A typical conversation is a sequence of messages, each with a `role` (`system`, `user`, or `assistant`) and `content`. Here's how a user's query with an image is formatted:
Example User Message:
{ "role": "user", "content": [ { "type": "text", "text": "What is in this image?" }, { "type": "image" } ]}
Resulting Prompt String:
The template processor converts this into a single string for the model:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is in this image?<image><|im_end|>
<|im_start|>assistant
```
Here’s a breakdown of the components:
- `<|im_start|>{role}` and `<|im_end|>`: these tokens define the boundaries of each message.
- A default system prompt is automatically added if not provided.
- Text content is included as-is.
- Image content is replaced with a special `<image>` placeholder token.

By matching NVILA to the `chatml` template, SGLang automatically handles this formatting. The engine knows to replace the `<image>` placeholder with the actual image features during processing, a process we will cover in Step 3.
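In practice you rarely construct this string by hand. Assuming the NVILA checkpoint ships a chat template and its processor exposes the standard Hugging Face `apply_chat_template` API (a sketch, not verified against the actual remote-code processor), the formatting can be reproduced like this:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "Efficient-Large-Model/NVILA-Lite-2B-hf-0626", trust_remote_code=True
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image"},
        ],
    }
]

# tokenize=False returns the raw prompt string shown above;
# add_generation_prompt=True appends the trailing "<|im_start|>assistant".
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```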
If your model requires a completely new chat template, you can define it directly in the same file. For example, if NVILA had used a unique format, we would create a new conversation template:
```python
register_conv_template(
    Conversation(
        name="vila",
        system_template="<|im_start|>system\n{system_message}",
        system_message="You are a helpful assistant.",
        roles=("<|im_start|>user", "<|im_start|>assistant"),
        sep_style=SeparatorStyle.CHATML,
        sep="<|im_end|>",
        stop_str=["<|endoftext|>", "<|im_end|>"],
    )
)
```
Step 3: Building the Multimodal Data Processor
The data processor is the heart of the multimodal integration. It acts as the bridge that takes a user's raw input (a mix of text and images) and meticulously prepares it for the model. For our NVILA integration, we'll create a `VILAMultimodalProcessor` in `python/sglang/srt/managers/multimodal_processors/vila.py`. See here for the full diff.
This class inherits from SGLang's `BaseMultimodalProcessor`, and its main workhorse is the `process_mm_data_async` method. Let's build this method step-by-step to see how it transforms raw data into a model-ready format.
The Processor’s Skeleton
First, define the class. A crucial component is the `models` class attribute, which links the processor to its corresponding SGLang model class. This registration allows SGLang to recognize the associated processor when loading `VILAForConditionalGeneration`. The `__init__` method is also updated to cache special token IDs like `image_token_id` directly from the model's configuration. The processor's main method, `process_mm_data_async`, then takes the raw request details and returns the processed data that the SGLang engine can understand.
```python
class VILAMultimodalProcessor(BaseMultimodalProcessor):
    models: List[Type[nn.Module]] = [VILAForConditionalGeneration]

    _processor: VILAProcessor

    def __init__(
        self,
        hf_config: PretrainedConfig,
        server_args: ServerArgs,
        _processor: VILAProcessor,
    ) -> None:
        super().__init__(hf_config, server_args, _processor)
        self.IM_TOKEN_ID = hf_config.image_token_id
        self.VIDEO_TOKEN_ID = hf_config.video_token_id

    async def process_mm_data_async(
        self,
        image_data: Optional[ImageDataItem | List[ImageDataItem]],
        input_text: str | List[int],
        request_obj: GenerateReqInput | EmbeddingReqInput,
        max_req_input_len: int,
        **kwargs,
    ) -> Optional[Dict[str, Any]]:
        ...  # implementation shown below
```
From Raw Input to Processed Data
Inside `process_mm_data_async`, a streamlined pipeline handles the inputs.
- Load and Process: The logic consolidates data handling. We first call `self.load_mm_data` from the base class to load image files and locate the special `<image>` tokens in the prompt. This output is then passed to `self.process_and_combine_mm_data`, a method that encapsulates the logic for tokenizing text, transforming images, and packaging the data.
- Assemble the Final Package: This process directly produces the final dictionary for the SGLang engine. This dictionary contains the tokenized `input_ids`, the assembled `mm_items` (which include processed image data and their locations), and explicitly includes the `im_token_id` and `video_token_id`.

Here's the complete implementation of `process_mm_data_async`:
```python
async def process_mm_data_async(
    self,
    image_data: Optional[ImageDataItem | List[ImageDataItem]],
    input_text: str | List[int],
    request_obj: GenerateReqInput | EmbeddingReqInput,
    max_req_input_len: int,
    **kwargs,
) -> Optional[Dict[str, Any]]:
    base_output = self.load_mm_data(
        prompt=input_text,
        multimodal_tokens=MultimodalSpecialTokens(
            image_token=self._processor.tokenizer.image_token
        ),
        max_req_input_len=max_req_input_len,
        image_data=image_data,
    )

    mm_items, input_ids = self.process_and_combine_mm_data(base_output)

    return {
        "input_ids": input_ids.tolist(),
        "mm_items": mm_items,
        "im_token_id": self.IM_TOKEN_ID,
        "video_token_id": self.VIDEO_TOKEN_ID,
    }
```
And that’s it! This clean design centralizes the core processing logic, making the processor more maintainable and easier to extend.
Step 4: Create the Core Model Definition
The final major step is to define the model's main class in a new file, `python/sglang/srt/models/vila.py`. This involves porting the Hugging Face model by replacing standard layers with SGLang's high-performance equivalents.
Adapting Attention Mechanisms
To unlock maximum performance, standard attention layers are replaced:
- Language Model Attention: The original `Attention` layers are replaced with SGLang's `RadixAttention`. This is the key to SGLang's high-performance batching and memory management.
- Vision Model Attention: `Attention` layers in the Vision Transformer (ViT) are replaced with SGLang's `VisionAttention` to manage the vision tower's KV cache.

NVILA uses Qwen2 as its language model, so we directly adapt SGLang's `Qwen2ForCausalLM` class and do not need to manually replace the attention layers. See the diff here.
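Putting these pieces together, the model class composes the three components described earlier. The sketch below shows the rough shape of the constructor; the attribute names match those used later in this post (`vision_tower`, `mm_projector`, `llm`), but the helper functions and signatures are simplified illustrations, not the exact SGLang source.

```python
class VILAForConditionalGeneration(nn.Module):
    def __init__(self, config: PretrainedConfig) -> None:
        super().__init__()
        self.config = config
        # SigLIP-based vision encoder; its attention layers use SGLang's
        # VisionAttention instead of the stock Hugging Face Attention.
        self.vision_tower = build_vision_tower(config)  # hypothetical helper
        # Two-layer MLP projecting visual features into the LLM embedding space.
        self.mm_projector = build_mm_projector(config)  # hypothetical helper
        # Reuse SGLang's Qwen2ForCausalLM, which already uses RadixAttention.
        self.llm = Qwen2ForCausalLM(config.text_config)  # signature simplified
```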
Handling Multimodal Inputs with `pad_input_ids`

The `pad_input_ids` function is a critical component for handling multimodal inputs in SGLang. Its primary role is to prepare the input token sequence for `RadixAttention`. It achieves this by replacing placeholder tokens for images (or other modalities) with unique multimodal data hashes. These hashes, or `pad_value`s, allow `RadixAttention` to correctly associate segments of the input with the corresponding image data, even when processing multiple requests in a single batch. This is essential for efficient multi-user serving.
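As a toy illustration of what this padding achieves (this is not the actual SGLang implementation), consider replacing every placeholder token in a run with the hash of the image occupying that run. Two requests with identical text but different images then diverge in the padded region, so `RadixAttention` will not incorrectly share cached KV entries across them:

```python
IMAGE_TOKEN_ID = 151649  # hypothetical placeholder token id

def pad_image_tokens(input_ids: list[int], pad_values: list[int]) -> list[int]:
    """Replace each contiguous run of image placeholder tokens with the
    hash (pad_value) of the corresponding image, preserving sequence length."""
    out, image_idx, in_run = [], 0, False
    for tok in input_ids:
        if tok == IMAGE_TOKEN_ID:
            out.append(pad_values[image_idx])
            in_run = True
        else:
            if in_run:
                image_idx += 1  # the previous image's run has ended
                in_run = False
            out.append(tok)
    return out

# Same prompt, different image hashes -> different padded sequences.
print(pad_image_tokens([1, 2, IMAGE_TOKEN_ID, IMAGE_TOKEN_ID, 3], [987654321]))
print(pad_image_tokens([1, 2, IMAGE_TOKEN_ID, IMAGE_TOKEN_ID, 3], [123456789]))
```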
SGLang provides several padding-pattern helper classes to accommodate the different ways models represent multimodal inputs. The choice of pattern depends on how the model's tokenizer and chat template are designed.
Pattern 1: Contiguous Multimodal Tokens
Some models, like NVILA and Kimi-VL, represent an image with a contiguous block of special tokens (e.g., `<image><image>...<image>`). For these models, we use `MultiModalityDataPaddingPatternMultimodalTokens`. The `pad_input_ids` function signature is generalized to accept `mm_inputs`, which can contain various modalities. For NVILA, the implementation is straightforward and concise:
```python
def pad_input_ids(
    self,
    input_ids: List[int],
    mm_inputs: MultimodalInputs,
) -> List[int]:
    pattern = MultiModalityDataPaddingPatternMultimodalTokens()
    return pattern.pad_input_tokens(input_ids, mm_inputs)
```
Pattern 2: Token Pairs
Other models, like InternVL and MiniCPM-o, use a pair of special tokens to mark the start and end of a region where image features should be inserted (e.g., `<image>...</image>`). In this case, `MultiModalityDataPaddingPatternTokenPairs` is the correct choice. This pattern identifies the content between the start and end tokens and replaces it with the image's unique hash.
If a model used this pattern, the implementation would look like this:
```python
def pad_input_ids(
    self,
    input_ids: List[int],
    image_inputs: MultimodalInputs,
) -> List[int]:
    pattern = MultiModalityDataPaddingPatternTokenPairs(
        data_token_pairs=[
            (self.config.image_start_token_id, self.config.image_end_token_id)
        ],
    )
    return pattern.pad_input_tokens(input_ids, image_inputs)
```
By selecting the appropriate padding pattern, you ensure that SGLang’s engine can correctly interpret the multimodal structure of your model’s input.
Handling Image Features
Once the input sequence is padded, SGLang's engine must fetch the actual image features to substitute for the placeholder hashes. This is handled by the `get_image_feature` method. Its job is to take the raw image data (`pixel_values`) and generate embeddings that can be combined with the text embeddings.
This process for VILA involves a few steps:
- The `pixel_values` are sent to the `vision_tower`, a pre-trained vision encoder (e.g., a Vision Transformer), which extracts a rich feature representation from the image.
- The features are then passed through the `mm_projector`, a small network that aligns the vision features with the language model's embedding space. The resulting `image_embedding` is then ready to be used by the model.
- This function is called by `mm_utils.general_mm_embed_routine`, a utility in SGLang that manages the process of replacing placeholder hashes with these computed image embeddings before feeding them to the main language model.
Here is the implementation in `python/sglang/srt/models/vila.py`:
```python
def get_image_feature(self, mm_input: List[MultimodalDataItem]) -> Tensor:
    pixel_values = cast(Tensor, mm_input[0].pixel_values)

    vision_tower_output: BaseModelOutputWithPooling = self.vision_tower.__call__(
        pixel_values.to(
            device=self.vision_tower.device, dtype=self.vision_tower.dtype
        ),
        output_hidden_states=True,
    )

    mm_projector_input = self._vision_tower_output_to_mm_projector_input(
        vision_tower_output
    )

    image_embedding: Tensor = self.mm_projector.__call__(
        mm_projector_input.to(
            device=self.mm_projector.device, dtype=self.mm_projector.dtype
        )
    )

    return image_embedding
```
This modular design ensures that image features are computed and seamlessly integrated into the language model’s input stream.
Defining the `forward` Pass

The `forward` method is adapted to work with SGLang's batching strategy. It takes the combined text and image inputs and processes them through the decoder layers using `RadixAttention`.
```python
def forward(
    self,
    input_ids: Tensor,
    positions: Tensor,
    forward_batch: ForwardBatch,
    get_embedding: bool = False,
) -> LogitsProcessorOutput:
    output = mm_utils.general_mm_embed_routine(
        input_ids=input_ids,
        forward_batch=forward_batch,
        language_model=self.llm,
        image_data_embedding_func=self.get_image_feature,
        get_embedding=get_embedding,
        positions=positions,
    )

    return cast(LogitsProcessorOutput, output)
```
Implementing `load_weights`

Because SGLang uses custom-optimized layers, the `load_weights` function is responsible for carefully mapping, and sometimes transforming, weights from a Hugging Face checkpoint to fit the new model structure. This process is highly dependent on the model's implementation in Transformers.

To load weights incrementally, we recommend using the `load_weights` method on the submodules. For other weights, `weight_utils.default_weight_loader` can be used.
```python
def load_weights(self, weights: Iterable[Tuple[str, Tensor]]) -> None:
    params_dict = dict(self.named_parameters())

    for name, loaded_weight in weights:
        if name.startswith("llm."):
            self.llm.load_weights([(name[len("llm.") :], loaded_weight)])
        else:
            param = params_dict[name]
            weight_loader = getattr(
                param, "weight_loader", weight_utils.default_weight_loader
            )
            weight_loader(param, loaded_weight)
```
Finally, an `EntryClass` is added at the end of the file to tell the SGLang server which class is the main entry point for the model.
```python
EntryClass = [VILAForConditionalGeneration]
```
Step 5: Add Integration Tests
No integration is complete without thorough testing. It is a best practice to validate a new model implementation in two key ways: verifying its outputs against the reference implementation, and adding an end-to-end test against the serving API. For NVILA, we added a test case in `test/srt/test_vision_openai_server_b.py`.
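The sketch below illustrates the shape of such a test under simplified assumptions: it assumes a server is already running locally (the real test file launches one via SGLang's shared test harness) and reuses the OpenAI-compatible client pattern shown earlier. Class names and URLs are illustrative, not the exact contents of the test file.

```python
import unittest

from openai import OpenAI

class TestNVILAVision(unittest.TestCase):
    """Illustrative integration test; assumes an NVILA server is running
    on localhost:30000 (see the launch command at the top of this post)."""

    def test_single_image_chat(self):
        client = OpenAI(base_url="http://localhost:30000/v1", api_key="none")
        response = client.chat.completions.create(
            model="Efficient-Large-Model/NVILA-Lite-2B-hf-0626",
            messages=[
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": "Describe this image in one sentence."},
                        # Placeholder URL: a real test would pin a fixed test image.
                        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                    ],
                }
            ],
            temperature=0,
        )
        # Sanity check: the model should return a non-empty description.
        self.assertTrue(response.choices[0].message.content.strip())

if __name__ == "__main__":
    unittest.main()
```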
Conclusion
Integrating a cutting-edge VLM like NVILA into a high-performance serving engine like SGLang is a detailed yet well-defined process. By replacing key components with SGLang's optimized versions, such as `RadixAttention`, you can serve these powerful models with maximum efficiency and unlock advanced features like multi-user batching.
The key steps are:
- Configuration: Registering the model's multimodal nature and its chat template.
- Data Handling: Creating a dedicated processor to manage image and text inputs.
- Model Definition: Porting the architecture, replacing standard layers with SGLang's optimized versions, and correctly handling multimodal inputs.
- Testing: Rigorously verifying the implementation against reference outputs and adding integration tests.
We hope this detailed walkthrough has demystified the process and encourages you to contribute to the exciting open-source development happening at SGLang.
Acknowledgements
We thank all contributors for their efforts in developing and integrating NVILA into SGLang.
NVILA Team: Zijian Zhang, Ligeng Zhu
SGLang Team: Mick Qian, Xinyuan Tong, Qiujiang Chen, Xinpeng Wei, Chenyang Zhao
Further Reading: