Support VLM calibration with image-text data #755
Conversation
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #755 +/- ##
==========================================
- Coverage 74.13% 73.08% -1.05%
==========================================
Files 192 193 +1
Lines 19263 19583 +320
==========================================
+ Hits 14280 14312 +32
- Misses 4983 5271 +288

☔ View full report in Codecov by Sentry.
So, we only support image quantization for nemotron-vl? If yes, why?

@Edwardf0t1 do you have experiments evaluating the accuracy impact of using the new dataset?

At this time, only Nemotron VL has been tested. We can extend the logic to support other VLMs later. Note that different VLMs may have different forward functions—e.g., the way the vision encoder interacts with the language decoder can vary across models. Do you have a preferred VL model you’d like us to support next? For instance, Qwen3-VL?
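As a side note on the forward-signature differences mentioned above, calibration code can detect at runtime whether a model's forward pass accepts image inputs. A minimal sketch of that idea with a hypothetical helper name (the PR's actual detection logic may differ):

```python
import inspect

def forward_accepts_images(model) -> bool:
    """Illustrative sketch (not code from this PR): check whether a model's
    forward() accepts image inputs, since different VLMs wire the vision
    encoder into the language decoder differently."""
    params = inspect.signature(model.forward).parameters
    # Common parameter names for image tensors across HF VLM implementations.
    return any(name in params for name in ("pixel_values", "images", "image_embeds"))
```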
Tested on two benchmarks, DocVQA and InfoVQA, for Nemotron Nano VL v2 with the vLLM backend:
Image-text calibration is only marginally better in these cases, but the calibration flow in this PR should be ready. The follow-up experiments can be
cfg = SUPPORTED_VLM_DATASET_CONFIG[dataset_name]["config"].copy()
streaming = bool(cfg.pop("streaming", False))

if dataset_name == "nemotron_vlm_dataset_v2":
Should we move this logic to a different function like _get_nemotron_dataset()?
Probably not; I think this logic is small and tightly coupled to _get_vlm_dataset’s flow.
I actually like @ajrasane's suggestion. Moving the following logic to a helper function improves readability.
I generally agree with using helper functions, but not in this case, as the logic is small.
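For reference, a rough sketch of what the suggested extraction could look like. SUPPORTED_VLM_DATASET_CONFIG and the streaming flag come from the snippet above; the helper name, the num_samples parameter, and the use of datasets.load_dataset are illustrative assumptions, not the PR's implementation:

```python
from datasets import load_dataset

def _get_nemotron_dataset(dataset_name: str, num_samples: int):
    """Hypothetical helper discussed above: isolate the Nemotron-specific
    streaming setup from _get_vlm_dataset's main flow."""
    cfg = SUPPORTED_VLM_DATASET_CONFIG[dataset_name]["config"].copy()
    streaming = bool(cfg.pop("streaming", False))

    # Nemotron-VLM-Dataset-v2 is large, so stream shards instead of
    # downloading everything just to draw a few hundred calibration samples.
    dataset = load_dataset(streaming=streaming, **cfg)
    if streaming:
        dataset = dataset.take(num_samples)
    return dataset
```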
Signed-off-by: Zhiyu Cheng <zhiyuc@nvidia.com>
…for Nemotron-VLM-Dataset-v2 Signed-off-by: Zhiyu Cheng <zhiyuc@nvidia.com>
794fdfa to b313f93
📝 Walkthrough

This pull request introduces Vision-Language Model (VLM) calibration support for post-training quantization. It adds new dataset utilities for streaming Nemotron VLM data, implements image-text pair calibration loops, extends the quantization pipeline to handle multimodal models, and includes documentation and helper functions for Nemotron VL model processing.

Sequence Diagram(s)

sequenceDiagram
participant User
participant hf_ptq
participant ModelLoader
participant VLMProcessor
participant DataLoader
participant CalibLoop
participant Quantizer
User->>hf_ptq: Execute with --calib_with_images
hf_ptq->>ModelLoader: load_model() with calib_with_images=True
ModelLoader->>ModelLoader: Detect Nemotron VL model
ModelLoader->>VLMProcessor: Create AutoProcessor
VLMProcessor->>VLMProcessor: Configure padding tokens & side
ModelLoader->>ModelLoader: extract_and_prepare_language_model_from_vl()
ModelLoader->>hf_ptq: Return LM + default_pad_token
hf_ptq->>DataLoader: Load nemotron_vlm_dataset_v2
DataLoader->>DataLoader: Stream tar shards + JSONL
DataLoader->>DataLoader: Match images to messages
DataLoader->>hf_ptq: Yield {id, messages, image}
hf_ptq->>CalibLoop: create_vlm_calibration_loop(model, dataloader)
CalibLoop->>CalibLoop: Inspect model.forward signature
loop Per batch
CalibLoop->>CalibLoop: Extract pixel_values, input_ids, attention_mask
CalibLoop->>CalibLoop: safe_nemotron_vl_forward()
CalibLoop->>CalibLoop: Align vision embeddings with img_context_token_id
CalibLoop->>CalibLoop: Run LM forward (no grad, eval mode)
end
hf_ptq->>Quantizer: quantize_main() with calibrated stats
Quantizer->>hf_ptq: Export quantized LM
hf_ptq->>hf_ptq: Restore tokenizer.pad_token
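To make the "align vision embeddings" step in the diagram concrete, here is a minimal sketch of the general technique: vision-encoder outputs replace the embeddings of image-placeholder tokens before the language-model forward pass. The function name and the assumption that the placeholder count matches the number of vision embeddings are illustrative; the PR's safe_nemotron_vl_forward may handle this differently.

```python
import torch

def splice_vision_embeddings(language_model, vision_embeds, input_ids,
                             attention_mask, img_context_token_id):
    """Sketch only: scatter vision embeddings into the placeholder slots of the
    text embedding sequence, then run the language model for calibration."""
    with torch.no_grad():
        # Embed the text tokens first.
        inputs_embeds = language_model.get_input_embeddings()(input_ids)
        hidden_size = inputs_embeds.shape[-1]

        # Positions holding the image-context placeholder token.
        image_mask = (input_ids == img_context_token_id).reshape(-1)

        # Assumes one vision embedding per placeholder position.
        flat = inputs_embeds.reshape(-1, hidden_size)
        flat[image_mask] = vision_embeds.reshape(-1, hidden_size).to(flat.dtype)
        inputs_embeds = flat.reshape(input_ids.shape + (hidden_size,))

        # Calibration only needs activations, not gradients or generated text.
        return language_model(inputs_embeds=inputs_embeds,
                              attention_mask=attention_mask)
```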
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
| return any("nemotron" in arch.lower() for arch in architectures) | ||
|
|
||
|
|
||
| def create_vlm_calibration_loop(full_model, calib_dataloader): |
Do we want to move it to vlm_dataset_utils.py? In dataset_utils.py we create the LLM calibration loop.
I prefer to keep it here as it’s a PTQ calibration loop tied to the example workflow and nemotron_vl_calib, not a dataset utility. Moving it into vlm_dataset_utils.py would mix concerns (data loading vs. calibration execution) and could introduce awkward dependencies (a core util importing example-only logic).
We may create a new modelopt/torch/utils/vlm_calib_utils.py to host it but I don’t think it’s needed right now.
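For readers following along, here is a rough sketch of the shape such a loop takes. The function name matches the diff, but the body, the batch keys, and how the model is called are assumptions rather than the PR's implementation:

```python
import torch

def create_vlm_calibration_loop(full_model, calib_dataloader):
    """Return a closure that the quantizer can call to run calibration forward
    passes over image-text batches. Sketch only; the real loop in this PR also
    inspects the model's forward signature and uses safe_nemotron_vl_forward."""

    def calibrate_loop(model=None):
        full_model.eval()
        with torch.no_grad():
            for batch in calib_dataloader:
                # Assumed batch keys produced by the VLM processor.
                pixel_values = batch["pixel_values"].to(full_model.device)
                input_ids = batch["input_ids"].to(full_model.device)
                attention_mask = batch.get("attention_mask")
                if attention_mask is not None:
                    attention_mask = attention_mask.to(full_model.device)

                # Drive the multimodal forward so both the vision path and the
                # language decoder see realistic activations during calibration.
                full_model(pixel_values=pixel_values,
                           input_ids=input_ids,
                           attention_mask=attention_mask)

    return calibrate_loop
```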
if args.calib_with_images and is_nemotron_vl_model:
    calibrate_loop = create_vlm_calibration_loop(full_model, calib_dataloader)
else:
    calibrate_loop = create_forward_loop(dataloader=calib_dataloader)
e.g., this is imported from dataset_utils.py.
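For context on how either loop gets used: the closure is handed to ModelOpt's PTQ entry point as the forward_loop. A minimal sketch, assuming the standard mtq.quantize signature and a stock FP8 config (hf_ptq.py actually selects the config based on the requested quantization format):

```python
import modelopt.torch.quantization as mtq

def quantize_with_calibration(model, calibrate_loop):
    # calibrate_loop is the closure built above (VLM or text-only variant).
    # FP8_DEFAULT_CFG is one of ModelOpt's stock quantization configs.
    return mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop=calibrate_loop)
```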
What does this PR do?
Type of change: New feature
Overview:
The primary goal of this PR is to allow the model optimizer to use image-text pair data during the calibration phase of quantization, which is likely to help improve the accuracy of quantized VLMs like Nemotron VL on visual understanding tasks in particular, compared to text-only calibration data.
- Adds calibration support with Nemotron-VLM-Dataset-v2.
- Keeps VLM-specific dataset logic out of the main quantization script (hf_ptq.py) to keep it clean.
- Tested the Nemotron-Nano-VL-12B-V2 model with image data.
- This PR complements #347, and we will consolidate llm_ptq and vlm_ptq examples in follow-up PRs.
Usage
Testing
Before your PR is "Ready for review"
Additional Information
Summary by CodeRabbit
New Features
- --calib_with_images CLI flag to enable image-based calibration workflows.

Documentation