@RissyRan (Collaborator) commented Jan 13, 2026

Description

Main author: @shuningjin

Background

DeepSeek V3.2 differs from DeepSeek V3 solely in the attention mechanism, aiming for efficiency in long-context scenarios. While DeepSeek V3 uses Multi-head Latent Attention (MLA), DeepSeek V3.2 uses DeepSeek Sparse Attention (DSA). DSA augments MLA with two components:

  • Indexer: parametric; computes a qk product to produce index scores
  • Top-k token selection: non-parametric; selects the top-k keys/values for each query, introducing sparsity into qkv attention (see the sketch after this list)
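
To make the flow concrete, here is a minimal JAX sketch of the two components. It is illustrative only: the shapes and names are assumptions, not the MaxText code, and the per-head weighting of index scores follows the public DSA description rather than this PR.

```python
import jax
import jax.numpy as jnp


def dsa_index_mask(q_idx, k_idx, w, topk):
  """Indexer + top-k selection, returning an additive attention mask.

  q_idx: [T, H, D] indexer queries; k_idx: [S, D] indexer keys;
  w: [T, H] per-head index weights (assumed); topk: keys kept per query.
  """
  # Indexer qk product: an index score for every (query, key) pair.
  scores = jnp.einsum("thd,sd->ths", q_idx, k_idx)
  scores = jnp.einsum("ths,th->ts", scores, w)  # combine indexer heads
  # Non-parametric top-k selection of keys/values per query.
  _, top_idx = jax.lax.top_k(scores, topk)  # [T, topk]
  rows = jnp.arange(scores.shape[0])[:, None]
  keep = jnp.zeros(scores.shape, dtype=bool).at[rows, top_idx].set(True)
  # 0 where selected, -inf elsewhere; adding this to the regular (causal)
  # mask inside dot-product attention restores causality and yields sparsity.
  return jnp.where(keep, 0.0, -jnp.inf)
```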

What this PR does

Fix: b/475925910

1. Naive implementation of DeepSeek Sparse Attention (DSA)

  • Indexer:

    • qk product: currently implemented as a plain dot product to obtain index scores. To be optimized.
    • (minor) RoPE: the indexer applies partial RoPE to q and k based on the YaRN extension. It uses the same YaRN frequencies as MLA, but with a concatenated layout rather than an interleaved layout (see the layout sketch after this list).
    • Based on the index scores, compute the top-k indices and an index mask.
  • Top-k selection for qkv attention:

    • This is currently implemented inside dot-product attention, by adding the index mask to the regular attention mask. To be optimized.
  • Training only (no prefill / decode).

  • See changes in attention_mla.py and attention_op.py.
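
The layout difference is small but easy to get wrong, so here is an illustrative sketch of the two pairings under the same frequencies (helper names are assumptions; this is not the MaxText implementation):

```python
import jax.numpy as jnp


def rope_interleaved(x, cos, sin):
  # Interleaved layout: rotated pairs are adjacent features (x0, x1), (x2, x3), ...
  x1, x2 = x[..., 0::2], x[..., 1::2]
  out = jnp.stack([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
  return out.reshape(x.shape)


def rope_concatenated(x, cos, sin):
  # Concatenated layout: pairs are the split halves (x0, x_{d/2}), (x1, x_{d/2+1}), ...
  d = x.shape[-1] // 2
  x1, x2 = x[..., :d], x[..., d:]
  return jnp.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
```

Both variants rotate feature pairs by the same YaRN angles (cos/sin of shape [..., d/2]); only the pairing of features differs, which is why weights converted between the layouts must be permuted accordingly.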

2. Onboard deepseek3.2-671b config

  • deepseek3.2-671b.yml
  • DeepSeek V3.2 vs. V3 HF config diff: additional config for the indexer: "index_head_dim": 128, "index_n_heads": 64, "index_topk": 2048
  • Number of parameters: (1) As with V3, the HF safetensors of V3.2 contain an extra layer for MTP, which we omit. (2) Note that the indexer contributes extra parameters. (3) By counting, V3 has 671026419200 (671.03B) parameters and V3.2 has 671877944064 (671.88B), a difference of 851524864 (~0.85B) from the indexer (a rough breakdown is sketched below).
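
For intuition, a rough back-of-the-envelope count of the delta. The per-layer indexer shapes below are assumptions (based on the HF config diff above and the public DeepSeek V3 dimensions), not taken from this PR:

```python
# Assumed dims: hidden=7168, q_lora_rank=1536, layers=61 (DeepSeek V3 config).
hidden, q_lora_rank, layers = 7168, 1536, 61
index_n_heads, index_head_dim = 64, 128  # from the HF config diff above

per_layer = (
    q_lora_rank * index_n_heads * index_head_dim  # assumed indexer q projection
    + hidden * index_head_dim                     # assumed indexer k projection
    + hidden * index_n_heads                      # assumed per-head weight projection
)
print(per_layer * layers)  # 851509248, close to the 851524864 delta
# (the remaining ~15.6K would likely be small per-layer norm parameters)
```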

3. Unit test: ahead-of-time train compile for deepseek3.2-671b

4. Unit test: compare outputs against torch reference code for the Indexer and MLA

Reference

Future work

  • verify end-to-end training logits for deepseek3.2
  • more efficient implementation of DSA

Tests

Unit test against torch code (adapted from reference): indexer, MLA

```shell
python3 -m pytest -v --pyargs tests.unit.deepseek32_vs_reference_test -rP -s
```

Unit test for train compile

```shell
python3 -m pytest -v --pyargs tests.unit.train_compile_test -rP -s -k "test_deepseek32"
```

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

codecov bot commented Jan 13, 2026

Codecov Report

❌ Patch coverage is 53.84615% with 36 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/MaxText/layers/attention_mla.py | 55.26% | 31 Missing and 3 partials ⚠️ |
| src/MaxText/layers/attention_op.py | 0.00% | 1 Missing and 1 partial ⚠️ |


@RissyRan (Collaborator, Author) left a comment:

Thanks for the change! I took a look at the indexer part, and overall it looks good for functionality. There is also an indexer logit kernel for performance; I will take a look there.

I will take a look at the MLA part shortly.

@shuningjin changed the title from "[DO NO MERGE] Draft for sparse" to "DeepSeek3.2: Onboard sparse attention" on Jan 17, 2026
@shuningjin shuningjin marked this pull request as ready for review January 17, 2026 01:01
@RissyRan (Collaborator, Author) left a comment:

Thanks for the change! Great work! A few comments.

@RissyRan (Collaborator, Author) commented:

Also, don't forget to squash commits :)

@RissyRan (Collaborator, Author) left a comment:

LGTM! Please remove tests/unit/yarn_vs_reference_test.py if no changes there.

It seems I cannot approve it as I created the PR earlier. Could you make the approval for me?

@shuningjin (Collaborator) left a comment:

> It seems I cannot approve it as I created the PR earlier. Could you make the approval for me?

Approval on behalf of @RissyRan

@shuningjin (Collaborator) commented Jan 23, 2026

> LGTM! Please remove tests/unit/yarn_vs_reference_test.py if no changes there.

I am renaming tests/unit/mla_vs_reference_test.py to tests/unit/yarn_vs_reference_test.py, as the test actually tests the YaRN RoPE embedding rather than MLA.
