Provide a mechanism for applications to invoke the single-pass downsampler. #22286
The [AMD FidelityFX single-pass downsampler] (SPD) is the fastest way to generate mipmap levels of a texture. Bevy currently has two separate ports of that algorithm to WGSL: one used for environment map generation and one used on the depth buffer for occlusion culling (though the latter isn't the best use of it). Absent is any mechanism to use the single-pass downsampler to generate mipmap levels of a color texture for typical use in rendering. This is a standard feature in game engines: for example, Unity has [`GenerateMips`] and Unreal has [`bAutoGenerateMips`].

This PR adds a mechanism by which applications can invoke SPD to generate mipmap levels for any `Image`. Using this mechanism is a two-step process. First, the application adds the `Handle<Image>` to a resource, `MipGenerationJobs`, and associates it with a *phase*, which is an arbitrary ID chosen by the application. Second, the application adds a `MipGenerationNode` for that phase to the render graph. During rendering, the `MipGenerationNode` invokes SPD to generate a full mipmap chain for all textures in that phase.

The reason mipmap generation jobs are associated with phases is that mipmap generation may need to occur at precise points in the application's rendering cycle. For example, consider the common case of a mipmapped portal texture: the mipmaps must be generated *after* the portal is rendered, but *before* the object in the main world displaying the portal texture is drawn. The phased approach taken in this PR allows complex dependencies like this to be expressed using the node graph feature that Bevy already possesses. (In the future, if render graphs are removed in favor of systems, this approach can naturally be reframed in terms of systems, so this patch contains no hazards in that regard.)

Note that this patch by itself doesn't automatically generate mipmaps for imported textures that lack them, the way [`bevy_mod_mipmap_generator`] does, in order to keep the patch relatively small and self-contained. However, it'd be straightforward to (a) extend `bevy_mod_mipmap_generator`, (b) write another plugin, or (c) add a new feature to Bevy itself, all built on top of this PR, to support automatic GPU mip generation for image assets that don't have them.

A new example, `dynamic_mip_generation`, has been added. This 2D example produces a texture at runtime on the CPU and invokes the new `MipGenerationNode` that this patch adds to generate mipmaps for that texture at runtime. The colors of the texture are randomly generated, and the example's UI allows the texture to be regenerated and its size adjusted; this demonstrates that the mipmap levels are indeed generated at runtime rather than pre-calculated at build time. Note that, although the example is 2D, the feature this patch adds can equally be used in 2D and 3D.



[AMD FidelityFX single-pass downsampler]: https://gpuopen.com/fidelityfx-spd/
[`GenerateMips`]: https://docs.unity3d.com/ScriptReference/Rendering.CommandBuffer.GenerateMips.html
[`bAutoGenerateMips`]: https://dev.epicgames.com/documentation/en-us/unreal-engine/API/Plugins/DisplayClusterConfiguration/FDisplayClusterC-_33/bAutoGenerateMips
[`bevy_mod_mipmap_generator`]: https://github.com/DGriffin91/bevy_mod_mipmap_generator
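As a rough sketch of the intended two-step usage (the `MipGenerationJobs` method name, the phase ID type, and the `PortalTexture` resource below are illustrative assumptions, not the PR's exact API):

```rust
use bevy::prelude::*;

// A phase is an arbitrary ID chosen by the application; this constant's
// type and value are illustrative.
const PORTAL_PHASE: u64 = 0;

// Hypothetical app resource holding the texture that a portal camera
// renders into.
#[derive(Resource)]
struct PortalTexture {
    image: Handle<Image>,
}

// Step 1: register the image with the `MipGenerationJobs` resource under
// the chosen phase. (`MipGenerationJobs` comes from this PR; the `add`
// method signature here is an assumption.)
fn queue_portal_mips(portal: Res<PortalTexture>, mut jobs: ResMut<MipGenerationJobs>) {
    jobs.add(PORTAL_PHASE, portal.image.clone());
}

// Step 2 (at setup time): add a `MipGenerationNode` for `PORTAL_PHASE` to
// the render graph, ordered after the node that renders the portal and
// before the main pass that samples the portal texture.
```

The ordering in step 2 is expressed with ordinary render graph edges, which is what lets the portal-style "render, then downsample, then sample" dependency work.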
atlv24 left a comment:
Nice work :)
```rust
// Otherwise, create a dummy texture and return a view to that.
let dummy_texture = render_device.create_texture(&TextureDescriptor {
```
Is this code path ever expected to run? Is it a fallback for when mips aren't generated yet?
It simplifies generating the bindings so we don't have to create a permutation for each mip level count an image might request.
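For context, the fallback might look roughly like this. The types are Bevy's re-exports of wgpu's in `bevy::render::render_resource`, but the specific label, size, format, and usage flags are illustrative assumptions rather than the PR's exact values:

```rust
// Bind a 1x1 placeholder for mip levels the image doesn't have, so the
// bind group layout stays uniform across all possible mip level counts.
// All descriptor field values here are illustrative.
let dummy_texture = render_device.create_texture(&TextureDescriptor {
    label: Some("downsample_dummy_texture"),
    size: Extent3d { width: 1, height: 1, depth_or_array_layers: 1 },
    mip_level_count: 1,
    sample_count: 1,
    dimension: TextureDimension::D2,
    format: TextureFormat::Rgba8Unorm,
    usage: TextureUsages::STORAGE_BINDING,
    view_formats: &[],
});
let dummy_view = dummy_texture.create_view(&TextureViewDescriptor::default());
```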
```rust
// use a text-pasting hack. The `downsample.wgsl` shader is eagerly
// specialized for each texture format by replacing `##TEXTURE_FORMAT##`
// with each possible format.
// When we have WESL, we should probably revisit this.
```
@tychedelia @stefnotch @mighdoll thoughts on this? (not blocking)
Great question, relevant for our discussion of a generics model.
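For reference, the text-pasting specialization under discussion amounts to something like the following sketch; the format list and the way the shader source is loaded are illustrative assumptions, not the PR's actual set:

```rust
// Eagerly produce one specialized copy of the downsample shader per
// supported storage texture format by substituting the placeholder.
// The format list here is illustrative.
const DOWNSAMPLE_WGSL: &str = include_str!("downsample.wgsl");

fn specialize_downsample_shader() -> Vec<(&'static str, String)> {
    ["rgba8unorm", "rgba16float", "r32float"]
        .into_iter()
        .map(|format| (format, DOWNSAMPLE_WGSL.replace("##TEXTURE_FORMAT##", format)))
        .collect()
}
```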
tychedelia left a comment:
Awesome documentation as always, tysm!

The AMD FidelityFX single-pass downsampler (SPD) is the fastest way to generate mipmap levels of a texture. Bevy currently has two separate ports of that algorithm to WGSL: one for use in the environment map generation and one for use on the depth buffer for the purposes of occlusion culling (though the latter isn't the best use of it). Absent is any mechanism to use the single-pass downsampler to generate mipmap levels of a color texture for typical use in rendering. This is a standard feature in game engines: for example, Unity has
GenerateMipsand Unreal hasbAutoGenerateMips.This PR adds a mechanism by which applications can invoke SPD to generate mipmap levels for any
Image. Using this mechanism is a two step process. First, the application adds theHandle<Image>to a resource,MipGenerationJobsand associates it with a phase, which is an arbitrary ID chosen by the application. Second, the application adds aMipGenerationNodefor that phase to the render graph. During rendering, theMipGenerationNodeinvokes SPD to generate a full mipmap chain for all textures in that phase.The reason why mipmap generation jobs are associated with phases is that the generation of mipmaps may need to occur at precise points in the application rendering cycle. For example, consider the common situation of a mipmapped portal texture. The mipmaps must be generated after the portal is rendered, but before the object in the main world displaying the portal texture is drawn. The phased approach taken in this PR allows complex dependencies like this to be expressed using the node graph feature that Bevy already possesses. (In the future, if render graphs are removed in favor of systems, this approach can naturally be reframed in terms of systems, so this patch contains no hazards in that regard.)
Note that this patch by itself doesn't automatically generate mipmaps for imported textures that don't have them the way that
bevy_mod_mipmap_generatordoes, in order to keep this patch relatively small and self-contained. However, it'd be straightforward to either (a) extendbevy_mod_mipmap_generator, (b) write another plugin, and/or (c) add a new feature to Bevy itself, all built on top of this PR, to support automatic GPU mip generation for image assets that don't have them.A new example,
dynamic_mip_generation, has been added. This is a 2D example that produces a texture at runtime on the CPU and invokes the newMipGenerationNodethat this patch adds to generate mipmaps for that texture at runtime. The colors of the texture are randomly generated, and UI for the example allows the texture to be regenerated and for the size to be adjusted; this proves that the mipmap levels for the texture are indeed generated at runtime and not pre-calculated at build time. Note that, although the example is 2D, the feature that this patch adds can be equally used in 2D and 3D.