
PRXPipeline.from_pretrained() broken with transformers 5.1.0 #13142

@DavidBert

Description

Describe the bug

PRXPipeline.from_pretrained() fails when loading the T5GemmaEncoder text encoder component with transformers 5.1.0:

AttributeError: 'T5GemmaConfig' object has no attribute 'attention_dropout'

Root cause

In transformers 5.x, T5GemmaConfig was refactored into a composite config whose encoder/decoder sub-configs are T5GemmaModuleConfig instances. T5GemmaEncoder.__init__ expects flat attributes such as config.attention_dropout, which exist only on T5GemmaModuleConfig, not on the composite T5GemmaConfig. When the pipeline loads the text encoder via from_pretrained, it passes the composite config instead of the encoder sub-config, so the attribute lookup fails.
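
A minimal sketch of the mismatch (attribute and sub-config names follow the description above and may differ slightly in a given transformers 5.x release):

from transformers import T5GemmaConfig

config = T5GemmaConfig()  # composite config in transformers 5.x

# The flat attribute is gone from the composite config:
# config.attention_dropout  ->  AttributeError
# It lives on the encoder sub-config (a T5GemmaModuleConfig) instead:
print(config.encoder.attention_dropout)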

Reproduction

from diffusers import PRXPipeline

pipe = PRXPipeline.from_pretrained("Photoroom/prx-1024-t2i-beta")
# AttributeError: 'T5GemmaConfig' object has no attribute 'attention_dropout'
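
A possible workaround sketch, consistent with the root cause above (untested; the T5GemmaEncoder import path and the "text_encoder" subfolder name are assumptions):

from transformers import T5GemmaConfig
from transformers.models.t5gemma.modeling_t5gemma import T5GemmaEncoder
from diffusers import PRXPipeline

repo = "Photoroom/prx-1024-t2i-beta"

# Load the composite config, then build the text encoder from its encoder
# sub-config so T5GemmaEncoder sees the flat attributes it expects.
config = T5GemmaConfig.from_pretrained(repo, subfolder="text_encoder")
text_encoder = T5GemmaEncoder.from_pretrained(
    repo, subfolder="text_encoder", config=config.encoder
)

# Hand the pre-built component to the pipeline so it skips its own loading path.
pipe = PRXPipeline.from_pretrained(repo, text_encoder=text_encoder)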

Logs

System Info

Environment
transformers 5.1.0

Who can help?

No response
