Labels: bug
Description
Describe the bug
PRXPipeline.from_pretrained() fails when loading the T5GemmaEncoder text encoder component with transformers 5.1.0:
AttributeError: 'T5GemmaConfig' object has no attribute 'attention_dropout'
Root cause
In transformers 5.x, T5GemmaConfig was refactored into a composite config holding encoder/decoder sub-configs (each a T5GemmaModuleConfig). T5GemmaEncoder.__init__ expects flat attributes such as config.attention_dropout, which exist only on T5GemmaModuleConfig, not on the composite T5GemmaConfig. When the pipeline loads the text encoder via from_pretrained, it passes the composite config instead of the encoder sub-config (config.encoder), so the flat attribute lookup fails.
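The failure mode can be illustrated with a minimal stand-in sketch. These are plain Python classes, not the real transformers classes; the names only mirror the ones from the traceback to show why a module expecting a flat config breaks when handed a composite one, and why passing the encoder sub-config works.

```python
class ModuleConfig:
    """Stand-in for T5GemmaModuleConfig: flat attributes."""
    def __init__(self, attention_dropout=0.0):
        self.attention_dropout = attention_dropout


class CompositeConfig:
    """Stand-in for the transformers 5.x T5GemmaConfig: sub-configs only."""
    def __init__(self):
        self.encoder = ModuleConfig()
        self.decoder = ModuleConfig()


class Encoder:
    """Stand-in for T5GemmaEncoder: reads flat attributes off its config."""
    def __init__(self, config):
        # Raises AttributeError if given a composite config, since
        # `attention_dropout` lives only on the sub-configs.
        self.dropout = config.attention_dropout


cfg = CompositeConfig()
try:
    Encoder(cfg)            # fails, as in the bug
except AttributeError as e:
    print(e)

Encoder(cfg.encoder)        # passing the encoder sub-config succeeds
```

The fix direction this suggests: the pipeline (or the model class) should unwrap the composite config and hand T5GemmaEncoder the encoder sub-config rather than the top-level object.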
Reproduction
from diffusers import PRXPipeline
pipe = PRXPipeline.from_pretrained("Photoroom/prx-1024-t2i-beta")
# AttributeError: 'T5GemmaConfig' object has no attribute 'attention_dropout'
Logs
System Info
Environment
transformers 5.1.0
Who can help?
No response