Conversation

@MaxMotovilov commented:

I have sorted the wheat from the chaff - so here are the substantial fixes we have in our repo.

MaxMotovilov and others added 4 commits on July 2, 2018 at 13:55:

- Fix: restore compatibility with single-GPU configs
- Remove extraneous code (model is reassigned to CUDA device in __init__)
- Restore GPU device check in the command shell
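As context for the second commit: if the trainer's `__init__` already moves the model to the configured CUDA device, repeating that reassignment later in the training code is dead code. A minimal sketch of that pattern, assuming a simplified `Trainer` (the repository's actual class differs):

```python
import torch

class Trainer:
    def __init__(self, model: torch.nn.Module, cuda_device: int = -1) -> None:
        # The model is (re)assigned to the CUDA device once, here in __init__;
        # any later `model.cuda(...)` call elsewhere is therefore extraneous.
        if cuda_device >= 0:
            model = model.cuda(cuda_device)
        self.model = model
        self.cuda_device = cuda_device
```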
The review thread is anchored on this hunk:

```python
grad_clipping = params.pop_float("grad_clipping", None)
lr_scheduler_params = params.pop("learning_rate_scheduler", None)

if cuda_device >= 0:
```
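The `if cuda_device >= 0:` branch above is where single-GPU handling begins (the page truncates its body). As a hedged illustration of the kind of validation the "Restore GPU device check" commit refers to, and not the repository's actual code, a check might fail fast before training starts:

```python
import torch

def check_cuda_device(cuda_device: int) -> None:
    # Illustrative only: reject a GPU index the local machine cannot serve.
    if cuda_device < 0:
        return  # CPU requested; nothing to check
    if not torch.cuda.is_available():
        raise RuntimeError(f"cuda_device={cuda_device} requested but CUDA is not available")
    if cuda_device >= torch.cuda.device_count():
        raise RuntimeError(
            f"cuda_device={cuda_device} is out of range; "
            f"only {torch.cuda.device_count()} device(s) visible"
        )
```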
@murphp15 (Owner) commented on Jul 2, 2018:

If the model is always `None` now, then maybe it should be removed as a parameter from the `Trainer` constructor?
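A hedged sketch of the suggested cleanup, assuming a `model` argument that callers now always pass as `None` (the names and signature are illustrative, not the repository's actual API):

```python
class Trainer:
    # Before (per the review, `model` was reportedly always None here):
    #     def __init__(self, model, optimizer, cuda_device=-1): ...
    #
    # After: drop the dead parameter so the signature matches real usage.
    def __init__(self, optimizer, cuda_device: int = -1) -> None:
        self.optimizer = optimizer
        self.cuda_device = cuda_device
```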

@MaxMotovilov (Author) replied:

That wasn't in my changes though, right? Pretty sure upstream modified trainer.py quite a bit, so you may want to diff against their master first.
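For reference, a typical way to do that diff (the remote name `upstream` and the URL placeholder are assumptions; substitute the actual upstream repository):

```sh
git remote add upstream https://github.com/<upstream-owner>/<repo>.git
git fetch upstream
git diff upstream/master -- trainer.py
```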
