Commit graph

21 commits

Author  SHA1        Message  Date
Yin Li  1b1e0e82fa  Fix (possibly) multiple DistributedDataParallel reduce interference  2020-01-10 14:39:16 -05:00
Yin Li  15384dc9bd  Add optional adversary model and make validation optional  2020-01-09 20:24:46 -05:00
Yin Li  9cf97b3ac1  Fix seeding bug introduced in the completely wrong commit f64b1e4  2020-01-06 20:20:05 -05:00
Yin Li  848dc87169  Change __dict__ to getattr  2019-12-23 16:11:43 -05:00
Yin Li  77710bc8a3  Cache data during the first epoch  2019-12-18 17:51:11 -05:00
Yin Li  01b0c8b514  Add data division, good with data caching  2019-12-18 17:13:40 -05:00
Yin Li  843fe09a92  Add data caching, and new pad and crop features  2019-12-17 12:00:13 -05:00
Yin Li  d03bcb59a1  Fix bug from 0533150  2019-12-12 19:26:57 -05:00
Yin Li  0533150194  Save past best models by not overwriting them  2019-12-12 18:05:16 -05:00
Yin Li  341bdbff84  Fix issue that pytorch 1.1 does not flush tensorboard  2019-12-12 15:25:45 -05:00
Yin Li  bd3798222a  Add weight decay  2019-12-12 12:04:39 -05:00
Yin Li  7b6ff73be1  Move UNet with ResBlock to VNet and Revert UNet to the previous simple version  2019-12-09 21:53:27 -05:00
Yin Li  0764a1006e  Remove unnecessary arguments --in-channels and --out-channels  2019-12-09 10:19:21 -05:00
Yin Li  f64b1e42e9  Add synchronized random seed to training  2019-12-08 21:27:44 -05:00
Yin Li  11c9caa1e2  Fix unstable training by limiting pytorch version to 1.1  2019-12-08 21:02:08 -05:00
Yin Li  437126e296  Fix DistributedDataParallel model save and load during training, leave testing for later  2019-12-08 21:00:51 -05:00
Yin Li  f2e9af6d5f  Revert scheduler to ReduceLROnPlateau  2019-12-08 20:58:46 -05:00
Yin Li  b253bb687b  Change ReduceLROnPlateau to CyclicLR  2019-12-03 18:06:33 -05:00
Yin Li  0211eed0ec  Add testing  2019-12-01 21:40:35 -05:00
Yin Li  bcf95275f3  Fix global_step in tensorboard summary to start from 1  2019-11-30 22:15:10 -05:00
Yin Li  88bfd11594  Add training  2019-11-30 16:31:10 -05:00
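Commit 0533150194 ("Save past best models by not overwriting them") describes a common checkpointing pattern: write the latest checkpoint to a fixed name, and copy each new best to a uniquely named file so earlier bests survive. The function below is a minimal sketch of that idea, not the repository's actual code; the names `save_checkpoint`, `ckpt_dir`, and the file naming scheme are assumptions, and a real PyTorch training loop would pass `torch.save`-serialized state rather than raw bytes.

```python
import os
import shutil


def save_checkpoint(state_bytes, epoch, is_best, ckpt_dir="checkpoints"):
    """Sketch of non-overwriting best-model checkpointing.

    Writes the latest checkpoint to a fixed filename, and when the
    current model is the best so far, preserves it under an
    epoch-stamped name instead of clobbering a single 'best' file.
    """
    os.makedirs(ckpt_dir, exist_ok=True)

    # Always refresh the rolling "latest" checkpoint.
    latest = os.path.join(ckpt_dir, "latest.pt")
    with open(latest, "wb") as f:
        f.write(state_bytes)

    # Keep every past best by giving each one a unique filename.
    if is_best:
        best = os.path.join(ckpt_dir, f"best_epoch_{epoch}.pt")
        shutil.copyfile(latest, best)
```

With this scheme, two successive bests at epochs 3 and 7 leave both `best_epoch_3.pt` and `best_epoch_7.pt` on disk alongside `latest.pt`.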