Bert #158
Conversation
This reverts commit ae86109.
```diff
@@ -83,9 +83,6 @@ def evaluate(self, trainer):
         total_masked = num_masked
         #torch.cuda.synchronize()
         dist_pytorch.barrier(config.vendor)
-        if config.vendor == 'kunlunxin':
-            import torch_xmlir.core.xpu_model as xm
-            xm.mark_step()
```
Graph mode previously needed this xm.mark_step; we now run in eager mode, so it is no longer required.
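For context, a minimal sketch of the difference, assuming torch_xmlir follows the usual lazy-tensor convention (as in PyTorch/XLA) where mark_step() compiles and executes the recorded graph; the eval_step helper here is illustrative, not from this PR:

```python
import torch

def eval_step(model, batch, lazy_mode=False):
    # Eager mode: each op executes immediately, so there is nothing to flush.
    with torch.no_grad():
        loss = model(batch)
    if lazy_mode:
        # Old graph mode: ops were only recorded, so the pending graph
        # had to be flushed explicitly once per batch.
        import torch_xmlir.core.xpu_model as xm
        xm.mark_step()
    return loss
```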
```python
    from apex.parallel import DistributedDataParallel as APEX_DDP
    from apex.parallel.distributed import flat_dist_call
except ImportError:
    print("import apex error")
```
The kunlunxin machines do not have apex, so the import raises an error; wrapped it in a try/except.
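Spelled out, the guard looks like the sketch below; the HAVE_APEX flag is an illustrative addition for code that needs to branch on availability, not part of the diff:

```python
try:
    from apex.parallel import DistributedDataParallel as APEX_DDP
    from apex.parallel.distributed import flat_dist_call
    HAVE_APEX = True
except ImportError:
    # kunlunxin machines ship without apex; degrade instead of crashing.
    print("import apex error")
    HAVE_APEX = False
```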
```python
from torch.cuda.amp import GradScaler
from torch.nn.parallel import DistributedDataParallel as NativeDDP
from torch.optim import Optimizer

import utils
import config
#from converter import convert_model
from .distributed_fused_lamb import _pipeline_block_reductions_patched, _pipeline_step_patched
```
Same as above.
```python
from torch.optim import Optimizer
from torch_xmlir.optimizer import Lamb
from torch_xmlir.optimizer import FusedLAMB
```
Replaced with the fused optimizer.
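A minimal sketch of the swap, assuming torch_xmlir's FusedLAMB accepts the same parameter groups and keyword arguments as the plain Lamb it replaces (the lr keyword is an assumption, not confirmed against the torch_xmlir API):

```python
from torch_xmlir.optimizer import FusedLAMB  # fused-kernel LAMB variant

# Same parameter groups as before; only the optimizer class changes.
# lr keyword assumed to mirror the plain Lamb constructor.
optimizer = FusedLAMB(optimizer_grouped_parameters, lr=config.learning_rate)
```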
Please provide a screenshot of a successful run on a kunlunxin machine.
```python
#                            e5m2_allgather=config.dwu_e5m2_allgather)
#optimizer.set_global_scale(float(os.getenv("INIT_LOSS_SCALE", 2 ** 20)))
else:
    optimizer = Lamb(optimizer_grouped_parameters,
```
Removed the leftover code.
```python
use_ddp = dist.is_initialized()
if use_ddp and config.use_xpu:
    from torch_xmlir.distributed import DistributedDataParallel as DDP
    model = DDP(model)
```
Replace this with PyTorch's native DDP rather than the custom DDP.
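If the suggestion is taken, the wrapping could reuse the NativeDDP already imported in this file; a sketch under that assumption (device_ids is left unset on the assumption of one XPU bound per process):

```python
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as NativeDDP

use_ddp = dist.is_initialized()
if use_ddp and config.use_xpu:
    # Native DDP hooks into autograd for gradient all-reduce; no vendor
    # wrapper is needed as long as the process group is initialized.
    model = NativeDDP(model)
```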
```python
            optimizer,
            delay_overflow_check=self.config.
            allreduce_post_accumulation) as scaled_loss:
        scaled_loss.backward()
```
Removed the redundant code.
```python
update_step = step % config.gradient_accumulation_steps == 0
if update_step:
    update_model_params(loss, optimizer, grad_scaler)
else:
    xm.mark_step()
```
Eager mode does not need xm.mark_step.
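After the removal, the accumulation branch reduces to the sketch below; update_model_params and the config fields come from the diff itself:

```python
update_step = step % config.gradient_accumulation_steps == 0
if update_step:
    update_model_params(loss, optimizer, grad_scaler)
# No else branch: in eager mode gradients simply keep accumulating
# across micro-batches, with no xm.mark_step() flush required.
```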
```python
    param.grad = None
else:
    xm.optimizer_step(optimizer, barrier=True)
```
Removed the leftover code.
Log: root@p-kunlunxin-r480-005:/data/dufeilei/dev/code/FlagPerf/training/benchmarks/bert/pytorch/log/train_1x8.log
Dataset: root@p-kunlunxin-r480-005:/data/datasets_ckpt/bert/train