Writing your own loss function/module for pytorch

Yes, I am switching to PyTorch, and I am so far very happy with it.

Recently, I have been working on a multilabel classification problem where the evaluation metric is the macro F1 score. Ideally, we would want the loss function to be aligned with the evaluation metric, instead of using the standard BCE loss.
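For context, the standard BCE baseline I was replacing looks roughly like the sketch below (the shapes and tensors are made up purely for illustration):

import torch
import torch.nn as nn

# Hypothetical multilabel batch: 4 samples, 10 labels
logits = torch.randn(4, 10)                     # raw model outputs
targets = torch.randint(0, 2, (4, 10)).float()  # 0/1 label matrix

bce = nn.BCEWithLogitsLoss()  # applies sigmoid and BCE in one call
loss = bce(logits, targets)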

Initially, I was using the following function:

import torch.nn as nn

def f1_loss(output, target, epsilon=1e-7):
    # Soft F1: use predicted probabilities instead of hard 0/1 predictions
    probas = nn.Sigmoid()(output)
    # Soft count of true positives, per sample
    TP = (probas * target).sum(dim=1)
    precision = TP / (probas.sum(dim=1) + epsilon)
    recall = TP / (target.sum(dim=1) + epsilon)
    f1 = 2 * precision * recall / (precision + recall + epsilon)
    f1 = f1.clamp(min=epsilon, max=1 - epsilon)
    return 1 - f1.mean()

It is perfectly usable as a loss function in a typical training loop:

for batch_idx, (inp, target) in enumerate(dataloader):
    output = model(inp)
    optimizer.zero_grad()
    loss = f1_loss(output, target)
    loss.backward()
    optimizer.step()

In my experiments, however, even though the output and target tensors were on the GPU, the loss calculation seemed to involve moving data to the CPU, which caused a noticeable slowdown.
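One way to investigate this (a rough sketch; it assumes a CUDA device and uses made-up tensors standing in for one batch from the loop above) is to check where the result lives and to time the loss call with explicit synchronization:

import time
import torch

# Hypothetical GPU tensors standing in for one batch
output = torch.randn(64, 20, device='cuda')
target = torch.randint(0, 2, (64, 20), device='cuda').float()

torch.cuda.synchronize()  # flush pending GPU work before timing
start = time.time()
loss = f1_loss(output, target)
torch.cuda.synchronize()  # wait for the loss computation to finish
print(loss.device)                                  # where the result actually lives
print('loss time: %.4f s' % (time.time() - start))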

So, after digging a bit through the PyTorch source code for its built-in loss functions, I replaced the function above with a custom module:

class F1_Loss(nn.Module):
    def __init__(self, epsilon=1e-7):
        super(F1_Loss, self).__init__()
        self.epsilon = epsilon

    def forward(self, output, target):
        probas = nn.Sigmoid()(output)
        TP = (probas * target).sum(dim=1)
        precision = TP / (probas.sum(dim=1) + self.epsilon)
        recall = TP / (target.sum(dim=1) + self.epsilon)
        f1 = 2 * precision * recall / (precision + recall + self.epsilon)
        f1 = f1.clamp(min=self.epsilon, max=1 - self.epsilon)
        return 1 - f1.mean()

This simply moves the original f1_loss computation into the forward pass of a small nn.Module. As a result, I can move the module to the GPU, and the whole calculation happens on the GPU.

f1_loss = F1_Loss().cuda()

for batch_idx, (inp, target) in enumerate(dataloader):
    output = model(inp)
    optimizer.zero_grad()
    loss = f1_loss(output, target)
    loss.backward()
    optimizer.step()
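As a quick sanity check of the module version (a sketch; it assumes a CUDA device, and the tensor shapes are made up), one can confirm that it produces a differentiable scalar entirely on the GPU:

import torch

torch.manual_seed(0)
logits = torch.randn(8, 20, device='cuda', requires_grad=True)  # hypothetical logits
labels = torch.randint(0, 2, (8, 20), device='cuda').float()    # hypothetical 0/1 labels

loss = F1_Loss().cuda()(logits, labels)
print(loss.item())        # scalar strictly between 0 and 1; lower means higher soft F1
loss.backward()           # gradients flow back to the logits
print(logits.grad.shape)  # torch.Size([8, 20])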
