
A bug when using hooks and MirroredStrategy in tf.estimator.Estimator


When I was using MirroredStrategy with my tf.estimator.Estimator:

distribution = tf.contrib.distribute.MirroredStrategy(
    ["/device:GPU:0", "/device:GPU:1"])
config = tf.estimator.RunConfig(train_distribute=distribution,
                                eval_distribute=distribution)
estimator = tf.estimator.Estimator(
    model_fn=build_model_fn_optimizer(), config=config)
estimator.train(input_fn=input_fn, steps=10)
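
The snippet above refers to input_fn and build_model_fn_optimizer without showing them. As a point of reference, here is a minimal sketch of an input_fn that would work with MirroredStrategy (which in TF 1.11 expects input_fn to return a tf.data.Dataset); the feature shape, dataset size, and batch size are assumptions for illustration, not the author's actual data:

import numpy as np
import tensorflow as tf

def input_fn():
    # Toy in-memory dataset; the sizes here are arbitrary.
    features = np.random.rand(100, 4).astype(np.float32)
    labels = np.random.rand(100, 1).astype(np.float32)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    # Distribution strategies in TF 1.x require input_fn to return a Dataset.
    return dataset.repeat().batch(10)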

and added a logging hook for training:

# LoggingTensorHook requires every_n_iter or every_n_secs; 100 is arbitrary.
logging_hook = tf.train.LoggingTensorHook({'logits': logits}, every_n_iter=100)
return tf.estimator.EstimatorSpec(mode,
                                  loss=loss_fn(),
                                  train_op=train_op,
                                  training_hooks=[logging_hook])
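
Putting the two snippets together, the hook is created inside the model_fn returned by build_model_fn_optimizer. The following is a minimal sketch of such a model_fn; the single dense layer, mean-squared-error loss, and plain gradient-descent optimizer are assumptions for illustration, not the author's actual model:

import tensorflow as tf

def build_model_fn_optimizer():
    # Illustrative optimizer; the author's real choice is not shown.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

    def model_fn(features, labels, mode):
        # Toy model: one dense layer producing a single logit per example.
        logits = tf.layers.dense(features, 1)

        def loss_fn():
            return tf.losses.mean_squared_error(labels, logits)

        train_op = optimizer.minimize(
            loss_fn(), global_step=tf.train.get_or_create_global_step())

        # The training hook that triggers the unwrap code path shown below.
        logging_hook = tf.train.LoggingTensorHook({'logits': logits},
                                                  every_n_iter=100)
        return tf.estimator.EstimatorSpec(mode,
                                          loss=loss_fn(),
                                          train_op=train_op,
                                          training_hooks=[logging_hook])

    return model_fn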

TensorFlow then reports the following error:

File "/usr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 356, in train loss = self._train_model(input_fn, hooks, saving_listeners) File "/usr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1179, in _train_model return self._train_model_distributed(input_fn, hooks, saving_listeners) File "/usr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1309, in _train_model_distributed grouped_estimator_spec.training_hooks) File "/usr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1305, in get_hooks_from_the_first_device for per_device_hook in per_device_hooks File "/usr/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1305, in <listcomp> for per_device_hook in per_device_hooks AttributeError: 'Estimator' object has no attribute '_distribution'

After finding no answers on Google, I had to look into the code of 'estimator.py' in TensorFlow. Fortunately, the defect is obvious:

scaffold = _combine_distributed_scaffold(
    grouped_estimator_spec.scaffold, self._train_distribution)

# TODO(yuefengz): add a test for unwrapping per_device_hooks.
def get_hooks_from_the_first_device(per_device_hooks):
  return [
      self._distribution.unwrap(per_device_hook)[0]
      for per_device_hook in per_device_hooks
  ]

training_hooks = get_hooks_from_the_first_device(
    grouped_estimator_spec.training_hooks)

The Estimator class does not have any private attribute named '_distribution'; it only has '_train_distribution' and '_eval_distribution'. So the fix is simply to change 'self._distribution.unwrap(per_device_hook)[0]' to 'self._train_distribution.unwrap(per_device_hook)[0]'.
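
With that one-line change applied, the helper would read:

def get_hooks_from_the_first_device(per_device_hooks):
  return [
      # Use the Estimator's existing _train_distribution attribute
      # instead of the nonexistent _distribution.
      self._train_distribution.unwrap(per_device_hook)[0]
      for per_device_hook in per_device_hooks
  ]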

I have submitted a pull request to TensorFlow to fix this bug in the 1.11 branch.
