This repository was archived by the owner on Nov 17, 2023. It is now read-only.
Mixed precision binary op backward (use in) for numpy #16791
Merged
reminisce merged 2 commits into apache:master on Nov 20, 2019
Conversation
Force-pushed 362891d to 064b401
reminisce approved these changes on Nov 14, 2019
Force-pushed 064b401 to 4e58f0f
Force-pushed 4e58f0f to 5139bd5
reminisce approved these changes on Nov 20, 2019
ptrendx pushed a commit to ptrendx/mxnet that referenced this pull request on Nov 20, 2019
* mixed precision binary op backward
* reduce unix cpu runtime
ptrendx added a commit that referenced this pull request on Nov 22, 2019
* Add unoptimized symbol to executor for sharing (#16798)
  * Add unoptimized symbol to executor for sharing
  * Copy the symbol in Reshape
  * Added test for multiple reshapes
* Mixed precision binary op backward (use in) for numpy (#16791)
  * mixed precision binary op backward
  * reduce unix cpu runtime
* USE_NVRTC -> ENABLE_CUDA_RTC to fix maven build. Add compile-guard to fusion. (#16838)
  * Rename USE_NVRTC -> ENABLE_CUDA_RTC to fix maven build. Compile-guard fusion framework.
  * Fix fusion-not-supported warning.
  * Fix compile guards
  * Fix cmake build so -DMXNET_ENABLE_CUDA_RTC=1 is passed to nvcc
  * Minimize side-effects of prev change
* Fix InferAttr/InferShapeAttr not calling inference for all nodes in a graph (#16836)
  * Fix the attribute inference omitting nodes
  * Add test
  * Cleaning
  * Fix lint
  * Fix TransposeShape
  * Fix WhileLoopType
* Changing a/b test for fusion to a/(b+1) to increase numerical stability
* Revert "Mixed precision binary op backward (use in) for numpy (#16791)"
  This reverts commit 8b58b78.
ptrendx pushed a commit to ptrendx/mxnet that referenced this pull request on Nov 25, 2019
* mixed precision binary op backward
* reduce unix cpu runtime
ptrendx added a commit that referenced this pull request on Nov 26, 2019
* refactor and reduce float types for some functions, also add bitwise_xor (#16827)
* Mixed precision binary op backward (use in) for numpy (#16791)
  * mixed precision binary op backward
  * reduce unix cpu runtime
* Add evaluation_loss to the estimator base class. (#16888)
  * Add evaluation_loss to the estimator base class.
  * Update the base estimator class to support the separate evaluation loss.
  * Add evaluation loss to the base estimator class.
  * Add unittest for evaluation loss in the test_evaluation function
  * Update estimator.py
  * Update estimator.py
Description
As title.
Done by casting the lhs or rhs values to a common type and falling back to the existing same-type implementations. Not implemented for the case where both inputs are integers, since backward is not meaningful there.
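The cast-and-fallback strategy described above can be sketched in plain NumPy. This is a hypothetical illustration, not MXNet's actual operator code: `mixed_precision_mul_backward` is an invented helper name, and elementwise multiply stands in for an arbitrary binary op. The inputs are promoted to a common dtype, the existing same-type backward rule is applied, and each gradient is cast back to its input's dtype; the integer-only case is rejected, mirroring the PR's restriction.

```python
import numpy as np

def mixed_precision_mul_backward(grad_out, lhs, rhs):
    """Backward of elementwise multiply when lhs/rhs dtypes may differ.

    Hypothetical sketch of the cast-then-fallback approach: promote both
    inputs to a common type, reuse the same-type gradient formulas, then
    cast each gradient back to its input's original dtype.
    """
    if np.issubdtype(lhs.dtype, np.integer) and np.issubdtype(rhs.dtype, np.integer):
        # Both inputs integral: backward is not meaningful (per the PR).
        raise TypeError("backward not supported when both inputs are integers")
    common = np.promote_types(lhs.dtype, rhs.dtype)
    l, r, g = lhs.astype(common), rhs.astype(common), grad_out.astype(common)
    # Existing same-type rule for multiply: d(l*r)/dl = r, d(l*r)/dr = l.
    grad_lhs = (g * r).astype(lhs.dtype)
    grad_rhs = (g * l).astype(rhs.dtype)
    return grad_lhs, grad_rhs

lhs = np.array([1.0, 2.0], dtype=np.float16)
rhs = np.array([3, 4], dtype=np.int32)
gl, gr = mixed_precision_mul_backward(np.ones(2, dtype=np.float32), lhs, rhs)
print(gl.dtype, gr.dtype)  # float16 int32
```

Casting back to the integer input's dtype is one of several possible conventions; the point of the sketch is only that no new mixed-type kernels are needed, since everything funnels through the existing same-type path.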
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
Comments
Support is limited for now, for the sake of d2l; benchmark results are yet to come.
More support for UseNone will come in the future.