Description
Currently, we have two implementations of RNN layers on the CPU backend: the fused RNN operator based on DNNL, and the native implementation.
Both of them can be invoked from mx.sym.RNN, mx.rnn.FusedRNNCell, or mx.gluon.rnn.LSTM/GRU/RNN. The DNNL fusion provides more efficient forward and backward passes, while the native one serves as a fallback for devices or environments that cannot use the DNNL library.

Recently, we found some problems that lead to incorrect gradient calculation in the native implementation. This issue tracks them; they will be fixed ASAP.
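For reference, here is a minimal sketch (assuming MXNet 1.x with the Gluon API) of exercising the forward and backward passes of an LSTM layer on CPU. Which of the two implementations is dispatched depends on whether MXNet was built with DNNL support; the gradient bug described above affects the native path.

```python
import mxnet as mx

# Build an LSTM layer; mx.gluon.rnn.GRU / mx.gluon.rnn.RNN work the same way.
lstm = mx.gluon.rnn.LSTM(hidden_size=64, num_layers=1)
lstm.initialize(ctx=mx.cpu())

# Default layout is 'TNC': (sequence_length, batch_size, input_size).
x = mx.nd.random.uniform(shape=(10, 4, 32), ctx=mx.cpu())
x.attach_grad()

with mx.autograd.record():
    y = lstm(x)   # forward pass
y.backward()      # backward pass computes the gradients in question

print(x.grad.shape)  # (10, 4, 32)
```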