This repository was archived by the owner on Nov 17, 2023. It is now read-only.
Package used (Python/R/Scala/Julia): R
Before writing my own custom loss function with mx.metric.custom, I checked https://github.com/dmlc/mxnet/blob/master/R-package/R/metric.R.
I found two functions there: mx.metric.accuracy, used as a metric for classification, and mx.metric.rmse, used as a metric for regression.
As far as I know, a model should minimize a loss function such as MSE. But minimizing accuracy would indicate a bad model. When I test mx.metric.accuracy, accuracy increases with each epoch, which suggests mxnet knows how to handle that metric correctly.
This confuses me when writing my own loss function: I do not know when mxnet will maximize the function and when it will minimize it. Could you explain this? Thank you.
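For context, this is the kind of custom metric I mean. The sketch below is my own untested illustration, not code from metric.R: the name mx.metric.rmsle is hypothetical, and I am assuming mx.metric.custom(name, feval) calls feval(label, pred) and reports the returned numeric value, following the pattern of mx.metric.rmse:

```r
library(mxnet)

# Hypothetical custom metric: root mean squared logarithmic error.
# Like RMSE, lower values should mean a better model, and feval must
# return a single numeric value computed from label and pred.
mx.metric.rmsle <- mx.metric.custom("rmsle", function(label, pred) {
  sqrt(mean((log1p(pred) - log1p(label))^2))
})

# Presumably passed to training via eval.metric, e.g.:
# model <- mx.model.FeedForward.create(..., eval.metric = mx.metric.rmsle)
```

My question is whether a function written this way is treated as something to minimize (like rmse) or to maximize (like accuracy).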