This repository was archived by the owner on Nov 17, 2023. It is now read-only.

[R] Does mxnet always minimize the custom loss function? #4166


Description

@yiyusheng

Package used (Python/R/Scala/Julia): R
Before writing my own custom loss function with mx.metric.custom, I checked https://github.com/dmlc/mxnet/blob/master/R-package/R/metric.R.
I found two functions there: mx.metric.accuracy, used as a metric for classification, and mx.metric.rmse, used as a metric for regression.
As far as I know, the model should minimize a loss function such as MSE, but minimizing accuracy would indicate a bad model. When I tested mx.metric.accuracy, the accuracy increased with each epoch, which suggests mxnet knows how to treat that metric.
This confuses me when writing my own loss function: I do not know when mxnet will maximize the function and when it will minimize it. Could you explain this? Thank you.
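For reference, a minimal sketch of the pattern used in the metric.R file linked above: mx.metric.custom takes a name and an feval(label, pred) function, which is how the built-in mx.metric.rmse is defined there. The helper name my.rmse and the use of eval.metric in mx.model.FeedForward.create are my own illustrative assumptions, not from the original post.

```r
library(mxnet)

# A custom RMSE-style metric, following the feval(label, pred)
# pattern that mx.metric.rmse uses in metric.R. feval receives
# the true labels and the network's predictions for a batch and
# returns a single numeric value to report each epoch.
my.rmse <- mx.metric.custom("my-rmse", function(label, pred) {
  sqrt(mean((as.array(label) - as.array(pred))^2))
})

# Assumed usage: pass it as the eval.metric argument, e.g.
#   mx.model.FeedForward.create(..., eval.metric = my.rmse)
```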
