libdl  0.0.1
Simple yet powerful deep learning
dl::optim::GradientDescent Class Reference
Inheritance diagram for dl::optim::GradientDescent:
Collaboration diagram for dl::optim::GradientDescent:

Public Member Functions

 GradientDescent (std::map< std::string, dl::TensorRef > &parameters, float learnrate=0.001f)
 
virtual void step (dl::TensorPtr &loss) override
 

Detailed Description

Definition at line 11 of file gradientdescent.hpp.
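
GradientDescent performs plain (vanilla) gradient descent over a named set of parameters. Each call to step() first backpropagates from the given loss tensor and then updates every registered parameter in place as p = p + (-learnrate) * gradient (see the implementation of step() below).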

Constructor & Destructor Documentation

◆ GradientDescent()

dl::optim::GradientDescent::GradientDescent ( std::map< std::string, dl::TensorRef > &  parameters,
                                              float  learnrate = 0.001f )
inline explicit

Definition at line 17 of file gradientdescent.hpp.

explicit GradientDescent(std::map<std::string, dl::TensorRef>& parameters, float learnrate = 0.001f)
        : dl::Optimizer(), parameters(parameters), learnrate(learnrate) {}
See also dl::Optimizer, which defines an optimization strategy for a given set of Parameters (optimizer.hpp, line 11).
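
A minimal construction sketch. Here weight and bias stand for dl::TensorRef handles to the tensors that should be trained (how such handles are created is not covered on this page), setupOptimizer is an illustrative helper, and the include path is assumed from the file name shown above:

    #include <map>
    #include <string>

    #include "gradientdescent.hpp"  // adjust the include path to your setup

    // weight and bias are assumed dl::TensorRef handles to trainable tensors.
    void setupOptimizer(dl::TensorRef& weight, dl::TensorRef& bias) {
        // Register the trainable tensors under names of your choice.
        std::map<std::string, dl::TensorRef> parameters = {
            {"weight", weight},
            {"bias", bias},
        };

        // The second argument overrides the default learnrate of 0.001f.
        dl::optim::GradientDescent optimizer(parameters, 0.01f);
    }

The optimizer is then driven by calling step() with a loss tensor after each forward pass.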

Member Function Documentation

◆ step()

virtual void dl::optim::GradientDescent::step ( dl::TensorPtr &  loss )
inline override virtual

Implements dl::Optimizer.

Definition at line 20 of file gradientdescent.hpp.

{
    // Backpropagate from the loss to populate the parameters' gradients.
    loss->backward();
    for (auto&& [_, tensor] : parameters) {
        auto& gradient = tensor.get()->gradient();
        assert(gradient != nullptr);
        // Gradient-descent update: p = p + (-learnrate) * gradient.
        tensor.get() = tensor.get()->add(gradient->mul(dl::constant(-learnrate, gradient->device())));
    }
}
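
To make the update rule concrete, the following standalone sketch mirrors the arithmetic of the loop above using plain floats in place of libdl tensors; the parameter name and the numeric values are purely illustrative:

    #include <cassert>
    #include <cmath>
    #include <map>
    #include <string>

    int main() {
        const float learnrate = 0.001f;

        // Stand-ins for one parameter and its gradient after loss->backward().
        std::map<std::string, float> parameters = {{"weight", 0.5f}};
        std::map<std::string, float> gradients  = {{"weight", 2.0f}};

        // Same update as step(): p = p + (-learnrate) * gradient.
        for (auto&& [name, value] : parameters) {
            value += -learnrate * gradients.at(name);
        }

        assert(std::fabs(parameters["weight"] - 0.498f) < 1e-6f);
        return 0;
    }

Note that step() both backpropagates (loss->backward()) and applies the update, so a typical training iteration only needs to build the loss tensor and call step(loss).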

The documentation for this class was generated from the following file:
gradientdescent.hpp