:py:mod:`kosmos.ml.config.factories.optimizer`
===============================================

.. py:module:: kosmos.ml.config.factories.optimizer

Module Attributes
-----------------

.. py:type:: ParamsT
   :canonical: Iterable[torch.Tensor] | Iterable[dict[str, Any]] | Iterable[tuple[str, torch.Tensor]]

Classes
-------

.. py:class:: OptimizerConfig

   Bases: :py:class:`abc.ABC`

   Optimizer configuration.

   |

   .. rubric:: Methods

   .. py:method:: get_instance(params: ParamsT) -> torch.optim.Optimizer

      Get the optimizer instance.

      :param params: Parameters to optimize.
      :type params: ParamsT
      :returns: Optimizer instance.
      :rtype: Optimizer

----

.. py:class:: SGDOptimizerConfig(lr: float = 0.001, momentum: float = 0.0, weight_decay: float = 0.0, *, nesterov: bool = False)

   Bases: :py:class:`OptimizerConfig`

   Stochastic gradient descent (SGD) optimizer configuration.

   Initialize the SGD optimizer configuration.

   :param lr: Learning rate. Defaults to 1e-3.
   :type lr: float
   :param momentum: Momentum factor. Defaults to 0.0.
   :type momentum: float
   :param weight_decay: Weight decay. Defaults to 0.0.
   :type weight_decay: float
   :param nesterov: Whether to use Nesterov momentum. Only applicable when momentum is non-zero. Defaults to False.
   :type nesterov: bool

   |

   .. rubric:: Methods

   .. py:method:: get_instance(params: ParamsT) -> torch.optim.SGD

      Get the SGD optimizer instance.

      :param params: Parameters to optimize.
      :type params: ParamsT
      :returns: SGD optimizer instance.
      :rtype: SGD

----

.. py:class:: AdamOptimizerConfig(lr: float = 0.001, weight_decay: float = 0.0)

   Bases: :py:class:`OptimizerConfig`

   Adam optimizer configuration.

   Initialize the Adam optimizer configuration.

   :param lr: Learning rate. Defaults to 1e-3.
   :type lr: float
   :param weight_decay: Weight decay. Defaults to 0.0.
   :type weight_decay: float

   |

   .. rubric:: Methods

   .. py:method:: get_instance(params: ParamsT) -> torch.optim.Adam

      Get the Adam optimizer instance.

      :param params: Parameters to optimize.
      :type params: ParamsT
      :returns: Adam optimizer instance.
      :rtype: Adam
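
Example
-------

The sketch below shows one way these configurations could be used; it assumes the classes are
importable directly from this module and that the parameters of an ordinary
:py:class:`torch.nn.Module` are passed to :py:meth:`get_instance`. The ``torch.nn.Linear``
model is only a placeholder.

.. code-block:: python

   import torch

   from kosmos.ml.config.factories.optimizer import (
       AdamOptimizerConfig,
       SGDOptimizerConfig,
   )

   # Placeholder model whose parameters will be optimized.
   model = torch.nn.Linear(16, 4)

   # Build an SGD optimizer from its configuration; get_instance() binds the
   # model parameters (an Iterable[torch.Tensor], i.e. a valid ParamsT).
   sgd_config = SGDOptimizerConfig(lr=0.01, momentum=0.9, nesterov=True)
   optimizer = sgd_config.get_instance(model.parameters())

   # Swapping optimizers only requires swapping the config object.
   adam_config = AdamOptimizerConfig(lr=1e-3, weight_decay=1e-4)
   optimizer = adam_config.get_instance(model.parameters())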