SNN2.src.model.reinforcement.RL_actorCritic module
- class SNN2.src.model.reinforcement.RL_actorCritic.RL_AC_manager(*args, **kwargs)
  Bases: RLModelHandler
- aggregate(agg, x)
- calculate_confusion_matrix(labels: Tensor, exp_l: Tensor | None = None) → Dict[str, int]
- calculate_reward(labels: Tensor, current_params: Tensor | None = None, previous_params: Tensor | None = None) → Tensor
- discounted_sum(x: Tensor) → Tensor
- evaluate_performances(*args, **kwargs) → None
- execute_train(*args, **kwargs) → None
- get_margin_values(*args, **kwargs) → Tensor
- get_probabilities_values(*args, **kwargs) → Tensor
- register(stat: str, value: Any, step: int | None = None) → None
- reset() → None
- step(observation: Tensor, labels: Tensor, game_over: bool) → int | None
- train(*args, **kwargs) → None
- update_memory() → None
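The discounted_sum method above corresponds to the standard discounted-return computation used by actor-critic algorithms. The module's actual implementation is not shown here; the following is a minimal illustrative sketch on plain Python lists, with a hypothetical discount factor gamma (not part of the documented signature):

```python
def discounted_sum(x, gamma=0.99):
    """Compute discounted cumulative sums over a reward sequence.

    out[t] = x[t] + gamma * x[t+1] + gamma^2 * x[t+2] + ...
    Iterating backwards lets each entry reuse the running sum.
    """
    out = [0.0] * len(x)
    running = 0.0
    for t in reversed(range(len(x))):
        running = x[t] + gamma * running
        out[t] = running
    return out
```

In the actual class the input and output are Tensors, so a vectorized implementation (or a backward scan over a tensor) would replace the explicit loop.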
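calculate_confusion_matrix returns a Dict[str, int], which suggests per-category counts rather than a matrix object. As an illustration only (the pairing of labels against expected labels exp_l in the real method is not documented), a binary tally keyed by the usual category names might look like:

```python
def calculate_confusion_matrix(labels, predictions):
    """Tally binary outcomes into a Dict[str, int].

    labels: ground-truth 0/1 values; predictions: predicted 0/1 values.
    The key names TP/FP/TN/FN are an assumption, not the module's documented keys.
    """
    counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
    for y, p in zip(labels, predictions):
        if p == 1:
            counts["TP" if y == 1 else "FP"] += 1
        else:
            counts["FN" if y == 1 else "TN"] += 1
    return counts
```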
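The step/reset/update_memory trio implies an episode loop: step consumes an observation and returns an action index (int), or None once the episode is over, at which point the collected transitions are flushed into memory. This is a plausible reading of the signatures, not the module's actual logic; the class below is a hypothetical sketch:

```python
class EpisodeLoopSketch:
    """Hypothetical illustration of the step/reset/update_memory cycle."""

    def __init__(self):
        self.episode = []   # transitions of the current episode
        self.memory = []    # long-term replay/trajectory storage

    def reset(self):
        """Clear per-episode state before a new episode."""
        self.episode.clear()

    def step(self, observation, labels, game_over):
        """Return an action index, or None when the episode has ended."""
        if game_over:
            self.update_memory()
            return None
        action = 0 if observation < 0 else 1  # placeholder policy, not the real one
        self.episode.append((observation, labels, action))
        return action

    def update_memory(self):
        """Move the finished episode into long-term memory."""
        self.memory.extend(self.episode)
        self.episode.clear()
```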