malib.rl.coma package

Submodules

malib.rl.coma.critic module

class malib.rl.coma.critic.COMADiscreteCritic(centralized_obs_space: Space, action_space: Space, net_type: Optional[str] = None, device: str = 'cpu', **kwargs)[source]

Bases: Module

A centralized critic for COMA over discrete action spaces.

forward(inputs: Union[Dict[str, Batch], Tensor]) Union[Tuple[Tensor, Any], Tensor][source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
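Conceptually, a discrete COMA critic maps a centralized observation to one Q-value per discrete action. A minimal numpy sketch of that forward computation (the function and weight names here are illustrative, not malib's API):

```python
import numpy as np

def coma_critic_forward(centralized_obs, w1, b1, w2, b2):
    """Illustrative two-layer MLP: centralized observation in,
    one Q-value per discrete action out.

    centralized_obs: (batch, obs_dim)
    returns: (batch, n_actions)
    """
    h = np.tanh(centralized_obs @ w1 + b1)  # hidden layer
    return h @ w2 + b2                      # per-action Q-values
```

The actual `COMADiscreteCritic.forward` additionally accepts a dict of `Batch` inputs and runs on the configured `device`; this sketch only shows the shape contract.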

malib.rl.coma.trainer module

class malib.rl.coma.trainer.COMATrainer(training_config: Dict[str, Any], critic_creator: Callable, policy_instance: Optional[Policy] = None)[source]

Bases: Trainer

Initialize a trainer for a type of policies.

Parameters:
  • learning_mode (str) – Learning mode indication, could be off_policy or on_policy.

  • training_config (Dict[str, Any]) – The training configuration.

  • critic_creator (Callable) – A callable that builds the centralized critic instance.

  • policy_instance (Policy, optional) – A policy instance; if None, one must be created via reset. Defaults to None.

create_joint_action(n_agents, batch_size, time_step, actions)[source]
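COMA's centralized critic conditions on the joint action of all agents, which for discrete actions is typically a concatenation of per-agent one-hot vectors. A self-contained sketch of that encoding (this is an assumption about the idea, not malib's `create_joint_action` implementation):

```python
import numpy as np

def joint_one_hot(actions, n_actions):
    """One-hot encode each agent's discrete action and concatenate
    across agents into a single joint-action vector per batch entry.

    actions: int array of shape (batch, n_agents)
    returns: float array of shape (batch, n_agents * n_actions)
    """
    batch, n_agents = actions.shape
    one_hot = np.eye(n_actions)[actions]  # (batch, n_agents, n_actions)
    return one_hot.reshape(batch, n_agents * n_actions)
```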
post_process(batch: Dict[str, Batch], agent_filter: Sequence[str]) Batch[source]

Stack batches agent-wise.

Parameters:
  • batch (Dict[str, Batch]) – A dict mapping agent IDs to their batches.

  • agent_filter (Sequence[AgentID]) – A sequence of agent IDs to include in the stacked result.

Returns:

The stacked batch.

Return type:

Batch
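Agent-wise stacking takes a dict of per-agent batches and stacks each field along a new leading agent axis. A minimal sketch with plain dicts and numpy arrays (the helper name is hypothetical; malib's `post_process` operates on `Batch` objects):

```python
import numpy as np

def stack_agent_batches(batches, agent_filter):
    """Select the agents in agent_filter and stack each field of
    their batches along a new leading agent dimension.

    batches: dict mapping agent ID -> dict of arrays
    returns: dict of arrays, each with shape (n_agents, ...)
    """
    keys = batches[agent_filter[0]].keys()
    return {key: np.stack([batches[agent][key] for agent in agent_filter])
            for key in keys}
```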

setup()[source]

Set up optimizers here.

train(batch: Batch) Dict[str, float][source]

Run a training step and return an info dict.

Parameters:

batch (Union[Dict[AgentID, Batch], Batch]) – A single batch or a dict of agent batches.

Returns:

A dict of training statistics.

Return type:

Dict[str, float]

train_critic(batch: Batch)[source]
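The critic trained here feeds COMA's defining quantity, the counterfactual advantage: each agent's action value minus a baseline that marginalizes that agent's action under its own policy, A(s, a) = Q(s, a) − Σ_a′ π(a′|s) Q(s, a′). A self-contained sketch of that computation (illustrative names, not malib's API):

```python
import numpy as np

def counterfactual_advantage(q_values, pi):
    """COMA counterfactual advantage for a single agent.

    q_values: (batch, n_actions) critic outputs Q(s, a)
    pi:       (batch, n_actions) the agent's policy probabilities
    returns:  (batch, n_actions) advantage of every action, i.e.
              Q(s, a) minus the policy-weighted baseline.
    """
    baseline = (pi * q_values).sum(axis=1, keepdims=True)
    return q_values - baseline
```

Because the baseline only marginalizes the acting agent's own action (other agents' actions stay fixed inside Q), it credits each agent for its individual contribution without changing the expected policy gradient.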