mmedit.engine.runner
Package Contents

Classes

EditTestLoop – Test loop for MMEditing models.
EditValLoop – Validation loop for MMEditing models.
EditLogProcessor – Log processor that inherits from mmengine.runner.LogProcessor.
- class mmedit.engine.runner.EditTestLoop(runner, dataloader, evaluator, fp16=False)[source]
Bases: mmengine.runner.base_loop.BaseLoop
Test loop for MMEditing models. This class supports evaluating:
Metrics (metric) on a single dataset (e.g. PSNR and SSIM on DIV2K dataset)
Different metrics on different datasets (e.g. PSNR on DIV2K, and SSIM and PSNR on Set5)
Use cases:
Case 1: metrics on a single dataset
>>> # add the following lines in your config
>>> # 1. use `EditTestLoop` instead of `TestLoop` in MMEngine
>>> test_cfg = dict(type='EditTestLoop')
>>> # 2. specify EditEvaluator instead of Evaluator in MMEngine
>>> test_evaluator = dict(
>>>     type='EditEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # 3. define dataloader
>>> test_dataloader = dict(...)
Case 2: different metrics on different datasets
>>> # add the following lines in your config
>>> # 1. use `EditTestLoop` instead of `TestLoop` in MMEngine
>>> test_cfg = dict(type='EditTestLoop')
>>> # 2. specify a list of EditEvaluator configs
>>> # do not forget to add a prefix for each metric group
>>> div2k_evaluator = dict(
>>>     type='EditEvaluator',
>>>     metrics=dict(type='SSIM', crop_border=2, prefix='DIV2K'))
>>> set5_evaluator = dict(
>>>     type='EditEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # define evaluator config
>>> test_evaluator = [div2k_evaluator, set5_evaluator]
>>> # 3. specify a dataloader for each metric group
>>> div2k_dataloader = dict(...)
>>> set5_dataloader = dict(...)
>>> # define dataloader config
>>> test_dataloader = [div2k_dataloader, set5_dataloader]
- Parameters
runner (Runner) – A reference of the runner.
dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.
evaluator (Evaluator or dict or list) – An evaluator object, a dict to build the evaluator, a list of evaluator objects, or a list of config dicts.
- property total_length: int
- _build_dataloaders(dataloader: DATALOADER_TYPE) → List[torch.utils.data.DataLoader]
Build dataloaders.
- Parameters
dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.
- Returns
List of dataloaders for computing metrics.
- Return type
List[Dataloader]
- _build_evaluators(evaluator: EVALUATOR_TYPE) → List[mmengine.evaluator.Evaluator]
Build evaluators.
- Parameters
evaluator (Evaluator or dict or list) – An evaluator object, a dict to build the evaluator, a list of evaluator objects, or a list of config dicts.
- Returns
List of evaluators for computing metrics.
- Return type
List[Evaluator]
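Both builder helpers accept a single object, a single config dict, or a list of either, and always return a list, so the single-dataset and multi-dataset cases share one code path. A minimal sketch of that normalize-to-list pattern (the helper name `normalize_to_list` is illustrative, not part of the API):

```python
from typing import Any, List, Union

def normalize_to_list(obj: Union[dict, list, Any]) -> List[Any]:
    """Wrap a single object or config dict in a list; pass lists through.

    Mirrors how _build_dataloaders/_build_evaluators accept either one
    dataloader/evaluator (or its config dict) or a list of them.
    """
    return list(obj) if isinstance(obj, list) else [obj]

# A single config dict becomes a one-element list.
print(normalize_to_list(dict(type='EditEvaluator')))
# A list of configs is kept as-is, one entry per dataset.
print(normalize_to_list([dict(prefix='DIV2K'), dict(prefix='Set5')]))
```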
- run()
Launch the test loop. The evaluation process consists of four steps:
1. Prepare pre-calculated items for all metrics by calling self.evaluator.prepare_metrics().
2. Get a list of metrics-sampler pairs. Each pair contains a list of metrics with the same sampler mode and a shared sampler.
3. Generate images for each metrics group: loop over the elements in each sampler and feed them to the model as input by calling self.run_iter().
4. Evaluate all metrics by calling self.evaluator.evaluate().
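The four steps above can be sketched as a toy loop. Everything below (`ToyMetric`, `toy_test_loop`, the identity-model lambda) is hypothetical stand-in code to illustrate the control flow, not the mmedit implementation:

```python
class ToyMetric:
    """Hypothetical metric that just collects per-batch outputs."""
    def __init__(self, name):
        self.name = name
        self.results = []
    def process(self, data_batch, outputs):
        self.results.append(outputs)
    def evaluate(self):
        return len(self.results)

def toy_test_loop(model, metrics_sampler_pairs):
    # Step 1: prepare pre-calculated items (a no-op in this toy sketch).
    # Steps 2-3: loop over each metrics group and its shared sampler,
    # feeding every batch to the model (the role of run_iter).
    for metrics, sampler in metrics_sampler_pairs:
        for idx, data_batch in enumerate(sampler):
            outputs = model(data_batch)
            for metric in metrics:
                metric.process(data_batch, outputs)
    # Step 4: evaluate all metrics and gather the results.
    return {m.name: m.evaluate()
            for metrics, _ in metrics_sampler_pairs for m in metrics}

psnr = ToyMetric('PSNR')
pairs = [([psnr], [{'img': 1}, {'img': 2}, {'img': 3}])]
print(toy_test_loop(lambda batch: batch, pairs))  # -> {'PSNR': 3}
```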
- run_iter(idx, data_batch: dict, metrics: Sequence[mmengine.evaluator.BaseMetric])
Iterate over one mini-batch and feed the output to the corresponding metrics.
- Parameters
idx (int) – Current index of the input data.
data_batch (dict) – Batch of data from the dataloader.
metrics (Sequence[BaseMetric]) – Specific metrics to evaluate.
- class mmedit.engine.runner.EditValLoop(runner, dataloader: DATALOADER_TYPE, evaluator: EVALUATOR_TYPE, fp16: bool = False)[source]
Bases: mmengine.runner.base_loop.BaseLoop
Validation loop for MMEditing models. This class supports evaluating:
Metrics (metric) on a single dataset (e.g. PSNR and SSIM on DIV2K dataset)
Different metrics on different datasets (e.g. PSNR on DIV2K, and SSIM and PSNR on Set5)
Use cases:
Case 1: metrics on a single dataset
>>> # add the following lines in your config
>>> # 1. use `EditValLoop` instead of `ValLoop` in MMEngine
>>> val_cfg = dict(type='EditValLoop')
>>> # 2. specify EditEvaluator instead of Evaluator in MMEngine
>>> val_evaluator = dict(
>>>     type='EditEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # 3. define dataloader
>>> val_dataloader = dict(...)
Case 2: different metrics on different datasets
>>> # add the following lines in your config
>>> # 1. use `EditValLoop` instead of `ValLoop` in MMEngine
>>> val_cfg = dict(type='EditValLoop')
>>> # 2. specify a list of EditEvaluator configs
>>> # do not forget to add a prefix for each metric group
>>> div2k_evaluator = dict(
>>>     type='EditEvaluator',
>>>     metrics=dict(type='SSIM', crop_border=2, prefix='DIV2K'))
>>> set5_evaluator = dict(
>>>     type='EditEvaluator',
>>>     metrics=[
>>>         dict(type='PSNR', crop_border=2, prefix='Set5'),
>>>         dict(type='SSIM', crop_border=2, prefix='Set5'),
>>>     ])
>>> # define evaluator config
>>> val_evaluator = [div2k_evaluator, set5_evaluator]
>>> # 3. specify a dataloader for each metric group
>>> div2k_dataloader = dict(...)
>>> set5_dataloader = dict(...)
>>> # define dataloader config
>>> val_dataloader = [div2k_dataloader, set5_dataloader]
- Parameters
runner (Runner) – A reference of the runner.
dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.
evaluator (Evaluator or dict or list) – An evaluator object, a dict to build the evaluator, a list of evaluator objects, or a list of config dicts.
- property total_length: int
- _build_dataloaders(dataloader: DATALOADER_TYPE) → List[torch.utils.data.DataLoader]
Build dataloaders.
- Parameters
dataloader (Dataloader or dict or list) – A dataloader object, a dict to build a dataloader, a list of dataloader objects, or a list of config dicts.
- Returns
List of dataloaders for computing metrics.
- Return type
List[Dataloader]
- _build_evaluators(evaluator: EVALUATOR_TYPE) → List[mmengine.evaluator.Evaluator]
Build evaluators.
- Parameters
evaluator (Evaluator or dict or list) – An evaluator object, a dict to build the evaluator, a list of evaluator objects, or a list of config dicts.
- Returns
List of evaluators for computing metrics.
- Return type
List[Evaluator]
- run()
Launch validation. The evaluation process consists of four steps:
1. Prepare pre-calculated items for all metrics by calling self.evaluator.prepare_metrics().
2. Get a list of metrics-sampler pairs. Each pair contains a list of metrics with the same sampler mode and a shared sampler.
3. Generate images for each metrics group: loop over the elements in each sampler and feed them to the model as input by calling self.run_iter().
4. Evaluate all metrics by calling self.evaluator.evaluate().
- run_iter(idx, data_batch: dict, metrics: Sequence[mmengine.evaluator.BaseMetric])
Iterate over one mini-batch and feed the output to the corresponding metrics.
- Parameters
idx (int) – Current index of the input data.
data_batch (dict) – Batch of data from the dataloader.
metrics (Sequence[BaseMetric]) – Specific metrics to evaluate.
- class mmedit.engine.runner.EditLogProcessor(window_size=10, by_epoch=True, custom_cfg: Optional[List[dict]] = None, num_digits: int = 4)[source]
Bases: mmengine.runner.LogProcessor
EditLogProcessor inherits from mmengine.runner.LogProcessor and overwrites self.get_log_after_iter(). This log processor should be used along with mmedit.engine.runner.EditValLoop and mmedit.engine.runner.EditTestLoop.
- get_log_after_iter(runner, batch_idx: int, mode: str) → Tuple[dict, str]
Format log string after a training, validation or testing iteration.
If mode is 'val' or 'test', we use runner.val_loop.total_length and runner.test_loop.total_length as the total number of iterations shown in the log. If you want to know how total_length is calculated, please refer to mmedit.engine.runner.EditValLoop.run() and mmedit.engine.runner.EditTestLoop.run().
- Parameters
runner (Runner) – The runner of the training phase.
batch_idx (int) – The index of the current batch in the current loop.
mode (str) – Current mode of the runner: train, test or val.
- Returns
Formatted log dict/string which will be recorded by runner.message_hub and runner.visualizer.
- Return type
Tuple(dict, str)
- get_log_after_epoch(runner, batch_idx: int, mode: str, with_non_scalar: bool = False) → Tuple[dict, str]
Format log string after a validation or testing epoch.
We use runner.val_loop.total_length and runner.test_loop.total_length as the total number of iterations shown in the log. If you want to know how total_length is calculated, please refer to mmedit.engine.runner.EditValLoop.run() and mmedit.engine.runner.EditTestLoop.run().
- Parameters
runner (Runner) – The runner of the validation/testing phase.
batch_idx (int) – The index of the current batch in the current loop.
mode (str) – Current mode of the runner.
with_non_scalar (bool) – Whether to include non-scalar infos in the returned tag. Defaults to False.
- Returns
Formatted log dict/string which will be recorded by runner.message_hub and runner.visualizer.
- Return type
Tuple(dict, str)
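In a config file, EditLogProcessor is typically enabled together with the edit loops, since get_log_after_iter/get_log_after_epoch read total_length from them. A sketch of the relevant config fragment (the values shown are the defaults from the __init__ signature above; only `type` is required):

```python
# Config fragment; the defaults below match the __init__ signature.
log_processor = dict(
    type='EditLogProcessor',  # replaces mmengine's default LogProcessor
    window_size=10,           # smoothing window for logged scalars
    by_epoch=True,            # report progress by epoch rather than by iter
    num_digits=4)             # number of digits for logged values

# Pair it with the loops that provide `total_length`.
val_cfg = dict(type='EditValLoop')
test_cfg = dict(type='EditTestLoop')
print(log_processor['type'], val_cfg['type'], test_cfg['type'])
```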