
More docs and command hook context tweaks (#1019).

Dan Helfman, 2 months ago
commit 9941d7dc57

borgmatic/commands/borgmatic.py (+4, -0)

@@ -102,6 +102,8 @@ def run_configuration(config_filename, config, config_paths, arguments):
         umask=config.get('umask'),
         dry_run=global_arguments.dry_run,
         action_names=arguments.keys(),
+        configuration_filename=config_filename,
+        log_file=arguments['global'].log_file,
     ):
         try:
             local_borg_version = borg_version.local_borg_version(config, local_path)
@@ -828,6 +830,7 @@ def collect_configuration_run_summary_logs(configs, config_paths, arguments):
                 config.get('umask'),
                 arguments['global'].dry_run,
                 configuration_filename=config_filename,
+                log_file=arguments['global'].log_file,
             )
     except (CalledProcessError, ValueError, OSError) as error:
         yield from log_error_records('Error running before everything hook', error)
@@ -880,6 +883,7 @@ def collect_configuration_run_summary_logs(configs, config_paths, arguments):
                 config.get('umask'),
                 arguments['global'].dry_run,
                 configuration_filename=config_filename,
+                log_file=arguments['global'].log_file,
             )
     except (CalledProcessError, ValueError, OSError) as error:
         yield from log_error_records('Error running after everything hook', error)
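
The practical effect of threading `configuration_filename` and `log_file` through these call sites is that both values now reach the command hook context. As a loose sketch of what that presumably enables at the configuration level, assuming `{log_file}` ends up exposed to command hooks as an interpolation variable (an inference from this context change, not something confirmed here):

```yaml
commands:
    - after: everything
      run:
          # Assumption: {log_file} interpolates to the path given to borgmatic's --log-file flag.
          - echo "All backups finished. Log written to {log_file}"
```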

docs/how-to/add-preparation-and-cleanup-steps-to-backups.md (+51, -46)

@@ -8,24 +8,23 @@ eleventyNavigation:
 ## Preparation and cleanup hooks
 
 If you find yourself performing preparation tasks before your backup runs or
-cleanup work afterwards, borgmatic command hooks may be of interest. These are
-custom shell commands you can configure borgmatic to execute at various points
-as it runs.
+doing cleanup work afterwards, borgmatic command hooks may be of interest. These
+are custom shell commands you can configure borgmatic to execute at various
+points as it runs.
 
-But if you're looking to backup a database, it's probably easier to use the
+(But if you're looking to backup a database, it's probably easier to use the
 [database backup
 feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
-instead.
+instead.)
 
-<span class="minilink minilink-addedin">New in version 1.9.14</span> You can
-configure command hooks via a list of `commands:` in your borgmatic
+<span class="minilink minilink-addedin">New in version 1.9.14</span> Command
+hooks are now configured via a list of `commands:` in your borgmatic
 configuration file. For example:
 
 ```yaml
 commands:
     - before: action
-      when:
-          - create
+      when: [create]
       run:
           - echo "Before create!"
     - after: action
@@ -36,27 +35,27 @@ commands:
           - echo "After create and/or prune!"
 ```
 
-If A `run:` command contains a special YAML character such as a colon, you may
+If a `run:` command contains a special YAML character such as a colon, you may
 need to quote the entire string (or use a [multiline
 string](https://yaml-multiline.info/)) to avoid an error:
 
 ```yaml
 commands:
     - before: action
-      when:
-          - create
+      when: [create]
       run:
-    - "echo Backup: start"
+          - "echo Backup: start"
 ```
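
Alternatively, a minimal sketch using a multiline string (a YAML folded block scalar) sidesteps the quoting entirely:

```yaml
commands:
    - before: action
      when: [create]
      run:
          # The folded block scalar (>) lets the colon appear without any quoting.
          - >
            echo Backup: start
```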
 
-Each command has the following options:
+Each command in the `commands:` list has the following options:
 
- * `before` or `after`: Name for the point in borgmatic's execution that the commands should be run before/after, one of:
-    * `action` runs before each action for each repository. (Replaces the deprecated `before_create`, `after_prune`, etc.)
-    * `repository` runs before/after all actions for each repository. (Replaces the deprecated `before_actions`/`after_actions`.)
-    * `configuration` runs before/after all actions and repositories in the current configuration file.
-    * `everything` runs before/after all configuration files. (Replaces the deprecated `before_everything`/`after_everything`.)
- * `when`: List of actions for which the commands will be run. Defaults to running for all actions.
+ * `before` or `after`: Name for the point in borgmatic's execution that the commands should be run before or after, one of:
+    * `action` runs before each action for each repository. This replaces the deprecated `before_create`, `after_prune`, etc.
+    * `repository` runs before or after all actions for each repository. This replaces the deprecated `before_actions` and `after_actions`.
+    * `configuration` runs before or after all actions and repositories in the current configuration file.
+    * `everything` runs before or after all configuration files. This replaces the deprecated `before_everything` and `after_everything`.
+    * `error` runs after an error occurs—and it's only available for `after`. This replaces the deprecated `on_error` hook.
+ * `when`: Actions (`create`, `prune`, etc.) for which the commands will be run. Defaults to running for all actions.
  * `run`: List of one or more shell commands or scripts to run when this command hook is triggered.
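
As a concrete illustration of the `error` point listed above, here's a minimal sketch of a hook that runs only when something goes wrong; the echo is just a placeholder for whatever notification command you'd actually run:

```yaml
commands:
    - after: error
      run:
          - echo "A borgmatic error occurred!"
```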
 
 There's also another command hook that works a little differently:
@@ -64,8 +63,7 @@ There's also another command hook that works a little differently:
 ```yaml
 commands:
     - before: dump_data_sources
-      hooks:
-          - postgresql
+      hooks: [postgresql]
       run:
           - echo "Right before the PostgreSQL database dump!"
 ```
@@ -73,8 +71,8 @@ commands:
 This command hook has the following options:
 
  * `before` or `after`: `dump_data_sources`
- * `hooks`: List of names of other hooks that this command hook applies to. Defaults to all hooks of the relevant type.
- * `run`: List of one or more shell commands or scripts to run when this command hook is triggered.
+ * `hooks`: Names of other hooks that this command hook applies to, e.g. `postgresql`, `mariadb`, `zfs`, `btrfs`, etc. Defaults to all hooks of the relevant type.
+ * `run`: One or more shell commands or scripts to run when this command hook is triggered.
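
A corresponding `after` variant scoped to particular data source hooks is a short sketch away; the hook names here just follow the examples given above:

```yaml
commands:
    - after: dump_data_sources
      hooks: [postgresql, mariadb]
      run:
          - echo "Done dumping the SQL databases!"
```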
 
 
 ### Order of execution
@@ -86,20 +84,23 @@ Let's say you've got a borgmatic configuration file with a configured
 repository. And suppose you configure several command hooks and then run
 borgmatic for the `create` and `prune` actions. Here's the order of execution:
 
- * Trigger `before: everything` (from all configuration files).
-    * Trigger `before: configuration` (from the first configuration file).
-        * Trigger `before: repository` (for the first repository).
-            * Trigger `before: action` for `create`.
-            * Run the `create` action.
-            * Trigger `after: action` for `create`.
-            * Trigger `before: action` for `prune`.
-            * Run the `prune` action.
-            * Trigger `after: action` for `prune`.
-        * Trigger `after: repository` (for the first repository).
-    * Trigger `after: configuration` (from the first configuration file).
- * Trigger `after: everything` (from all configuration files).
-
-You can imagine how this would be extended to multiple repositories and/or
+ * Run `before: everything` hooks (from all configuration files).
+    * Run `before: configuration` hooks (from the first configuration file).
+        * Run `before: repository` hooks (for the first repository).
+            * Run `before: action` hooks for `create`.
+                * Run `before: dump_data_sources` hooks (e.g. for the PostgreSQL hook).
+                * Actually dump data sources (e.g. PostgreSQL databases).
+                * Run `after: dump_data_sources` hooks (e.g. for the PostgreSQL hook).
+            * Actually run the `create` action.
+            * Run `after: action` hooks for `create`.
+            * Run `before: action` hooks for `prune`.
+            * Actually run the `prune` action.
+            * Run `after: action` hooks for `prune`.
+        * Run `after: repository` hooks (for the first repository).
+    * Run `after: configuration` hooks (from the first configuration file).
+ * Run `after: everything` hooks (from all configuration files).
+
+This same order of execution extends to multiple repositories and/or
 configuration files.
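
To make that ordering tangible, here's a minimal configuration sketch (with placeholder echo commands) whose messages would print in the nested order listed above when borgmatic runs the `create` and `prune` actions:

```yaml
commands:
    - before: everything
      run:
          - echo "Starting everything!"
    - before: repository
      run:
          - echo "About to work on a repository."
    - after: action
      when: [create, prune]
      run:
          - echo "An action just finished."
    - after: everything
      run:
          - echo "All done!"
```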
 
 
@@ -151,18 +152,18 @@ rather than once per repository.)
 
 ## Variable interpolation
 
-The before and after action hooks support interpolating particular runtime
-variables into the hook command. Here's an example that assumes you provide a
-separate shell script:
+The command action hooks support interpolating particular runtime variables into
+the commands that are run. Here's an example that assumes you provide a separate
+shell script:
 
 ```yaml
-after_prune:
-    - record-prune.sh "{configuration_filename}" "{repository}"
+commands:
+    - after: action
+      when: [prune]
+      run:
+          - record-prune.sh "{configuration_filename}" "{repository}"
 ```
 
-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-this option in the `hooks:` section of your configuration.
-
 In this example, when the hook is triggered, borgmatic interpolates runtime
 values into the hook command: the borgmatic configuration filename and the
 paths of the current Borg repository. Here's the full set of supported
@@ -179,6 +180,10 @@ variables you can use here:
    1.8.12</span>: label of the current repository as configured in the current
    borgmatic configuration file
 
+Not all command hooks support all variables. For instance, the `everything` and
+`configuration` hooks don't support repository variables because those hooks
+don't run in the context of a single repository.
+
 Note that you can also interpolate in [arbitrary environment
 variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
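
Putting the variables shown above together, here's a hedged sketch of an `after: action` hook that records which repository and configuration file were involved, without needing a separate script:

```yaml
commands:
    - after: action
      when: [prune]
      run:
          - echo "Finished working on {repository} as configured in {configuration_filename}"
```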
 

tests/unit/commands/test_borgmatic.py (+110, -35)

@@ -38,7 +38,7 @@ def test_run_configuration_runs_actions_for_each_repository():
         expected_results[1:]
     )
     config = {'repositories': [{'path': 'foo'}, {'path': 'bar'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False)}
+    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock())}
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -53,7 +53,7 @@ def test_run_configuration_with_skip_actions_does_not_raise():
     flexmock(module).should_receive('Log_prefix').and_return(flexmock())
     flexmock(module).should_receive('run_actions').and_return(flexmock()).and_return(flexmock())
     config = {'repositories': [{'path': 'foo'}, {'path': 'bar'}], 'skip_actions': ['compact']}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False)}
+    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock())}
 
     list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -67,7 +67,10 @@ def test_run_configuration_with_invalid_borg_version_errors():
     flexmock(module).should_receive('Log_prefix').and_return(flexmock())
     flexmock(module).should_receive('run_actions').never()
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'prune': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'prune': flexmock(),
+    }
 
     list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -87,7 +90,10 @@ def test_run_configuration_logs_monitor_start_error():
     flexmock(module.command).should_receive('filter_hooks')
     flexmock(module.command).should_receive('execute_hooks')
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -105,7 +111,10 @@ def test_run_configuration_bails_for_monitor_start_soft_failure():
     flexmock(module).should_receive('Log_prefix').and_return(flexmock())
     flexmock(module).should_receive('run_actions').never()
     config = {'repositories': [{'path': 'foo'}, {'path': 'bar'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -125,7 +134,7 @@ def test_run_configuration_logs_actions_error():
     flexmock(module.command).should_receive('filter_hooks')
     flexmock(module.command).should_receive('execute_hooks')
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False)}
+    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock())}
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -145,7 +154,10 @@ def test_run_configuration_skips_remaining_actions_for_actions_soft_failure_but_
     flexmock(module).should_receive('log_error_records').never()
     flexmock(module.command).should_receive('considered_soft_failure').and_return(True)
     config = {'repositories': [{'path': 'foo'}, {'path': 'bar'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -167,7 +179,10 @@ def test_run_configuration_logs_monitor_log_error():
     flexmock(module.command).should_receive('filter_hooks')
     flexmock(module.command).should_receive('execute_hooks')
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -188,7 +203,10 @@ def test_run_configuration_still_pings_monitor_for_monitor_log_soft_failure():
     flexmock(module).should_receive('run_actions').and_return([])
     flexmock(module.command).should_receive('considered_soft_failure').and_return(True)
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -210,7 +228,10 @@ def test_run_configuration_logs_monitor_finish_error():
     flexmock(module.command).should_receive('filter_hooks')
     flexmock(module.command).should_receive('execute_hooks')
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -231,7 +252,10 @@ def test_run_configuration_bails_for_monitor_finish_soft_failure():
     flexmock(module).should_receive('run_actions').and_return([])
     flexmock(module.command).should_receive('considered_soft_failure').and_return(True)
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -249,7 +273,10 @@ def test_run_configuration_does_not_call_monitoring_hooks_if_monitoring_hooks_ar
     flexmock(module).should_receive('run_actions').and_return([])
 
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=-2, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=-2, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
     assert results == []
 
@@ -268,7 +295,10 @@ def test_run_configuration_logs_on_error_hook_error():
     flexmock(module).should_receive('Log_prefix').and_return(flexmock())
     flexmock(module).should_receive('run_actions').and_raise(OSError)
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -288,7 +318,10 @@ def test_run_configuration_bails_for_on_error_hook_soft_failure():
     flexmock(module).should_receive('Log_prefix').and_return(flexmock())
     flexmock(module).should_receive('run_actions').and_raise(OSError)
     config = {'repositories': [{'path': 'foo'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -307,7 +340,10 @@ def test_run_configuration_retries_soft_error():
     flexmock(module.command).should_receive('filter_hooks').never()
     flexmock(module.command).should_receive('execute_hooks').never()
     config = {'repositories': [{'path': 'foo'}], 'retries': 1}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -336,7 +372,10 @@ def test_run_configuration_retries_hard_error():
     flexmock(module.command).should_receive('filter_hooks')
     flexmock(module.command).should_receive('execute_hooks')
     config = {'repositories': [{'path': 'foo'}], 'retries': 1}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -360,7 +399,10 @@ def test_run_configuration_retries_repositories_in_order():
     flexmock(module.command).should_receive('filter_hooks')
     flexmock(module.command).should_receive('execute_hooks')
     config = {'repositories': [{'path': 'foo'}, {'path': 'bar'}]}
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -400,7 +442,10 @@ def test_run_configuration_retries_round_robin():
         'repositories': [{'path': 'foo'}, {'path': 'bar'}],
         'retries': 1,
     }
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -438,7 +483,10 @@ def test_run_configuration_with_one_retry():
         'repositories': [{'path': 'foo'}, {'path': 'bar'}],
         'retries': 1,
     }
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -487,7 +535,10 @@ def test_run_configuration_with_retry_wait_does_backoff_after_each_retry():
         'retries': 3,
         'retry_wait': 10,
     }
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -532,7 +583,10 @@ def test_run_configuration_with_multiple_repositories_retries_with_timeout():
         'retries': 1,
         'retry_wait': 10,
     }
-    arguments = {'global': flexmock(monitoring_verbosity=1, dry_run=False), 'create': flexmock()}
+    arguments = {
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+        'create': flexmock(),
+    }
 
     results = list(module.run_configuration('test.yaml', config, ['/tmp/test.yaml'], arguments))
 
@@ -1441,7 +1495,7 @@ def test_collect_configuration_run_summary_logs_info_for_success():
     flexmock(module.command).should_receive('execute_hooks')
     flexmock(module).should_receive('Log_prefix').and_return(flexmock())
     flexmock(module).should_receive('run_configuration').and_return([])
-    arguments = {'global': flexmock(dry_run=False)}
+    arguments = {'global': flexmock(dry_run=False, log_file=flexmock())}
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1458,7 +1512,10 @@ def test_collect_configuration_run_summary_executes_hooks_for_create():
     flexmock(module.command).should_receive('execute_hooks')
     flexmock(module).should_receive('Log_prefix').and_return(flexmock())
     flexmock(module).should_receive('run_configuration').and_return([])
-    arguments = {'create': flexmock(), 'global': flexmock(monitoring_verbosity=1, dry_run=False)}
+    arguments = {
+        'create': flexmock(),
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+    }
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1475,7 +1532,10 @@ def test_collect_configuration_run_summary_logs_info_for_success_with_extract():
     flexmock(module.command).should_receive('execute_hooks')
     flexmock(module).should_receive('Log_prefix').and_return(flexmock())
     flexmock(module).should_receive('run_configuration').and_return([])
-    arguments = {'extract': flexmock(repository='repo'), 'global': flexmock(dry_run=False)}
+    arguments = {
+        'extract': flexmock(repository='repo'),
+        'global': flexmock(dry_run=False, log_file=flexmock()),
+    }
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1492,7 +1552,7 @@ def test_collect_configuration_run_summary_logs_extract_with_repository_error():
     )
     expected_logs = (flexmock(),)
     flexmock(module).should_receive('log_error_records').and_return(expected_logs)
-    arguments = {'extract': flexmock(repository='repo')}
+    arguments = {'extract': flexmock(repository='repo', log_file=flexmock())}
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1509,7 +1569,10 @@ def test_collect_configuration_run_summary_logs_info_for_success_with_mount():
     flexmock(module.command).should_receive('execute_hooks')
     flexmock(module).should_receive('Log_prefix').and_return(flexmock())
     flexmock(module).should_receive('run_configuration').and_return([])
-    arguments = {'mount': flexmock(repository='repo'), 'global': flexmock(dry_run=False)}
+    arguments = {
+        'mount': flexmock(repository='repo'),
+        'global': flexmock(dry_run=False, log_file=flexmock()),
+    }
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1526,7 +1589,10 @@ def test_collect_configuration_run_summary_logs_mount_with_repository_error():
     )
     expected_logs = (flexmock(),)
     flexmock(module).should_receive('log_error_records').and_return(expected_logs)
-    arguments = {'mount': flexmock(repository='repo'), 'global': flexmock(dry_run=False)}
+    arguments = {
+        'mount': flexmock(repository='repo'),
+        'global': flexmock(dry_run=False, log_file=flexmock()),
+    }
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1541,7 +1607,7 @@ def test_collect_configuration_run_summary_logs_missing_configs_error():
     flexmock(module.validate).should_receive('guard_configuration_contains_repository')
     flexmock(module.command).should_receive('filter_hooks')
     flexmock(module.command).should_receive('execute_hooks')
-    arguments = {'global': flexmock(config_paths=[])}
+    arguments = {'global': flexmock(config_paths=[], log_file=flexmock())}
     expected_logs = (flexmock(),)
     flexmock(module).should_receive('log_error_records').and_return(expected_logs)
 
@@ -1558,7 +1624,10 @@ def test_collect_configuration_run_summary_logs_pre_hook_error():
     flexmock(module.command).should_receive('execute_hooks').and_raise(ValueError)
     expected_logs = (flexmock(),)
     flexmock(module).should_receive('log_error_records').and_return(expected_logs)
-    arguments = {'create': flexmock(), 'global': flexmock(monitoring_verbosity=1, dry_run=False)}
+    arguments = {
+        'create': flexmock(),
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+    }
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1577,7 +1646,10 @@ def test_collect_configuration_run_summary_logs_post_hook_error():
     flexmock(module).should_receive('run_configuration').and_return([])
     expected_logs = (flexmock(),)
     flexmock(module).should_receive('log_error_records').and_return(expected_logs)
-    arguments = {'create': flexmock(), 'global': flexmock(monitoring_verbosity=1, dry_run=False)}
+    arguments = {
+        'create': flexmock(),
+        'global': flexmock(monitoring_verbosity=1, dry_run=False, log_file=flexmock()),
+    }
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1596,7 +1668,7 @@ def test_collect_configuration_run_summary_logs_for_list_with_archive_and_reposi
     flexmock(module).should_receive('log_error_records').and_return(expected_logs)
     arguments = {
         'list': flexmock(repository='repo', archive='test'),
-        'global': flexmock(dry_run=False),
+        'global': flexmock(dry_run=False, log_file=flexmock()),
     }
 
     logs = tuple(
@@ -1616,7 +1688,7 @@ def test_collect_configuration_run_summary_logs_info_for_success_with_list():
     flexmock(module).should_receive('run_configuration').and_return([])
     arguments = {
         'list': flexmock(repository='repo', archive=None),
-        'global': flexmock(dry_run=False),
+        'global': flexmock(dry_run=False, log_file=flexmock()),
     }
 
     logs = tuple(
@@ -1637,7 +1709,7 @@ def test_collect_configuration_run_summary_logs_run_configuration_error():
         [logging.makeLogRecord(dict(levelno=logging.CRITICAL, levelname='CRITICAL', msg='Error'))]
     )
     flexmock(module).should_receive('log_error_records').and_return([])
-    arguments = {'global': flexmock(dry_run=False)}
+    arguments = {'global': flexmock(dry_run=False, log_file=flexmock())}
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1658,7 +1730,10 @@ def test_collect_configuration_run_summary_logs_run_umount_error():
     flexmock(module).should_receive('log_error_records').and_return(
         [logging.makeLogRecord(dict(levelno=logging.CRITICAL, levelname='CRITICAL', msg='Error'))]
     )
-    arguments = {'umount': flexmock(mount_point='/mnt'), 'global': flexmock(dry_run=False)}
+    arguments = {
+        'umount': flexmock(mount_point='/mnt'),
+        'global': flexmock(dry_run=False, log_file=flexmock()),
+    }
 
     logs = tuple(
         module.collect_configuration_run_summary_logs(
@@ -1680,7 +1755,7 @@ def test_collect_configuration_run_summary_logs_outputs_merged_json_results():
     stdout = flexmock()
     stdout.should_receive('write').with_args('["foo", "bar", "baz"]').once()
     flexmock(module.sys).stdout = stdout
-    arguments = {'global': flexmock(dry_run=False)}
+    arguments = {'global': flexmock(dry_run=False, log_file=flexmock())}
 
     tuple(
         module.collect_configuration_run_summary_logs(