Browse source

Merge branch 'main' into logging-verbosity-config.

Dan Helfman 2 months ago
parent
commit
c6ce9c70ab
100 changed files with 4456 additions and 1495 deletions
  1. NEWS (+94 -1)
  2. README.md (+15 -1)
  3. borgmatic/actions/change_passphrase.py (+1 -1)
  4. borgmatic/actions/check.py (+12 -28)
  5. borgmatic/actions/compact.py (+0 -20)
  6. borgmatic/actions/config/bootstrap.py (+3 -2)
  7. borgmatic/actions/create.py (+29 -25)
  8. borgmatic/actions/export_tar.py (+0 -1)
  9. borgmatic/actions/extract.py (+0 -18)
  10. borgmatic/actions/import_key.py (+33 -0)
  11. borgmatic/actions/prune.py (+0 -17)
  12. borgmatic/actions/recreate.py (+53 -0)
  13. borgmatic/actions/repo_create.py (+24 -4)
  14. borgmatic/actions/transfer.py (+6 -0)
  15. borgmatic/borg/borg.py (+1 -1)
  16. borgmatic/borg/break_lock.py (+1 -1)
  17. borgmatic/borg/change_passphrase.py (+1 -1)
  18. borgmatic/borg/check.py (+4 -4)
  19. borgmatic/borg/compact.py (+3 -4)
  20. borgmatic/borg/create.py (+32 -26)
  21. borgmatic/borg/delete.py (+12 -4)
  22. borgmatic/borg/environment.py (+35 -9)
  23. borgmatic/borg/export_key.py (+1 -1)
  24. borgmatic/borg/export_tar.py (+3 -4)
  25. borgmatic/borg/extract.py (+9 -12)
  26. borgmatic/borg/feature.py (+2 -0)
  27. borgmatic/borg/flags.py (+41 -0)
  28. borgmatic/borg/import_key.py (+70 -0)
  29. borgmatic/borg/info.py (+3 -5)
  30. borgmatic/borg/list.py (+3 -3)
  31. borgmatic/borg/mount.py (+2 -2)
  32. borgmatic/borg/passcommand.py (+3 -13)
  33. borgmatic/borg/pattern.py (+20 -1)
  34. borgmatic/borg/prune.py (+12 -6)
  35. borgmatic/borg/recreate.py (+103 -0)
  36. borgmatic/borg/repo_create.py (+3 -3)
  37. borgmatic/borg/repo_delete.py (+3 -3)
  38. borgmatic/borg/repo_info.py (+2 -2)
  39. borgmatic/borg/repo_list.py (+4 -4)
  40. borgmatic/borg/transfer.py (+12 -7)
  41. borgmatic/borg/version.py (+1 -1)
  42. borgmatic/commands/arguments.py (+389 -57)
  43. borgmatic/commands/borgmatic.py (+486 -409)
  44. borgmatic/commands/completion/bash.py (+12 -2)
  45. borgmatic/commands/completion/fish.py (+15 -8)
  46. borgmatic/commands/completion/flag.py (+13 -0)
  47. borgmatic/config/arguments.py (+176 -0)
  48. borgmatic/config/generate.py (+57 -54)
  49. borgmatic/config/load.py (+1 -1)
  50. borgmatic/config/normalize.py (+90 -1)
  51. borgmatic/config/override.py (+8 -0)
  52. borgmatic/config/paths.py (+1 -1)
  53. borgmatic/config/schema.py (+72 -0)
  54. borgmatic/config/schema.yaml (+457 -91)
  55. borgmatic/config/validate.py (+33 -8)
  56. borgmatic/execute.py (+56 -48)
  57. borgmatic/hooks/command.py (+175 -41)
  58. borgmatic/hooks/credential/container.py (+43 -0)
  59. borgmatic/hooks/credential/file.py (+32 -0)
  60. borgmatic/hooks/credential/keepassxc.py (+45 -0)
  61. borgmatic/hooks/credential/parse.py (+124 -0)
  62. borgmatic/hooks/credential/systemd.py (+43 -0)
  63. borgmatic/hooks/data_source/bootstrap.py (+10 -2)
  64. borgmatic/hooks/data_source/btrfs.py (+65 -7)
  65. borgmatic/hooks/data_source/lvm.py (+39 -9)
  66. borgmatic/hooks/data_source/mariadb.py (+151 -31)
  67. borgmatic/hooks/data_source/mongodb.py (+68 -16)
  68. borgmatic/hooks/data_source/mysql.py (+82 -30)
  69. borgmatic/hooks/data_source/postgresql.py (+59 -35)
  70. borgmatic/hooks/data_source/snapshot.py (+16 -6)
  71. borgmatic/hooks/data_source/sqlite.py (+11 -7)
  72. borgmatic/hooks/data_source/zfs.py (+60 -13)
  73. borgmatic/hooks/dispatch.py (+1 -0)
  74. borgmatic/hooks/monitoring/cronhub.py (+1 -1)
  75. borgmatic/hooks/monitoring/cronitor.py (+1 -1)
  76. borgmatic/hooks/monitoring/logs.py (+1 -1)
  77. borgmatic/hooks/monitoring/ntfy.py (+16 -3)
  78. borgmatic/hooks/monitoring/pagerduty.py (+41 -11)
  79. borgmatic/hooks/monitoring/pushover.py (+10 -2)
  80. borgmatic/hooks/monitoring/uptime_kuma.py (+1 -1)
  81. borgmatic/hooks/monitoring/zabbix.py (+83 -32)
  82. borgmatic/logger.py (+16 -9)
  83. borgmatic/signals.py (+3 -0)
  84. docs/Dockerfile (+1 -1)
  85. docs/_includes/index.css (+1 -0)
  86. docs/_includes/layouts/base.njk (+1 -1)
  87. docs/fetch-contributors (+2 -3)
  88. docs/how-to/add-preparation-and-cleanup-steps-to-backups.md (+207 -51)
  89. docs/how-to/backup-to-a-removable-drive-or-an-intermittent-server.md (+47 -46)
  90. docs/how-to/backup-your-databases.md (+35 -17)
  91. docs/how-to/make-per-application-backups.md (+77 -8)
  92. docs/how-to/monitor-your-backups.md (+86 -139)
  93. docs/how-to/provide-your-passwords.md (+230 -35)
  94. docs/how-to/set-up-backups.md (+14 -1)
  95. docs/how-to/snapshot-your-filesystems.md (+36 -20)
  96. docs/static/docker.png (binary)
  97. docs/static/keepassxc.png (binary)
  98. docs/static/podman.png (binary)
  99. docs/static/pushover.png (binary)
  100. docs/static/systemd.png (binary)

+ 94 - 1
NEWS

@@ -1,4 +1,95 @@
-1.9.10.dev0
+2.0.0.dev0
+ * TL;DR: More flexible, completely revamped command hooks. All config options settable on the
+   command-line. Config option defaults for many command-line flags. New "key import" and "recreate"
+   actions. Almost everything is backwards compatible.
+ * #262: Add a "default_actions" option that supports disabling default actions when borgmatic is
+   run without any command-line arguments.
+ * #303: Deprecate the "--override" flag in favor of direct command-line flags for every borgmatic
+   configuration option. See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides
+ * #303: Add configuration options that serve as defaults for some (but not all) command-line
+   action flags. For example, each entry in "repositories:" now has an "encryption" option that
+   applies to the "repo-create" action, serving as a default for the "--encryption" flag. See the
+   documentation for more information: https://torsion.org/borgmatic/docs/reference/configuration/
+ * #345: Add a "key import" action to import a repository key from backup.
+ * #422: Add home directory expansion to file-based and KeePassXC credential hooks.
+ * #610: Add a "recreate" action for recreating archives, for instance for retroactively excluding
+   particular files from existing archives.
+ * #790, #821: Deprecate all "before_*", "after_*" and "on_error" command hooks in favor of more
+   flexible "commands:". See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/
+ * #790: BREAKING: For both new and deprecated command hooks, run a configured "after" hook even if
+   an error occurs first. This allows you to perform cleanup steps that correspond to "before"
+   preparation commands—even when something goes wrong.
+ * #790: BREAKING: Run all command hooks (both new and deprecated) respecting the
+   "working_directory" option if configured, meaning that hook commands are run in that directory.
+ * #836: Add a custom command option for the SQLite hook.
+ * #837: Add custom command options for the MongoDB hook.
+ * #1010: When using Borg 2, don't pass the "--stats" flag to "borg prune".
+ * #1020: Document a database use case involving a temporary database client container:
+   https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#containers
+ * #1037: Fix an error with the "extract" action when both a remote repository and a
+   "working_directory" are used.
+ * #1044: Fix an error in the systemd credential hook when the credential name contains a "."
+   character.
+ * #1047: Add "key-file" and "yubikey" options to the KeePassXC credential hook.
+ * #1048: Fix a "no such file or directory" error in ZFS, Btrfs, and LVM hooks with nested
+   directories that reside on separate devices/filesystems.
+ * #1050: Fix a failure in the "spot" check when the archive contains a symlink.
+ * #1051: Add configuration filename to the "Successfully ran configuration file" log message.
+
+1.9.14
+ * #409: With the PagerDuty monitoring hook, send borgmatic logs to PagerDuty so they show up in the
+   incident UI. See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook
+ * #936: Clarify Zabbix monitoring hook documentation about creating items:
+   https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#zabbix-hook
+ * #1017: Fix a regression in which some MariaDB/MySQL passwords were not escaped correctly.
+ * #1021: Fix a regression in which the "exclude_patterns" option didn't expand "~" (the user's
+   home directory). This fix means that all "patterns" and "patterns_from" also now expand "~".
+ * #1023: Fix an error in the Btrfs hook when attempting to snapshot a read-only subvolume. Now,
+   read-only subvolumes are ignored since Btrfs can't actually snapshot them.
+
+1.9.13
+ * #975: Add a "compression" option to the PostgreSQL database hook.
+ * #1001: Fix a ZFS error during snapshot cleanup.
+ * #1003: In the Zabbix monitoring hook, support Zabbix 7.2's authentication changes.
+ * #1009: Send database passwords to MariaDB and MySQL via anonymous pipe, which is more secure than
+   using an environment variable.
+ * #1013: Send database passwords to MongoDB via anonymous pipe, which is more secure than using
+   "--password" on the command-line!
+ * #1015: When ctrl-C is pressed, more strongly encourage Borg to actually exit.
+ * Add a "verify_tls" option to the Uptime Kuma monitoring hook for disabling TLS verification.
+ * Add "tls" options to the MariaDB and MySQL database hooks to enable or disable TLS encryption
+   between client and server.
+
+1.9.12
+ * #1005: Fix the credential hooks to avoid using Python 3.12+ string features. Now borgmatic will
+   work with Python 3.9, 3.10, and 3.11 again.
+
+1.9.11
+ * #795: Add credential loading from file, KeePassXC, and Docker/Podman secrets. See the
+   documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
+ * #996: Fix the "create" action to omit the repository label prefix from Borg's output when
+   databases are enabled.
+ * #998: Send the "encryption_passphrase" option to Borg via an anonymous pipe, which is more secure
+   than using an environment variable.
+ * #999: Fix a runtime directory error from a conflict between "extra_borg_options" and special file
+   detection.
+ * #1001: For the ZFS, Btrfs, and LVM hooks, only make snapshots for root patterns that come from
+   a borgmatic configuration option (e.g. "source_directories")—not from other hooks within
+   borgmatic.
+ * #1001: Fix a ZFS/LVM error due to colliding snapshot mount points for nested datasets or logical
+   volumes.
+ * #1001: Don't try to snapshot ZFS datasets that have the "canmount=off" property.
+ * Fix another error in the Btrfs hook when a subvolume mounted at "/" is configured in borgmatic's
+   source directories.
+
+1.9.10
+ * #966: Add a "{credential ...}" syntax for loading systemd credentials into borgmatic
+   configuration files. See the documentation for more information:
+   https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/
  * #987: Fix a "list" action error when the "encryption_passcommand" option is set.
  * #987: When both "encryption_passcommand" and "encryption_passphrase" are configured, prefer
    "encryption_passphrase" even if it's an empty value.
@@ -7,6 +98,8 @@
    refused to run checks in this situation.
  * #989: Fix the log message code to avoid using Python 3.10+ logging features. Now borgmatic will
    work with Python 3.9 again.
+ * Capture and delay any log records produced before logging is fully configured, so early log
+   records don't get lost.
  * Add support for Python 3.13.
 
 1.9.9
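Several NEWS entries above (#998, #1009, #1013) replace environment-variable or command-line password passing with an anonymous pipe. Here is a minimal, POSIX-only sketch of that general technique — not borgmatic's actual implementation; the child command and the convention of passing the file descriptor number as an argument are assumptions for illustration:

```python
import os
import subprocess
import sys

def run_with_piped_secret(command, secret):
    # Create an anonymous pipe; the child reads the secret from the read
    # end, so it never appears in the environment or in `ps` output.
    read_fd, write_fd = os.pipe()
    os.write(write_fd, secret.encode())
    os.close(write_fd)

    # pass_fds keeps read_fd open (and at the same number) in the child.
    result = subprocess.run(
        command + [str(read_fd)],
        pass_fds=(read_fd,),
        capture_output=True,
        text=True,
    )
    os.close(read_fd)
    return result.stdout

# Stand-in child: it just reads the secret back from the inherited
# descriptor whose number was passed as its argument.
output = run_with_piped_secret(
    [sys.executable, '-c',
     'import os, sys; print(os.read(int(sys.argv[1]), 1024).decode())'],
    'hunter2',
)
```

A real consumer (e.g. a database client reading a passphrase) would be told which descriptor to read via whatever mechanism it supports; the pipe-buffer approach shown works for small secrets written before the child starts.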

+ 15 - 1
README.md

@@ -56,6 +56,8 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
 
 ## Integrations
 
+### Data
+
 <a href="https://www.postgresql.org/"><img src="docs/static/postgresql.png" alt="PostgreSQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
 <a href="https://www.mysql.com/"><img src="docs/static/mysql.png" alt="MySQL" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
 <a href="https://mariadb.com/"><img src="docs/static/mariadb.png" alt="MariaDB" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
@@ -65,6 +67,11 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
 <a href="https://btrfs.readthedocs.io/"><img src="docs/static/btrfs.png" alt="Btrfs" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
 <a href="https://sourceware.org/lvm2/"><img src="docs/static/lvm.png" alt="LVM" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
 <a href="https://rclone.org"><img src="docs/static/rclone.png" alt="rclone" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
+<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
+
+
+### Monitoring
+
 <a href="https://healthchecks.io/"><img src="docs/static/healthchecks.png" alt="Healthchecks" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
 <a href="https://uptime.kuma.pet/"><img src="docs/static/uptimekuma.png" alt="Uptime Kuma" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
 <a href="https://cronitor.io/"><img src="docs/static/cronitor.png" alt="Cronitor" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
@@ -76,7 +83,14 @@ borgmatic is powered by [Borg Backup](https://www.borgbackup.org/).
 <a href="https://github.com/caronc/apprise/wiki"><img src="docs/static/apprise.png" alt="Apprise" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
 <a href="https://www.zabbix.com/"><img src="docs/static/zabbix.png" alt="Zabbix" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
 <a href="https://sentry.io/"><img src="docs/static/sentry.png" alt="Sentry" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
-<a href="https://www.borgbase.com/?utm_source=borgmatic"><img src="docs/static/borgbase.png" alt="BorgBase" height="60px" style="margin-bottom:20px; margin-right:20px;"></a>
+
+
+### Credentials
+
+<a href="https://systemd.io/"><img src="docs/static/systemd.png" alt="Sentry" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
+<a href="https://www.docker.com/"><img src="docs/static/docker.png" alt="Docker" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
+<a href="https://podman.io/"><img src="docs/static/podman.png" alt="Podman" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
+<a href="https://keepassxc.org/"><img src="docs/static/keepassxc.png" alt="Podman" height="40px" style="margin-bottom:20px; margin-right:20px;"></a>
 
 
 ## Getting started

+ 1 - 1
borgmatic/actions/change_passphrase.py

@@ -16,7 +16,7 @@ def run_change_passphrase(
     remote_path,
 ):
     '''
-    Run the "key change-passprhase" action for the given repository.
+    Run the "key change-passphrase" action for the given repository.
     '''
     if (
         change_passphrase_arguments.repository is None

+ 12 - 28
borgmatic/actions/check.py

@@ -170,7 +170,7 @@ def filter_checks_on_frequency(
 
             if calendar.day_name[datetime_now().weekday()] not in days:
                 logger.info(
-                    f"Skipping {check} check due to day of the week; check only runs on {'/'.join(days)} (use --force to check anyway)"
+                    f"Skipping {check} check due to day of the week; check only runs on {'/'.join(day.title() for day in days)} (use --force to check anyway)"
                 )
                 filtered_checks.remove(check)
                 continue
@@ -372,7 +372,7 @@ def collect_spot_check_source_paths(
         borgmatic.borg.create.make_base_create_command(
             dry_run=True,
             repository_path=repository['path'],
-            config=config,
+            config=dict(config, list_details=True),
             patterns=borgmatic.actions.create.process_patterns(
                 borgmatic.actions.create.collect_patterns(config),
                 working_directory,
@@ -382,7 +382,6 @@ def collect_spot_check_source_paths(
             borgmatic_runtime_directory=borgmatic_runtime_directory,
             local_path=local_path,
             remote_path=remote_path,
-            list_files=True,
             stream_processes=stream_processes,
         )
     )
@@ -391,7 +390,7 @@ def collect_spot_check_source_paths(
     paths_output = borgmatic.execute.execute_command_and_capture_output(
         create_flags + create_positional_arguments,
         capture_stderr=True,
-        extra_environment=borgmatic.borg.environment.make_environment(config),
+        environment=borgmatic.borg.environment.make_environment(config),
         working_directory=working_directory,
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),
@@ -483,10 +482,12 @@ def compare_spot_check_hashes(
     )
     source_sample_paths = tuple(random.sample(source_paths, sample_count))
     working_directory = borgmatic.config.paths.get_working_directory(config)
-    existing_source_sample_paths = {
+    hashable_source_sample_path = {
         source_path
         for source_path in source_sample_paths
-        if os.path.exists(os.path.join(working_directory or '', source_path))
+        for full_source_path in (os.path.join(working_directory or '', source_path),)
+        if os.path.exists(full_source_path)
+        if not os.path.islink(full_source_path)
     }
     logger.debug(
         f'Sampling {sample_count} source paths (~{spot_check_config["data_sample_percentage"]}%) for spot check'
@@ -509,7 +510,7 @@ def compare_spot_check_hashes(
         hash_output = borgmatic.execute.execute_command_and_capture_output(
             (spot_check_config.get('xxh64sum_command', 'xxh64sum'),)
             + tuple(
-                path for path in source_sample_paths_subset if path in existing_source_sample_paths
+                path for path in source_sample_paths_subset if path in hashable_source_sample_path
             ),
             working_directory=working_directory,
         )
@@ -517,11 +518,13 @@ def compare_spot_check_hashes(
         source_hashes.update(
             **dict(
                 (reversed(line.split('  ', 1)) for line in hash_output.splitlines()),
-                # Represent non-existent files as having empty hashes so the comparison below still works.
+                # Represent non-existent files as having empty hashes so the comparison below still
+                # works. Same thing for filesystem links, since Borg produces empty archive hashes
+                # for them.
                 **{
                     path: ''
                     for path in source_sample_paths_subset
-                    if path not in existing_source_sample_paths
+                    if path not in hashable_source_sample_path
                 },
             )
         )
@@ -682,7 +685,6 @@ def run_check(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     check_arguments,
     global_arguments,
@@ -699,15 +701,6 @@ def run_check(
     ):
         return
 
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_check'),
-        config.get('umask'),
-        config_filename,
-        'pre-check',
-        global_arguments.dry_run,
-        **hook_context,
-    )
-
     logger.info('Running consistency checks')
 
     repository_id = borgmatic.borg.check.get_repository_id(
@@ -772,12 +765,3 @@ def run_check(
                 borgmatic_runtime_directory,
             )
         write_check_time(make_check_time_path(config, repository_id, 'spot'))
-
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_check'),
-        config.get('umask'),
-        config_filename,
-        'post-check',
-        global_arguments.dry_run,
-        **hook_context,
-    )
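The spot-check change above binds a derived value inside a set comprehension by iterating over a one-element tuple, so the joined path is computed once and shared by both filter clauses. A standalone sketch of that idiom (the helper name and demo paths are made up for illustration):

```python
import os
import tempfile

def hashable_paths(candidate_paths, working_directory=None):
    # `for full_path in (...,)` is a temp-variable binding: the tuple has
    # exactly one element, so each candidate is joined exactly once and
    # then tested by both `if` clauses.
    return {
        path
        for path in candidate_paths
        for full_path in (os.path.join(working_directory or '', path),)
        if os.path.exists(full_path)
        if not os.path.islink(full_path)
    }

# Demo: only files that exist (and aren't symlinks) survive.
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, 'file.txt'), 'w').close()
result = hashable_paths(['file.txt', 'missing.txt'], demo_dir)
```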

+ 0 - 20
borgmatic/actions/compact.py

@@ -12,7 +12,6 @@ def run_compact(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     compact_arguments,
     global_arguments,
@@ -28,14 +27,6 @@ def run_compact(
     ):
         return
 
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_compact'),
-        config.get('umask'),
-        config_filename,
-        'pre-compact',
-        global_arguments.dry_run,
-        **hook_context,
-    )
     if borgmatic.borg.feature.available(borgmatic.borg.feature.Feature.COMPACT, local_borg_version):
         logger.info(f'Compacting segments{dry_run_label}')
         borgmatic.borg.compact.compact_segments(
@@ -46,18 +37,7 @@ def run_compact(
             global_arguments,
             local_path=local_path,
             remote_path=remote_path,
-            progress=compact_arguments.progress,
             cleanup_commits=compact_arguments.cleanup_commits,
-            threshold=compact_arguments.threshold,
         )
     else:  # pragma: nocover
         logger.info('Skipping compact (only available/needed in Borg 1.2+)')
-
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_compact'),
-        config.get('umask'),
-        config_filename,
-        'post-compact',
-        global_arguments.dry_run,
-        **hook_context,
-    )

+ 3 - 2
borgmatic/actions/config/bootstrap.py

@@ -119,7 +119,9 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
         bootstrap_arguments.repository,
         archive_name,
         [config_path.lstrip(os.path.sep) for config_path in manifest_config_paths],
-        config,
+        # Only add progress here and not the extract_archive() call above, because progress
+        # conflicts with extract_to_stdout.
+        dict(config, progress=bootstrap_arguments.progress or False),
         local_borg_version,
         global_arguments,
         local_path=bootstrap_arguments.local_path,
@@ -127,5 +129,4 @@ def run_bootstrap(bootstrap_arguments, global_arguments, local_borg_version):
         extract_to_stdout=False,
         destination_path=bootstrap_arguments.destination,
         strip_components=bootstrap_arguments.strip_components,
-        progress=bootstrap_arguments.progress,
     )
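Both this bootstrap change and the check.py change earlier use `dict(config, key=value)` to hand a callee a one-off variant of the configuration. A small sketch of why that pattern is safe — `dict(mapping, **kwargs)` makes a shallow copy with selected keys replaced, leaving the original untouched (the option names here are illustrative):

```python
def with_overrides(config, **overrides):
    # Shallow copy of config with `overrides` layered on top; the caller's
    # dict is not mutated, so the override only applies to this one call.
    return dict(config, **overrides)

base = {'progress': False, 'compression': 'lz4'}
patched = with_overrides(base, progress=True)
```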

+ 29 - 25
borgmatic/actions/create.py

@@ -36,6 +36,7 @@ def parse_pattern(pattern_line, default_style=borgmatic.borg.pattern.Pattern_sty
         path,
         borgmatic.borg.pattern.Pattern_type(pattern_type),
         borgmatic.borg.pattern.Pattern_style(pattern_style),
+        source=borgmatic.borg.pattern.Pattern_source.CONFIG,
     )
 
 
@@ -51,7 +52,9 @@ def collect_patterns(config):
     try:
         return (
             tuple(
-                borgmatic.borg.pattern.Pattern(source_directory)
+                borgmatic.borg.pattern.Pattern(
+                    source_directory, source=borgmatic.borg.pattern.Pattern_source.CONFIG
+                )
                 for source_directory in config.get('source_directories', ())
             )
             + tuple(
@@ -127,8 +130,11 @@ def expand_directory(directory, working_directory):
 def expand_patterns(patterns, working_directory=None, skip_paths=None):
     '''
     Given a sequence of borgmatic.borg.pattern.Pattern instances and an optional working directory,
-    expand tildes and globs in each root pattern. Return all the resulting patterns (not just the
-    root patterns) as a tuple.
+    expand tildes and globs in each root pattern and expand just tildes in each non-root pattern.
+    The idea is that non-root patterns may be regular expressions or other pattern styles containing
+    "*" that borgmatic should not expand as a shell glob.
+
+    Return all the resulting patterns as a tuple.
 
     If a set of paths are given to skip, then don't expand any patterns matching them.
     '''
@@ -144,12 +150,21 @@ def expand_patterns(patterns, working_directory=None, skip_paths=None):
                         pattern.type,
                         pattern.style,
                         pattern.device,
+                        pattern.source,
                     )
                     for expanded_path in expand_directory(pattern.path, working_directory)
                 )
                 if pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
                 and pattern.path not in (skip_paths or ())
-                else (pattern,)
+                else (
+                    borgmatic.borg.pattern.Pattern(
+                        os.path.expanduser(pattern.path),
+                        pattern.type,
+                        pattern.style,
+                        pattern.device,
+                        pattern.source,
+                    ),
+                )
             )
             for pattern in patterns
         )
@@ -178,6 +193,7 @@ def device_map_patterns(patterns, working_directory=None):
                 and os.path.exists(full_path)
                 else None
             ),
+            source=pattern.source,
         )
         for pattern in patterns
         for full_path in (os.path.join(working_directory or '', pattern.path),)
@@ -256,7 +272,6 @@ def run_create(
     repository,
     config,
     config_paths,
-    hook_context,
     local_borg_version,
     create_arguments,
     global_arguments,
@@ -274,14 +289,15 @@ def run_create(
     ):
         return
 
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_backup'),
-        config.get('umask'),
-        config_filename,
-        'pre-backup',
-        global_arguments.dry_run,
-        **hook_context,
-    )
+    if config.get('list_details') and config.get('progress'):
+        raise ValueError(
+            'With the create action, only one of --list/--files/list_details and --progress/progress can be used.'
+        )
+
+    if config.get('list_details') and create_arguments.json:
+        raise ValueError(
+            'With the create action, only one of --list/--files/list_details and --json can be used.'
+        )
 
     logger.info(f'Creating archive{dry_run_label}')
     working_directory = borgmatic.config.paths.get_working_directory(config)
@@ -321,10 +337,7 @@ def run_create(
             borgmatic_runtime_directory,
             local_path=local_path,
             remote_path=remote_path,
-            progress=create_arguments.progress,
-            stats=create_arguments.stats,
             json=create_arguments.json,
-            list_files=create_arguments.list_files,
             stream_processes=stream_processes,
         )
 
@@ -338,12 +351,3 @@ def run_create(
             borgmatic_runtime_directory,
             global_arguments.dry_run,
         )
-
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_backup'),
-        config.get('umask'),
-        config_filename,
-        'post-backup',
-        global_arguments.dry_run,
-        **hook_context,
-    )
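The revised `expand_patterns()` docstring above draws a distinction: root patterns get both tilde and shell-glob expansion, while non-root patterns get tilde expansion only, because they may be regular expressions or other pattern styles in which `*` must reach Borg intact. A standalone sketch of that split, with hypothetical helper names:

```python
import glob
import os

def expand_root_pattern(path):
    # Root patterns: expand "~" first, then shell globs. If the glob
    # matches nothing, keep the literal (tilde-expanded) path.
    expanded = os.path.expanduser(path)
    return sorted(glob.glob(expanded)) or [expanded]

def expand_non_root_pattern(path):
    # Non-root patterns: only "~" is expanded; a "*" here may be part of
    # a regex or fnmatch pattern and must not be globbed away.
    return os.path.expanduser(path)
```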

+ 0 - 1
borgmatic/actions/export_tar.py

@@ -43,6 +43,5 @@ def run_export_tar(
             local_path=local_path,
             remote_path=remote_path,
             tar_filter=export_tar_arguments.tar_filter,
-            list_files=export_tar_arguments.list_files,
             strip_components=export_tar_arguments.strip_components,
         )

+ 0 - 18
borgmatic/actions/extract.py

@@ -12,7 +12,6 @@ def run_extract(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     extract_arguments,
     global_arguments,
@@ -22,14 +21,6 @@ def run_extract(
     '''
     Run the "extract" action for the given repository.
     '''
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_extract'),
-        config.get('umask'),
-        config_filename,
-        'pre-extract',
-        global_arguments.dry_run,
-        **hook_context,
-    )
     if extract_arguments.repository is None or borgmatic.config.validate.repositories_match(
         repository, extract_arguments.repository
     ):
@@ -54,13 +45,4 @@ def run_extract(
             remote_path=remote_path,
             destination_path=extract_arguments.destination,
             strip_components=extract_arguments.strip_components,
-            progress=extract_arguments.progress,
         )
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_extract'),
-        config.get('umask'),
-        config_filename,
-        'post-extract',
-        global_arguments.dry_run,
-        **hook_context,
-    )

+ 33 - 0
borgmatic/actions/import_key.py

@@ -0,0 +1,33 @@
+import logging
+
+import borgmatic.borg.import_key
+import borgmatic.config.validate
+
+logger = logging.getLogger(__name__)
+
+
+def run_import_key(
+    repository,
+    config,
+    local_borg_version,
+    import_arguments,
+    global_arguments,
+    local_path,
+    remote_path,
+):
+    '''
+    Run the "key import" action for the given repository.
+    '''
+    if import_arguments.repository is None or borgmatic.config.validate.repositories_match(
+        repository, import_arguments.repository
+    ):
+        logger.info('Importing repository key')
+        borgmatic.borg.import_key.import_key(
+            repository['path'],
+            config,
+            local_borg_version,
+            import_arguments,
+            global_arguments,
+            local_path=local_path,
+            remote_path=remote_path,
+        )

+ 0 - 17
borgmatic/actions/prune.py

@@ -11,7 +11,6 @@ def run_prune(
     config_filename,
     repository,
     config,
-    hook_context,
     local_borg_version,
     prune_arguments,
     global_arguments,
@@ -27,14 +26,6 @@ def run_prune(
     ):
         return
 
-    borgmatic.hooks.command.execute_hook(
-        config.get('before_prune'),
-        config.get('umask'),
-        config_filename,
-        'pre-prune',
-        global_arguments.dry_run,
-        **hook_context,
-    )
     logger.info(f'Pruning archives{dry_run_label}')
     borgmatic.borg.prune.prune_archives(
         global_arguments.dry_run,
@@ -46,11 +37,3 @@ def run_prune(
         local_path=local_path,
         remote_path=remote_path,
     )
-    borgmatic.hooks.command.execute_hook(
-        config.get('after_prune'),
-        config.get('umask'),
-        config_filename,
-        'post-prune',
-        global_arguments.dry_run,
-        **hook_context,
-    )

+ 53 - 0
borgmatic/actions/recreate.py

@@ -0,0 +1,53 @@
+import logging
+
+import borgmatic.borg.recreate
+import borgmatic.config.validate
+from borgmatic.actions.create import collect_patterns, process_patterns
+
+logger = logging.getLogger(__name__)
+
+
+def run_recreate(
+    repository,
+    config,
+    local_borg_version,
+    recreate_arguments,
+    global_arguments,
+    local_path,
+    remote_path,
+):
+    '''
+    Run the "recreate" action for the given repository.
+    '''
+    if recreate_arguments.repository is None or borgmatic.config.validate.repositories_match(
+        repository, recreate_arguments.repository
+    ):
+        if recreate_arguments.archive:
+            logger.answer(f'Recreating archive {recreate_arguments.archive}')
+        else:
+            logger.answer('Recreating repository')
+
+        # Collect and process patterns.
+        processed_patterns = process_patterns(
+            collect_patterns(config), borgmatic.config.paths.get_working_directory(config)
+        )
+
+        borgmatic.borg.recreate.recreate_archive(
+            repository['path'],
+            borgmatic.borg.repo_list.resolve_archive_name(
+                repository['path'],
+                recreate_arguments.archive,
+                config,
+                local_borg_version,
+                global_arguments,
+                local_path,
+                remote_path,
+            ),
+            config,
+            local_borg_version,
+            recreate_arguments,
+            global_arguments,
+            local_path=local_path,
+            remote_path=remote_path,
+            patterns=processed_patterns,
+        )

+ 24 - 4
borgmatic/actions/repo_create.py

@@ -24,18 +24,38 @@ def run_repo_create(
         return
 
     logger.info('Creating repository')
+
+    encryption_mode = repo_create_arguments.encryption_mode or repository.get('encryption')
+
+    if not encryption_mode:
+        raise ValueError(
+            'With the repo-create action, either the --encryption flag or the repository encryption option is required.'
+        )
+
     borgmatic.borg.repo_create.create_repository(
         global_arguments.dry_run,
         repository['path'],
         config,
         local_borg_version,
         global_arguments,
-        repo_create_arguments.encryption_mode,
+        encryption_mode,
         repo_create_arguments.source_repository,
         repo_create_arguments.copy_crypt_key,
-        repo_create_arguments.append_only,
-        repo_create_arguments.storage_quota,
-        repo_create_arguments.make_parent_dirs,
+        (
+            repository.get('append_only')
+            if repo_create_arguments.append_only is None
+            else repo_create_arguments.append_only
+        ),
+        (
+            repository.get('storage_quota')
+            if repo_create_arguments.storage_quota is None
+            else repo_create_arguments.storage_quota
+        ),
+        (
+            repository.get('make_parent_directories')
+            if repo_create_arguments.make_parent_directories is None
+            else repo_create_arguments.make_parent_directories
+        ),
         local_path=local_path,
         remote_path=remote_path,
     )
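The three parenthesized expressions above all apply the same fallback rule: an explicit command-line value wins, and only `None` falls through to the repository option. A hypothetical helper (not part of borgmatic) makes the rule explicit and shows why a plain `or` would be wrong here:

```python
def coalesce_option(argument_value, configured_value):
    '''
    Return the command-line argument value unless it's None, in which case
    fall back to the configured value. False is a real value here, so "or"
    would be the wrong tool: "False or configured_value" would incorrectly
    discard an explicitly disabled flag.
    '''
    return configured_value if argument_value is None else argument_value


# An explicit False from the command line must win over a configured True.
assert coalesce_option(False, True) is False
# Only None falls through to the configuration.
assert coalesce_option(None, True) is True
```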

+ 6 - 0
borgmatic/actions/transfer.py

@@ -17,7 +17,13 @@ def run_transfer(
     '''
     Run the "transfer" action for the given repository.
     '''
+    if transfer_arguments.archive and config.get('match_archives'):
+        raise ValueError(
+            'With the transfer action, only one of --archive and --match-archives/match_archives can be used.'
+        )
+
     logger.info('Transferring archives to repository')
+
     borgmatic.borg.transfer.transfer_archives(
         global_arguments.dry_run,
         repository['path'],

+ 1 - 1
borgmatic/borg/borg.py

@@ -61,7 +61,7 @@ def run_arbitrary_borg(
         tuple(shlex.quote(part) for part in full_command),
         output_file=DO_NOT_CAPTURE,
         shell=True,
-        extra_environment=dict(
+        environment=dict(
             (environment.make_environment(config) or {}),
             **{
                 'BORG_REPO': repository_path,

+ 1 - 1
borgmatic/borg/break_lock.py

@@ -36,7 +36,7 @@ def break_lock(
 
     execute_command(
         full_command,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 1 - 1
borgmatic/borg/change_passphrase.py

@@ -56,7 +56,7 @@ def change_passphrase(
         full_command,
         output_file=borgmatic.execute.DO_NOT_CAPTURE,
         output_log_level=logging.ANSWER,
-        extra_environment=environment.make_environment(config_without_passphrase),
+        environment=environment.make_environment(config_without_passphrase),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 4 - 4
borgmatic/borg/check.py

@@ -32,7 +32,7 @@ def make_archive_filter_flags(local_borg_version, config, checks, check_argument
             if prefix
             else (
                 flags.make_match_archives_flags(
-                    check_arguments.match_archives or config.get('match_archives'),
+                    config.get('match_archives'),
                     config.get('archive_name_format'),
                     local_borg_version,
                 )
@@ -170,7 +170,7 @@ def check_archives(
             + (('--log-json',) if global_arguments.log_json else ())
             + (('--lock-wait', str(lock_wait)) if lock_wait else ())
             + verbosity_flags
-            + (('--progress',) if check_arguments.progress else ())
+            + (('--progress',) if config.get('progress') else ())
             + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
             + flags.make_repository_flags(repository_path, local_borg_version)
         )
@@ -180,9 +180,9 @@ def check_archives(
             # The Borg repair option triggers an interactive prompt, which won't work when output is
             # captured. And progress messes with the terminal directly.
             output_file=(
-                DO_NOT_CAPTURE if check_arguments.repair or check_arguments.progress else None
+                DO_NOT_CAPTURE if check_arguments.repair or config.get('progress') else None
             ),
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             working_directory=working_directory,
             borg_local_path=local_path,
             borg_exit_codes=borg_exit_codes,

+ 3 - 4
borgmatic/borg/compact.py

@@ -15,9 +15,7 @@ def compact_segments(
     global_arguments,
     local_path='borg',
     remote_path=None,
-    progress=False,
     cleanup_commits=False,
-    threshold=None,
 ):
     '''
     Given dry-run flag, a local or remote repository path, a configuration dict, and the local Borg
@@ -26,6 +24,7 @@ def compact_segments(
     umask = config.get('umask', None)
     lock_wait = config.get('lock_wait', None)
     extra_borg_options = config.get('extra_borg_options', {}).get('compact', '')
+    threshold = config.get('compact_threshold')
 
     full_command = (
         (local_path, 'compact')
@@ -33,7 +32,7 @@ def compact_segments(
         + (('--umask', str(umask)) if umask else ())
         + (('--log-json',) if global_arguments.log_json else ())
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
-        + (('--progress',) if progress else ())
+        + (('--progress',) if config.get('progress') else ())
         + (('--cleanup-commits',) if cleanup_commits else ())
         + (('--threshold', str(threshold)) if threshold else ())
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
@@ -49,7 +48,7 @@ def compact_segments(
     execute_command(
         full_command,
         output_log_level=logging.INFO,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 32 - 26
borgmatic/borg/create.py

@@ -132,41 +132,53 @@ def collect_special_file_paths(
     used.
 
     Skip looking for special files in the given borgmatic runtime directory, as borgmatic creates
-    its own special files there for database dumps. And if the borgmatic runtime directory is
-    configured to be excluded from the files Borg backs up, error, because this means Borg won't be
-    able to consume any database dumps and therefore borgmatic will hang.
+    its own special files there for database dumps and we don't want those omitted.
+
+    Additionally, if the borgmatic runtime directory is not contained somewhere in the files Borg
+    plans to backup, that means the user must have excluded the runtime directory (e.g. via
+    "exclude_patterns" or similar). Therefore, raise, because this means Borg won't be able to
+    consume any database dumps and therefore borgmatic will hang when it tries to do so.
     '''
     # Omit "--exclude-nodump" from the Borg dry run command, because that flag causes Borg to open
-    # files including any named pipe we've created.
+    # files including any named pipe we've created. And omit "--filter" because that can break the
+    # paths output parsing below such that path lines no longer start with the expected "- ".
     paths_output = execute_command_and_capture_output(
-        tuple(argument for argument in create_command if argument != '--exclude-nodump')
+        flags.omit_flag_and_value(flags.omit_flag(create_command, '--exclude-nodump'), '--filter')
         + ('--dry-run', '--list'),
         capture_stderr=True,
         working_directory=working_directory,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),
     )
 
+    # These are all the individual files that Borg is planning to back up, as determined by the Borg
+    # create dry run above.
     paths = tuple(
         path_line.split(' ', 1)[1]
         for path_line in paths_output.split('\n')
         if path_line and path_line.startswith('- ') or path_line.startswith('+ ')
     )
-    skip_paths = {}
+
+    # These are the subset of those files that contain the borgmatic runtime directory.
+    paths_containing_runtime_directory = {}
 
     if os.path.exists(borgmatic_runtime_directory):
-        skip_paths = {
+        paths_containing_runtime_directory = {
             path for path in paths if any_parent_directories(path, (borgmatic_runtime_directory,))
         }
 
-        if not skip_paths and not dry_run:
+        # If no paths to back up contain the runtime directory, it must've been excluded.
+        if not paths_containing_runtime_directory and not dry_run:
             raise ValueError(
                 f'The runtime directory {os.path.normpath(borgmatic_runtime_directory)} overlaps with the configured excludes or patterns with excludes. Please ensure the runtime directory is not excluded.'
             )
 
     return tuple(
-        path for path in paths if special_file(path, working_directory) if path not in skip_paths
+        path
+        for path in paths
+        if special_file(path, working_directory)
+        if path not in paths_containing_runtime_directory
     )
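The two steps above — parsing the dry-run listing and finding which planned paths fall under the runtime directory — can be sketched standalone. This is a simplified illustration, using `os.path.commonpath` in place of borgmatic's own `any_parent_directories` helper:

```python
import os


def parse_planned_paths(paths_output):
    # Each path line from "borg create --dry-run --list" starts with a status
    # marker such as "- " or "+ "; the path is everything after the first space.
    return tuple(
        line.split(' ', 1)[1]
        for line in paths_output.split('\n')
        if line.startswith(('- ', '+ '))
    )


def paths_under(paths, directory):
    # Subset of the given paths located at or under the given directory.
    directory = os.path.abspath(directory)
    return {
        path
        for path in paths
        if os.path.commonpath((os.path.abspath(path), directory)) == directory
    }


listing = '- /etc/passwd\n+ /run/borgmatic/postgresql_databases/all\nx /skipped\n'
paths = parse_planned_paths(listing)
assert paths == ('/etc/passwd', '/run/borgmatic/postgresql_databases/all')
assert paths_under(paths, '/run/borgmatic') == {'/run/borgmatic/postgresql_databases/all'}
```

If `paths_under` comes back empty while the runtime directory exists, the user's excludes must have swallowed it — which is exactly the condition the code above turns into a `ValueError`.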
 
 
@@ -184,7 +196,7 @@ def check_all_root_patterns_exist(patterns):
 
     if missing_paths:
         raise ValueError(
-            f"Source directories / root pattern paths do not exist: {', '.join(missing_paths)}"
+            f"Source directories or root pattern paths do not exist: {', '.join(missing_paths)}"
         )
 
 
@@ -201,9 +213,7 @@ def make_base_create_command(
     borgmatic_runtime_directory,
     local_path='borg',
     remote_path=None,
-    progress=False,
     json=False,
-    list_files=False,
     stream_processes=None,
 ):
     '''
@@ -281,7 +291,7 @@ def make_base_create_command(
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
         + (
             ('--list', '--filter', list_filter_flags)
-            if list_files and not json and not progress
+            if config.get('list_details') and not json and not config.get('progress')
             else ()
         )
         + (('--dry-run',) if dry_run else ())
@@ -325,6 +335,7 @@ def make_base_create_command(
                         special_file_path,
                         borgmatic.borg.pattern.Pattern_type.NO_RECURSE,
                         borgmatic.borg.pattern.Pattern_style.FNMATCH,
+                        source=borgmatic.borg.pattern.Pattern_source.INTERNAL,
                     )
                     for special_file_path in special_file_paths
                 ),
@@ -348,10 +359,7 @@ def create_archive(
     borgmatic_runtime_directory,
     local_path='borg',
     remote_path=None,
-    progress=False,
-    stats=False,
     json=False,
-    list_files=False,
     stream_processes=None,
 ):
     '''
@@ -376,28 +384,26 @@ def create_archive(
         borgmatic_runtime_directory,
         local_path,
         remote_path,
-        progress,
         json,
-        list_files,
         stream_processes,
     )
 
     if json:
         output_log_level = None
-    elif list_files or (stats and not dry_run):
+    elif config.get('list_details') or (config.get('statistics') and not dry_run):
         output_log_level = logging.ANSWER
     else:
         output_log_level = logging.INFO
 
     # The progress output isn't compatible with captured and logged output, as progress messes with
     # the terminal directly.
-    output_file = DO_NOT_CAPTURE if progress else None
+    output_file = DO_NOT_CAPTURE if config.get('progress') else None
 
     create_flags += (
         (('--info',) if logger.getEffectiveLevel() == logging.INFO and not json else ())
-        + (('--stats',) if stats and not json and not dry_run else ())
+        + (('--stats',) if config.get('statistics') and not json and not dry_run else ())
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) and not json else ())
-        + (('--progress',) if progress else ())
+        + (('--progress',) if config.get('progress') else ())
         + (('--json',) if json else ())
     )
     borg_exit_codes = config.get('borg_exit_codes')
@@ -409,7 +415,7 @@ def create_archive(
             output_log_level,
             output_file,
             working_directory=working_directory,
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             borg_local_path=local_path,
             borg_exit_codes=borg_exit_codes,
         )
@@ -417,7 +423,7 @@ def create_archive(
         return execute_command_and_capture_output(
             create_flags + create_positional_arguments,
             working_directory=working_directory,
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             borg_local_path=local_path,
             borg_exit_codes=borg_exit_codes,
         )
@@ -427,7 +433,7 @@ def create_archive(
             output_log_level,
             output_file,
             working_directory=working_directory,
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             borg_local_path=local_path,
             borg_exit_codes=borg_exit_codes,
         )

+ 12 - 4
borgmatic/borg/delete.py

@@ -34,7 +34,7 @@ def make_delete_command(
         + borgmatic.borg.flags.make_flags('umask', config.get('umask'))
         + borgmatic.borg.flags.make_flags('log-json', global_arguments.log_json)
         + borgmatic.borg.flags.make_flags('lock-wait', config.get('lock_wait'))
-        + borgmatic.borg.flags.make_flags('list', delete_arguments.list_archives)
+        + borgmatic.borg.flags.make_flags('list', config.get('list_details'))
         + (
             (('--force',) + (('--force',) if delete_arguments.force >= 2 else ()))
             if delete_arguments.force
@@ -48,9 +48,17 @@ def make_delete_command(
             local_borg_version=local_borg_version,
             default_archive_name_format='*',
         )
+        + (('--stats',) if config.get('statistics') else ())
         + borgmatic.borg.flags.make_flags_from_arguments(
             delete_arguments,
-            excludes=('list_archives', 'force', 'match_archives', 'archive', 'repository'),
+            excludes=(
+                'list_details',
+                'statistics',
+                'force',
+                'match_archives',
+                'archive',
+                'repository',
+            ),
         )
         + borgmatic.borg.flags.make_repository_flags(repository['path'], local_borg_version)
     )
@@ -98,7 +106,7 @@ def delete_archives(
 
         repo_delete_arguments = argparse.Namespace(
             repository=repository['path'],
-            list_archives=delete_arguments.list_archives,
+            list_details=delete_arguments.list_details,
             force=delete_arguments.force,
             cache_only=delete_arguments.cache_only,
             keep_security_info=delete_arguments.keep_security_info,
@@ -128,7 +136,7 @@ def delete_archives(
     borgmatic.execute.execute_command(
         command,
         output_log_level=logging.ANSWER,
-        extra_environment=borgmatic.borg.environment.make_environment(config),
+        environment=borgmatic.borg.environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 35 - 9
borgmatic/borg/environment.py

@@ -1,6 +1,7 @@
 import os
 
 import borgmatic.borg.passcommand
+import borgmatic.hooks.credential.parse
 
 OPTION_TO_ENVIRONMENT_VARIABLE = {
     'borg_base_directory': 'BORG_BASE_DIR',
@@ -9,7 +10,6 @@ OPTION_TO_ENVIRONMENT_VARIABLE = {
     'borg_files_cache_ttl': 'BORG_FILES_CACHE_TTL',
     'borg_security_directory': 'BORG_SECURITY_DIR',
     'borg_keys_directory': 'BORG_KEYS_DIR',
-    'encryption_passphrase': 'BORG_PASSPHRASE',
     'ssh_command': 'BORG_RSH',
     'temporary_directory': 'TMPDIR',
 }
@@ -26,29 +26,55 @@ DEFAULT_BOOL_OPTION_TO_UPPERCASE_ENVIRONMENT_VARIABLE = {
 
 def make_environment(config):
     '''
-    Given a borgmatic configuration dict, return its options converted to a Borg environment
-    variable dict.
+    Given a borgmatic configuration dict, convert it to a Borg environment variable dict, merge it
+    with a copy of the current environment variables, and return the result.
 
     Do not reuse this environment across multiple Borg invocations, because it can include
     references to resources like anonymous pipes for passphrases—which can only be consumed once.
+
+    Here's how native Borg precedence works for a few of the environment variables:
+
+      1. BORG_PASSPHRASE, if set, is used first.
+      2. BORG_PASSCOMMAND is used only if BORG_PASSPHRASE isn't set.
+      3. BORG_PASSPHRASE_FD is used only if neither of the above are set.
+
+    In borgmatic, we want to simulate this precedence order, but there are some additional
+    complications. First, values can come from either configuration or from environment variables
+    set outside borgmatic; configured options should take precedence. Second, when borgmatic gets a
+    passphrase—directly from configuration or indirectly via a credential hook or a passcommand—we
+    want to pass that passphrase to Borg via an anonymous pipe (+ BORG_PASSPHRASE_FD), since that's
+    more secure than using an environment variable (BORG_PASSPHRASE).
     '''
-    environment = {}
+    environment = dict(os.environ)
 
     for option_name, environment_variable_name in OPTION_TO_ENVIRONMENT_VARIABLE.items():
         value = config.get(option_name)
 
-        if value:
+        if value is not None:
             environment[environment_variable_name] = str(value)
 
-    passphrase = borgmatic.borg.passcommand.get_passphrase_from_passcommand(config)
+    if 'encryption_passphrase' in config:
+        environment.pop('BORG_PASSPHRASE', None)
+        environment.pop('BORG_PASSCOMMAND', None)
+
+    if 'encryption_passcommand' in config:
+        environment.pop('BORG_PASSCOMMAND', None)
+
+    passphrase = borgmatic.hooks.credential.parse.resolve_credential(
+        config.get('encryption_passphrase'), config
+    )
+
+    if passphrase is None:
+        passphrase = borgmatic.borg.passcommand.get_passphrase_from_passcommand(config)
 
-    # If the passcommand produced a passphrase, send it to Borg via an anonymous pipe.
-    if passphrase:
+    # If there's a passphrase (from configuration, from a configured credential, or from a
+    # configured passcommand), send it to Borg via an anonymous pipe.
+    if passphrase is not None:
         read_file_descriptor, write_file_descriptor = os.pipe()
         os.write(write_file_descriptor, passphrase.encode('utf-8'))
         os.close(write_file_descriptor)
 
-        # This, plus subprocess.Popen(..., close_fds=False) in execute.py, is necessary for the Borg
+        # This plus subprocess.Popen(..., close_fds=False) in execute.py is necessary for the Borg
         # child process to inherit the file descriptor.
         os.set_inheritable(read_file_descriptor, True)
         environment['BORG_PASSPHRASE_FD'] = str(read_file_descriptor)

+ 1 - 1
borgmatic/borg/export_key.py

@@ -67,7 +67,7 @@ def export_key(
         full_command,
         output_file=output_file,
         output_log_level=logging.ANSWER,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=working_directory,
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 3 - 4
borgmatic/borg/export_tar.py

@@ -20,7 +20,6 @@ def export_tar_archive(
     local_path='borg',
     remote_path=None,
     tar_filter=None,
-    list_files=False,
     strip_components=None,
 ):
     '''
@@ -43,7 +42,7 @@ def export_tar_archive(
         + (('--log-json',) if global_arguments.log_json else ())
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
-        + (('--list',) if list_files else ())
+        + (('--list',) if config.get('list_details') else ())
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
         + (('--dry-run',) if dry_run else ())
         + (('--tar-filter', tar_filter) if tar_filter else ())
@@ -57,7 +56,7 @@ def export_tar_archive(
         + (tuple(paths) if paths else ())
     )
 
-    if list_files:
+    if config.get('list_details'):
         output_log_level = logging.ANSWER
     else:
         output_log_level = logging.INFO
@@ -70,7 +69,7 @@ def export_tar_archive(
         full_command,
         output_file=DO_NOT_CAPTURE if destination_path == '-' else None,
         output_log_level=output_log_level,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 9 - 12
borgmatic/borg/extract.py

@@ -58,7 +58,7 @@ def extract_last_archive_dry_run(
 
     execute_command(
         full_extract_command,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),
@@ -77,7 +77,6 @@ def extract_archive(
     remote_path=None,
     destination_path=None,
     strip_components=None,
-    progress=False,
     extract_to_stdout=False,
 ):
     '''
@@ -92,8 +91,8 @@ def extract_archive(
     umask = config.get('umask', None)
     lock_wait = config.get('lock_wait', None)
 
-    if progress and extract_to_stdout:
-        raise ValueError('progress and extract_to_stdout cannot both be set')
+    if config.get('progress') and extract_to_stdout:
+        raise ValueError('progress and extract to stdout cannot both be set')
 
     if feature.available(feature.Feature.NUMERIC_IDS, local_borg_version):
         numeric_ids_flags = ('--numeric-ids',) if config.get('numeric_ids') else ()
@@ -128,15 +127,13 @@ def extract_archive(
         + (('--debug', '--list', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
         + (('--dry-run',) if dry_run else ())
         + (('--strip-components', str(strip_components)) if strip_components else ())
-        + (('--progress',) if progress else ())
+        + (('--progress',) if config.get('progress') else ())
         + (('--stdout',) if extract_to_stdout else ())
         + flags.make_repository_archive_flags(
             # Make the repository path absolute so the destination directory used below via changing
             # the working directory doesn't prevent Borg from finding the repo. But also apply the
             # user's configured working directory (if any) to the repo path.
-            borgmatic.config.validate.normalize_repository_path(
-                os.path.join(working_directory or '', repository)
-            ),
+            borgmatic.config.validate.normalize_repository_path(repository, working_directory),
             archive,
             local_borg_version,
         )
@@ -150,11 +147,11 @@ def extract_archive(
 
     # The progress output isn't compatible with captured and logged output, as progress messes with
     # the terminal directly.
-    if progress:
+    if config.get('progress'):
         return execute_command(
             full_command,
             output_file=DO_NOT_CAPTURE,
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             working_directory=full_destination_path,
             borg_local_path=local_path,
             borg_exit_codes=borg_exit_codes,
@@ -166,7 +163,7 @@ def extract_archive(
             full_command,
             output_file=subprocess.PIPE,
             run_to_completion=False,
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             working_directory=full_destination_path,
             borg_local_path=local_path,
             borg_exit_codes=borg_exit_codes,
@@ -176,7 +173,7 @@ def extract_archive(
     # if the restore paths don't exist in the archive.
     execute_command(
         full_command,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=full_destination_path,
         borg_local_path=local_path,
         borg_exit_codes=borg_exit_codes,

+ 2 - 0
borgmatic/borg/feature.py

@@ -17,6 +17,7 @@ class Feature(Enum):
     MATCH_ARCHIVES = 11
     EXCLUDED_FILES_MINUS = 12
     ARCHIVE_SERIES = 13
+    NO_PRUNE_STATS = 14
 
 
 FEATURE_TO_MINIMUM_BORG_VERSION = {
@@ -33,6 +34,7 @@ FEATURE_TO_MINIMUM_BORG_VERSION = {
     Feature.MATCH_ARCHIVES: parse('2.0.0b3'),  # borg --match-archives
     Feature.EXCLUDED_FILES_MINUS: parse('2.0.0b5'),  # --list --filter uses "-" for excludes
     Feature.ARCHIVE_SERIES: parse('2.0.0b11'),  # identically named archives form a series
+    Feature.NO_PRUNE_STATS: parse('2.0.0b10'),  # prune --stats is not available
 }
 
 

+ 41 - 0
borgmatic/borg/flags.py

@@ -156,3 +156,44 @@ def warn_for_aggressive_archive_flags(json_command, json_output):
         logger.debug(f'Cannot parse JSON output from archive command: {error}')
     except (TypeError, KeyError):
         logger.debug('Cannot parse JSON output from archive command: No "archives" key found')
+
+
+def omit_flag(arguments, flag):
+    '''
+    Given a sequence of Borg command-line arguments, return them with the given (valueless) flag
+    omitted. For instance, if the flag is "--flag" and arguments is:
+
+        ('borg', 'create', '--flag', '--other-flag')
+
+    ... then return:
+
+        ('borg', 'create', '--other-flag')
+    '''
+    return tuple(argument for argument in arguments if argument != flag)
+
+
+def omit_flag_and_value(arguments, flag):
+    '''
+    Given a sequence of Borg command-line arguments, return them with the given flag and its
+    corresponding value omitted. For instance, if the flag is "--flag" and arguments is:
+
+        ('borg', 'create', '--flag', 'value', '--other-flag')
+
+    ... or:
+
+        ('borg', 'create', '--flag=value', '--other-flag')
+
+    ... then return:
+
+        ('borg', 'create', '--other-flag')
+    '''
+    # This works by zipping together a list of overlapping pairwise arguments. E.g., ('one', 'two',
+    # 'three', 'four') becomes ((None, 'one'), ('one', 'two'), ('two', 'three'), ('three', 'four')).
+    # This makes it easy to "look back" at the previous arguments so we can exclude both a flag and
+    # its value.
+    return tuple(
+        argument
+        for (previous_argument, argument) in zip((None,) + arguments, arguments)
+        if flag not in (previous_argument, argument)
+        if not argument.startswith(f'{flag}=')
+    )

+ 70 - 0
borgmatic/borg/import_key.py

@@ -0,0 +1,70 @@
+import logging
+import os
+
+import borgmatic.config.paths
+import borgmatic.logger
+from borgmatic.borg import environment, flags
+from borgmatic.execute import DO_NOT_CAPTURE, execute_command
+
+logger = logging.getLogger(__name__)
+
+
+def import_key(
+    repository_path,
+    config,
+    local_borg_version,
+    import_arguments,
+    global_arguments,
+    local_path='borg',
+    remote_path=None,
+):
+    '''
+    Given a local or remote repository path, a configuration dict, the local Borg version, import
+    arguments, and optional local and remote Borg paths, import the repository key from the
+    path indicated in the import arguments.
+
+    If the path is empty or "-", then read the key from stdin.
+
+    Raise ValueError if the path is given and it does not exist.
+    '''
+    umask = config.get('umask', None)
+    lock_wait = config.get('lock_wait', None)
+    working_directory = borgmatic.config.paths.get_working_directory(config)
+
+    if import_arguments.path and import_arguments.path != '-':
+        if not os.path.exists(os.path.join(working_directory or '', import_arguments.path)):
+            raise ValueError(f'Path {import_arguments.path} does not exist. Aborting.')
+
+        input_file = None
+    else:
+        input_file = DO_NOT_CAPTURE
+
+    full_command = (
+        (local_path, 'key', 'import')
+        + (('--remote-path', remote_path) if remote_path else ())
+        + (('--umask', str(umask)) if umask else ())
+        + (('--log-json',) if global_arguments.log_json else ())
+        + (('--lock-wait', str(lock_wait)) if lock_wait else ())
+        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+        + flags.make_flags('paper', import_arguments.paper)
+        + flags.make_repository_flags(
+            repository_path,
+            local_borg_version,
+        )
+        + ((import_arguments.path,) if input_file is None else ())
+    )
+
+    if global_arguments.dry_run:
+        logger.info('Skipping key import (dry run)')
+        return
+
+    execute_command(
+        full_command,
+        input_file=input_file,
+        output_log_level=logging.INFO,
+        environment=environment.make_environment(config),
+        working_directory=working_directory,
+        borg_local_path=local_path,
+        borg_exit_codes=config.get('borg_exit_codes'),
+    )

+ 3 - 5
borgmatic/borg/info.py

@@ -48,9 +48,7 @@ def make_info_command(
             if info_arguments.prefix
             else (
                 flags.make_match_archives_flags(
-                    info_arguments.match_archives
-                    or info_arguments.archive
-                    or config.get('match_archives'),
+                    info_arguments.archive or config.get('match_archives'),
                     config.get('archive_name_format'),
                     local_borg_version,
                 )
@@ -102,7 +100,7 @@ def display_archives_info(
 
     json_info = execute_command_and_capture_output(
         json_command,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=working_directory,
         borg_local_path=local_path,
         borg_exit_codes=borg_exit_codes,
@@ -116,7 +114,7 @@ def display_archives_info(
     execute_command(
         main_command,
         output_log_level=logging.ANSWER,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=working_directory,
         borg_local_path=local_path,
         borg_exit_codes=borg_exit_codes,

+ 3 - 3
borgmatic/borg/list.py

@@ -124,7 +124,7 @@ def capture_archive_listing(
                 local_path,
                 remote_path,
             ),
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             working_directory=borgmatic.config.paths.get_working_directory(config),
             borg_local_path=local_path,
             borg_exit_codes=config.get('borg_exit_codes'),
@@ -221,7 +221,7 @@ def list_archive(
                     local_path,
                     remote_path,
                 ),
-                extra_environment=environment.make_environment(config),
+                environment=environment.make_environment(config),
                 working_directory=borgmatic.config.paths.get_working_directory(config),
                 borg_local_path=local_path,
                 borg_exit_codes=borg_exit_codes,
@@ -257,7 +257,7 @@ def list_archive(
         execute_command(
             main_command,
             output_log_level=logging.ANSWER,
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             working_directory=borgmatic.config.paths.get_working_directory(config),
             borg_local_path=local_path,
             borg_exit_codes=borg_exit_codes,

+ 2 - 2
borgmatic/borg/mount.py

@@ -66,7 +66,7 @@ def mount_archive(
         execute_command(
             full_command,
             output_file=DO_NOT_CAPTURE,
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             working_directory=working_directory,
             borg_local_path=local_path,
             borg_exit_codes=config.get('borg_exit_codes'),
@@ -75,7 +75,7 @@ def mount_archive(
 
     execute_command(
         full_command,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=working_directory,
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 3 - 13
borgmatic/borg/passcommand.py

@@ -9,21 +9,14 @@ logger = logging.getLogger(__name__)
 
 
 @functools.cache
-def run_passcommand(passcommand, passphrase_configured, working_directory):
+def run_passcommand(passcommand, working_directory):
     '''
     Run the given passcommand using the given working directory and return the passphrase produced
-    by the command. But bail first if a passphrase is already configured; this mimics Borg's
-    behavior.
+    by the command.
 
     Cache the results so that the passcommand only needs to run—and potentially prompt the user—once
     per borgmatic invocation.
     '''
-    if passcommand and passphrase_configured:
-        logger.warning(
-            'Ignoring the "encryption_passcommand" option because "encryption_passphrase" is set'
-        )
-        return None
-
     return borgmatic.execute.execute_command_and_capture_output(
         shlex.split(passcommand),
         working_directory=working_directory,
@@ -44,7 +37,4 @@ def get_passphrase_from_passcommand(config):
     if not passcommand:
         return None
 
-    passphrase = config.get('encryption_passphrase')
-    working_directory = borgmatic.config.paths.get_working_directory(config)
-
-    return run_passcommand(passcommand, bool(passphrase is not None), working_directory)
+    return run_passcommand(passcommand, borgmatic.config.paths.get_working_directory(config))
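Dropping the `passphrase_configured` boolean also simplifies the cache key. A sketch of the caching behavior, with `subprocess` standing in for borgmatic's execute module:

```python
import functools
import shlex
import subprocess

call_count = 0

# Because both arguments are hashable strings, functools.cache guarantees the
# (potentially interactive) passcommand runs at most once per distinct
# (passcommand, working_directory) pair within a single process.
@functools.cache
def run_passcommand(passcommand, working_directory=None):
    global call_count
    call_count += 1
    return subprocess.check_output(
        shlex.split(passcommand), cwd=working_directory, text=True
    ).strip()

first = run_passcommand('echo hunter2')
second = run_passcommand('echo hunter2')  # cache hit; the command doesn't rerun
```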

+ 20 - 1
borgmatic/borg/pattern.py

@@ -20,12 +20,31 @@ class Pattern_style(enum.Enum):
     PATH_FULL_MATCH = 'pf'
 
 
+class Pattern_source(enum.Enum):
+    '''
+    Where the pattern came from within borgmatic. This is important because certain use cases (like
+    filesystem snapshotting) only want to consider patterns that the user actually put in a
+    configuration file and not patterns from other sources.
+    '''
+
+    # The pattern is from a borgmatic configuration option, e.g. listed in "source_directories".
+    CONFIG = 'config'
+
+    # The pattern is generated internally within borgmatic, e.g. for special file excludes.
+    INTERNAL = 'internal'
+
+    # The pattern originates from within a borgmatic hook, e.g. a database hook that adds its dump
+    # directory.
+    HOOK = 'hook'
+
+
 Pattern = collections.namedtuple(
     'Pattern',
-    ('path', 'type', 'style', 'device'),
+    ('path', 'type', 'style', 'device', 'source'),
     defaults=(
         Pattern_type.ROOT,
         Pattern_style.NONE,
         None,
+        Pattern_source.HOOK,
     ),
 )
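Since `namedtuple` defaults apply to the rightmost fields, appending `source` with a default of `HOOK` keeps existing call sites that only pass a path working unchanged. A trimmed sketch (with `Pattern_style` simplified away):

```python
import collections
import enum

class Pattern_type(enum.Enum):
    ROOT = 'R'

class Pattern_source(enum.Enum):
    CONFIG = 'config'
    INTERNAL = 'internal'
    HOOK = 'hook'

# namedtuple defaults cover the rightmost fields, so only "path" is required;
# the new trailing "source" field defaults to HOOK for callers that omit it.
Pattern = collections.namedtuple(
    'Pattern',
    ('path', 'type', 'style', 'device', 'source'),
    defaults=(Pattern_type.ROOT, None, None, Pattern_source.HOOK),
)

pattern = Pattern('/home/user')
```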

+ 12 - 6
borgmatic/borg/prune.py

@@ -41,7 +41,7 @@ def make_prune_flags(config, prune_arguments, local_borg_version):
         if prefix
         else (
             flags.make_match_archives_flags(
-                prune_arguments.match_archives or config.get('match_archives'),
+                config.get('match_archives'),
                 config.get('archive_name_format'),
                 local_borg_version,
             )
@@ -75,20 +75,26 @@ def prune_archives(
         + (('--umask', str(umask)) if umask else ())
         + (('--log-json',) if global_arguments.log_json else ())
         + (('--lock-wait', str(lock_wait)) if lock_wait else ())
-        + (('--stats',) if prune_arguments.stats and not dry_run else ())
+        + (
+            ('--stats',)
+            if config.get('statistics')
+            and not dry_run
+            and not feature.available(feature.Feature.NO_PRUNE_STATS, local_borg_version)
+            else ()
+        )
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
         + flags.make_flags_from_arguments(
             prune_arguments,
-            excludes=('repository', 'match_archives', 'stats', 'list_archives'),
+            excludes=('repository', 'match_archives', 'statistics', 'list_details'),
         )
-        + (('--list',) if prune_arguments.list_archives else ())
+        + (('--list',) if config.get('list_details') else ())
         + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
         + (('--dry-run',) if dry_run else ())
         + (tuple(extra_borg_options.split(' ')) if extra_borg_options else ())
         + flags.make_repository_flags(repository_path, local_borg_version)
     )
 
-    if prune_arguments.stats or prune_arguments.list_archives:
+    if config.get('statistics') or config.get('list_details'):
         output_log_level = logging.ANSWER
     else:
         output_log_level = logging.INFO
@@ -96,7 +102,7 @@ def prune_archives(
     execute_command(
         full_command,
         output_log_level=output_log_level,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),
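The new `NO_PRUNE_STATS` feature check gates `--stats` on the Borg version, since Borg 2 removed prune statistics. A simplified, hypothetical stand-in for that gate (borgmatic's real feature module compares parsed versions rather than this crude tuple check):

```python
# Hypothetical simplification of the feature gate: Borg 2 dropped
# "borg prune --stats", so the flag is only emitted for Borg 1.
def parse_major_minor(version_string):
    return tuple(int(part) for part in version_string.split('.')[:2])

def make_stats_flag(config, dry_run, local_borg_version):
    stats_unavailable = parse_major_minor(local_borg_version) >= (2, 0)

    if config.get('statistics') and not dry_run and not stats_unavailable:
        return ('--stats',)

    return ()
```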

+ 103 - 0
borgmatic/borg/recreate.py

@@ -0,0 +1,103 @@
+import logging
+import shlex
+
+import borgmatic.borg.environment
+import borgmatic.borg.feature
+import borgmatic.config.paths
+import borgmatic.execute
+from borgmatic.borg import flags
+from borgmatic.borg.create import make_exclude_flags, make_list_filter_flags, write_patterns_file
+
+logger = logging.getLogger(__name__)
+
+
+def recreate_archive(
+    repository,
+    archive,
+    config,
+    local_borg_version,
+    recreate_arguments,
+    global_arguments,
+    local_path,
+    remote_path=None,
+    patterns=None,
+):
+    '''
+    Given a local or remote repository path, an archive name, a configuration dict, the local Borg
+    version string, an argparse.Namespace of recreate arguments, an argparse.Namespace of global
+    arguments, and optional local and remote Borg paths, execute the recreate command with the
+    given arguments.
+    '''
+    lock_wait = config.get('lock_wait', None)
+    exclude_flags = make_exclude_flags(config)
+    compression = config.get('compression', None)
+    chunker_params = config.get('chunker_params', None)
+    # Available recompress modes: "if-different", "always", "never" (the default).
+    recompress = config.get('recompress', None)
+
+    # Write patterns to a temporary file and use that file with --patterns-from.
+    patterns_file = write_patterns_file(
+        patterns, borgmatic.config.paths.get_working_directory(config)
+    )
+
+    recreate_command = (
+        (local_path, 'recreate')
+        + (('--remote-path', remote_path) if remote_path else ())
+        + (('--log-json',) if global_arguments.log_json else ())
+        + (('--lock-wait', str(lock_wait)) if lock_wait is not None else ())
+        + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
+        + (('--debug', '--show-rc') if logger.isEnabledFor(logging.DEBUG) else ())
+        + (('--patterns-from', patterns_file.name) if patterns_file else ())
+        + (
+            (
+                '--list',
+                '--filter',
+                make_list_filter_flags(local_borg_version, global_arguments.dry_run),
+            )
+            if config.get('list_details')
+            else ()
+        )
+        # Flag --target works only for a single archive.
+        + (('--target', recreate_arguments.target) if recreate_arguments.target and archive else ())
+        + (
+            ('--comment', shlex.quote(recreate_arguments.comment))
+            if recreate_arguments.comment
+            else ()
+        )
+        + (('--timestamp', recreate_arguments.timestamp) if recreate_arguments.timestamp else ())
+        + (('--compression', compression) if compression else ())
+        + (('--chunker-params', chunker_params) if chunker_params else ())
+        + (('--recompress', recompress) if recompress else ())
+        + exclude_flags
+        + (
+            (
+                flags.make_repository_flags(repository, local_borg_version)
+                + flags.make_match_archives_flags(
+                    archive or config.get('match_archives'),
+                    config.get('archive_name_format'),
+                    local_borg_version,
+                )
+            )
+            if borgmatic.borg.feature.available(
+                borgmatic.borg.feature.Feature.SEPARATE_REPOSITORY_ARCHIVE, local_borg_version
+            )
+            else (
+                flags.make_repository_archive_flags(repository, archive, local_borg_version)
+                if archive
+                else flags.make_repository_flags(repository, local_borg_version)
+            )
+        )
+    )
+
+    if global_arguments.dry_run:
+        logger.info('Skipping the archive recreation (dry run)')
+        return
+
+    borgmatic.execute.execute_command(
+        full_command=recreate_command,
+        output_log_level=logging.INFO,
+        environment=borgmatic.borg.environment.make_environment(config),
+        working_directory=borgmatic.config.paths.get_working_directory(config),
+        borg_local_path=local_path,
+        borg_exit_codes=config.get('borg_exit_codes'),
+    )
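The trailing conditional in `recreate_command` picks between Borg 2's separate repository and archive-matching flags and Borg 1's combined `repository::archive` syntax. A hedged sketch of that selection, collapsing borgmatic's flags helpers into one hypothetical function:

```python
# Simplified sketch of the flag selection at the end of recreate_archive:
# Borg 2 takes --repo plus archive matching separately, while Borg 1
# addresses an archive as "repository::archive".
def make_target_flags(repository, archive, borg_major_version):
    if borg_major_version >= 2:
        flags = ('--repo', repository)

        if archive:
            flags += ('--match-archives', archive)

        return flags

    if archive:
        return (f'{repository}::{archive}',)

    return (repository,)
```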

+ 3 - 3
borgmatic/borg/repo_create.py

@@ -24,7 +24,7 @@ def create_repository(
     copy_crypt_key=False,
     append_only=None,
     storage_quota=None,
-    make_parent_dirs=False,
+    make_parent_directories=False,
     local_path='borg',
     remote_path=None,
 ):
@@ -79,7 +79,7 @@ def create_repository(
         + (('--copy-crypt-key',) if copy_crypt_key else ())
         + (('--append-only',) if append_only else ())
         + (('--storage-quota', storage_quota) if storage_quota else ())
-        + (('--make-parent-dirs',) if make_parent_dirs else ())
+        + (('--make-parent-dirs',) if make_parent_directories else ())
         + (('--info',) if logger.getEffectiveLevel() == logging.INFO else ())
         + (('--debug',) if logger.isEnabledFor(logging.DEBUG) else ())
         + (('--log-json',) if global_arguments.log_json else ())
@@ -98,7 +98,7 @@ def create_repository(
     execute_command(
         repo_create_command,
         output_file=DO_NOT_CAPTURE,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 3 - 3
borgmatic/borg/repo_delete.py

@@ -39,14 +39,14 @@ def make_repo_delete_command(
         + borgmatic.borg.flags.make_flags('umask', config.get('umask'))
         + borgmatic.borg.flags.make_flags('log-json', global_arguments.log_json)
         + borgmatic.borg.flags.make_flags('lock-wait', config.get('lock_wait'))
-        + borgmatic.borg.flags.make_flags('list', repo_delete_arguments.list_archives)
+        + borgmatic.borg.flags.make_flags('list', config.get('list_details'))
         + (
             (('--force',) + (('--force',) if repo_delete_arguments.force >= 2 else ()))
             if repo_delete_arguments.force
             else ()
         )
         + borgmatic.borg.flags.make_flags_from_arguments(
-            repo_delete_arguments, excludes=('list_archives', 'force', 'repository')
+            repo_delete_arguments, excludes=('list_details', 'force', 'repository')
         )
         + borgmatic.borg.flags.make_repository_flags(repository['path'], local_borg_version)
     )
@@ -88,7 +88,7 @@ def delete_repository(
             if repo_delete_arguments.force or repo_delete_arguments.cache_only
             else borgmatic.execute.DO_NOT_CAPTURE
         ),
-        extra_environment=borgmatic.borg.environment.make_environment(config),
+        environment=borgmatic.borg.environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 2 - 2
borgmatic/borg/repo_info.py

@@ -56,7 +56,7 @@ def display_repository_info(
     if repo_info_arguments.json:
         return execute_command_and_capture_output(
             full_command,
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             working_directory=working_directory,
             borg_local_path=local_path,
             borg_exit_codes=borg_exit_codes,
@@ -65,7 +65,7 @@ def display_repository_info(
         execute_command(
             full_command,
             output_log_level=logging.ANSWER,
-            extra_environment=environment.make_environment(config),
+            environment=environment.make_environment(config),
             working_directory=working_directory,
             borg_local_path=local_path,
             borg_exit_codes=borg_exit_codes,

+ 4 - 4
borgmatic/borg/repo_list.py

@@ -49,7 +49,7 @@ def resolve_archive_name(
 
     output = execute_command_and_capture_output(
         full_command,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),
@@ -113,7 +113,7 @@ def make_repo_list_command(
             if repo_list_arguments.prefix
             else (
                 flags.make_match_archives_flags(
-                    repo_list_arguments.match_archives or config.get('match_archives'),
+                    config.get('match_archives'),
                     config.get('archive_name_format'),
                     local_borg_version,
                 )
@@ -164,7 +164,7 @@ def list_repository(
 
     json_listing = execute_command_and_capture_output(
         json_command,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=working_directory,
         borg_local_path=local_path,
         borg_exit_codes=borg_exit_codes,
@@ -178,7 +178,7 @@ def list_repository(
     execute_command(
         main_command,
         output_log_level=logging.ANSWER,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=working_directory,
         borg_local_path=local_path,
         borg_exit_codes=borg_exit_codes,

+ 12 - 7
borgmatic/borg/transfer.py

@@ -32,17 +32,22 @@ def transfer_archives(
         + flags.make_flags('remote-path', remote_path)
         + flags.make_flags('umask', config.get('umask'))
         + flags.make_flags('log-json', global_arguments.log_json)
-        + flags.make_flags('lock-wait', config.get('lock_wait', None))
+        + flags.make_flags('lock-wait', config.get('lock_wait'))
+        + flags.make_flags('progress', config.get('progress'))
         + (
             flags.make_flags_from_arguments(
                 transfer_arguments,
-                excludes=('repository', 'source_repository', 'archive', 'match_archives'),
+                excludes=(
+                    'repository',
+                    'source_repository',
+                    'archive',
+                    'match_archives',
+                    'progress',
+                ),
             )
             or (
                 flags.make_match_archives_flags(
-                    transfer_arguments.match_archives
-                    or transfer_arguments.archive
-                    or config.get('match_archives'),
+                    transfer_arguments.archive or config.get('match_archives'),
                     config.get('archive_name_format'),
                     local_borg_version,
                 )
@@ -56,8 +61,8 @@ def transfer_archives(
     return execute_command(
         full_command,
         output_log_level=logging.ANSWER,
-        output_file=DO_NOT_CAPTURE if transfer_arguments.progress else None,
-        extra_environment=environment.make_environment(config),
+        output_file=DO_NOT_CAPTURE if config.get('progress') else None,
+        environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),
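The `flags.make_flags` calls above turn an option name and value into zero or more flag strings. A hypothetical sketch of that helper's contract (the real borgmatic helper may differ in detail):

```python
# Hypothetical make_flags-style helper: falsy values produce no flag, True
# produces a bare flag, and anything else becomes ("--name", "value").
def make_flags(name, value):
    if not value:
        return ()

    flag = f"--{name.replace('_', '-')}"

    if value is True:
        return (flag,)

    return (flag, str(value))
```

This contract is what makes `flags.make_flags('lock-wait', config.get('lock_wait'))` safe to call unconditionally: a missing option simply yields `()`.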

+ 1 - 1
borgmatic/borg/version.py

@@ -21,7 +21,7 @@ def local_borg_version(config, local_path='borg'):
     )
     output = execute_command_and_capture_output(
         full_command,
-        extra_environment=environment.make_environment(config),
+        environment=environment.make_environment(config),
         working_directory=borgmatic.config.paths.get_working_directory(config),
         borg_local_path=local_path,
         borg_exit_codes=config.get('borg_exit_codes'),

+ 389 - 57
borgmatic/commands/arguments.py

@@ -1,8 +1,13 @@
 import collections
+import io
 import itertools
+import re
 import sys
 from argparse import ArgumentParser
 
+import ruamel.yaml
+
+import borgmatic.config.schema
 from borgmatic.config import collect
 
 ACTION_ALIASES = {
@@ -27,6 +32,7 @@ ACTION_ALIASES = {
     'break-lock': [],
     'key': [],
     'borg': [],
+    'recreate': [],
 }
 
 
@@ -63,9 +69,9 @@ def get_subactions_for_actions(action_parsers):
 
 def omit_values_colliding_with_action_names(unparsed_arguments, parsed_arguments):
     '''
-    Given a sequence of string arguments and a dict from action name to parsed argparse.Namespace
-    arguments, return the string arguments with any values omitted that happen to be the same as
-    the name of a borgmatic action.
+    Given unparsed arguments as a sequence of strings and a dict from action name to parsed
+    argparse.Namespace arguments, return the string arguments with any values omitted that happen to
+    be the same as the name of a borgmatic action.
 
     This prevents, for instance, "check --only extract" from triggering the "extract" action.
     '''
@@ -282,17 +288,270 @@ def parse_arguments_for_actions(unparsed_arguments, action_parsers, global_parse
     )
 
 
-def make_parsers():
+OMITTED_FLAG_NAMES = {'match-archives', 'progress', 'statistics', 'list-details'}
+
+
+def make_argument_description(schema, flag_name):
     '''
-    Build a global arguments parser, individual action parsers, and a combined parser containing
-    both. Return them as a tuple. The global parser is useful for parsing just global arguments
-    while ignoring actions, and the combined parser is handy for displaying help that includes
-    everything: global flags, a list of actions, etc.
+    Given a configuration schema dict and a flag name for it, extend the schema's description with
+    an example or additional information as appropriate based on its type. Return the updated
+    description for use in a command-line argument.
+    '''
+    description = schema.get('description')
+    schema_type = schema.get('type')
+    example = schema.get('example')
+    pieces = [description] if description else []
+
+    if '[0]' in flag_name:
+        pieces.append(
+            'To specify a different list element, replace the "[0]" with another array index ("[1]", "[2]", etc.).'
+        )
+
+    if example and schema_type in ('array', 'object'):
+        example_buffer = io.StringIO()
+        yaml = ruamel.yaml.YAML(typ='safe')
+        yaml.default_flow_style = True
+        yaml.dump(example, example_buffer)
+
+        pieces.append(f'Example value: "{example_buffer.getvalue().strip()}"')
+
+    return ' '.join(pieces).replace('%', '%%')
+
+
+def add_array_element_arguments(arguments_group, unparsed_arguments, flag_name):
+    r'''
+    Given an argparse._ArgumentGroup instance, a sequence of unparsed argument strings, and a dotted
+    flag name, add command-line array element flags that correspond to the given unparsed arguments.
+
+    Here's the background. We want to support flags that can have arbitrary indices like:
+
+      --foo.bar[1].baz
+
+    But argparse doesn't support that natively because the index can be an arbitrary number. We
+    won't let that stop us though, will we?
+
+    If the current flag name has an array component in it (e.g. a name with "[0]"), then make a
+    pattern that would match the flag name regardless of the number that's in it. The idea is that
+    we want to look for unparsed arguments that appear like the flag name, but instead of "[0]" they
+    have, say, "[1]" or "[123]".
+
+    Next, we check each unparsed argument against that pattern. If one of them matches, add an
+    argument flag for it to the argument parser group. Example:
+
+    Let's say flag_name is:
+
+        --foo.bar[0].baz
+
+    ... then the regular expression pattern will be:
+
+        ^--foo\.bar\[\d+\]\.baz
+
+    ... and, if that matches an unparsed argument of:
+
+        --foo.bar[1].baz
+
+    ... then an argument flag will get added equal to that unparsed argument. And so the unparsed
+    argument will match it when parsing is performed! In this manner, we're using the actual user
+    CLI input to inform what exact flags we support.
+    '''
+    if '[0]' not in flag_name or not unparsed_arguments or '--help' in unparsed_arguments:
+        return
+
+    pattern = re.compile(fr'^--{flag_name.replace("[0]", r"\[\d+\]").replace(".", r"\.")}$')
+
+    try:
+        # Find an existing list index flag (and its action) corresponding to the given flag name.
+        (argument_action, existing_flag_name) = next(
+            (action, action_flag_name)
+            for action in arguments_group._group_actions
+            for action_flag_name in action.option_strings
+            if pattern.match(action_flag_name)
+            if f'--{flag_name}'.startswith(action_flag_name)
+        )
+
+        # Based on the type of the action (e.g. argparse._StoreTrueAction), look up the corresponding
+        # action registry name (e.g., "store_true") to pass to add_argument(action=...) below.
+        action_registry_name = next(
+            registry_name
+            for registry_name, action_type in arguments_group._registries['action'].items()
+            # Not using isinstance() here because we only want an exact match—no parent classes.
+            if type(argument_action) is action_type
+        )
+    except StopIteration:
+        return
+
+    for unparsed in unparsed_arguments:
+        unparsed_flag_name = unparsed.split('=', 1)[0]
+        destination_name = unparsed_flag_name.lstrip('-').replace('-', '_')
+
+        if not pattern.match(unparsed_flag_name) or unparsed_flag_name == existing_flag_name:
+            continue
+
+        if action_registry_name in ('store_true', 'store_false'):
+            arguments_group.add_argument(
+                unparsed_flag_name,
+                action=action_registry_name,
+                default=argument_action.default,
+                dest=destination_name,
+                required=argument_action.nargs,
+            )
+        else:
+            arguments_group.add_argument(
+                unparsed_flag_name,
+                action=action_registry_name,
+                choices=argument_action.choices,
+                default=argument_action.default,
+                dest=destination_name,
+                nargs=argument_action.nargs,
+                required=argument_action.nargs,
+                type=argument_action.type,
+            )
+
+
+def add_arguments_from_schema(arguments_group, schema, unparsed_arguments, names=None):
+    '''
+    Given an argparse._ArgumentGroup instance, a configuration schema dict, and a sequence of
+    unparsed argument strings, convert the entire schema into corresponding command-line flags and
+    add them to the arguments group.
+
+    For instance, given a schema of:
+
+        {
+            'type': 'object',
+            'properties': {
+                'foo': {
+                    'type': 'object',
+                    'properties': {
+                        'bar': {'type': 'integer'}
+                    }
+                }
+            }
+        }
+
+    ... the following flag will be added to the arguments group:
+
+        --foo.bar
+
+    If "foo" is instead an array of objects, both of the following will get added:
+
+        --foo
+        --foo[0].bar
+
+    And if names are also passed in, they are considered to be the name components of an option
+    (e.g. "foo" and "bar") and are used to construct a resulting flag.
+
+    Bail if the schema is not a dict.
+    '''
+    if names is None:
+        names = ()
+
+    if not isinstance(schema, dict):
+        return
+
+    schema_type = schema.get('type')
+
+    # If this option has multiple types, just use the first one (that isn't "null").
+    if isinstance(schema_type, list):
+        try:
+            schema_type = next(single_type for single_type in schema_type if single_type != 'null')
+        except StopIteration:
+            raise ValueError(f'Unknown type in configuration schema: {schema_type}')
+
+    # If this is an "object" type, recurse for each child option ("property").
+    if schema_type == 'object':
+        properties = schema.get('properties')
+
+        # If there are child properties, recurse for each one. But if there are no child properties,
+        # fall through so that a flag gets added below for the (empty) object.
+        if properties:
+            for name, child in properties.items():
+                add_arguments_from_schema(
+                    arguments_group, child, unparsed_arguments, names + (name,)
+                )
+
+            return
+
+    # If this is an "array" type, recurse for each items type child option. Don't return yet so that
+    # a flag also gets added below for the array itself.
+    if schema_type == 'array':
+        items = schema.get('items', {})
+        properties = borgmatic.config.schema.get_properties(items)
+
+        if properties:
+            for name, child in properties.items():
+                add_arguments_from_schema(
+                    arguments_group,
+                    child,
+                    unparsed_arguments,
+                    names[:-1] + (f'{names[-1]}[0]',) + (name,),
+                )
+        # If there aren't any children, then this is an array of scalars. Recurse accordingly.
+        else:
+            add_arguments_from_schema(
+                arguments_group, items, unparsed_arguments, names[:-1] + (f'{names[-1]}[0]',)
+            )
+
+    flag_name = '.'.join(names).replace('_', '-')
+
+    # Certain options already have corresponding flags on individual actions (like "create
+    # --progress"), so don't bother adding them to the global flags.
+    if not flag_name or flag_name in OMITTED_FLAG_NAMES:
+        return
+
+    metavar = names[-1].upper()
+    description = make_argument_description(schema, flag_name)
+
+    # The object=str and array=str given here is to support specifying an object or an array as a
+    # YAML string on the command-line.
+    argument_type = borgmatic.config.schema.parse_type(schema_type, object=str, array=str)
+
+    # As a UX nicety, add separate true and false flags for boolean options.
+    if schema_type == 'boolean':
+        arguments_group.add_argument(
+            f'--{flag_name}',
+            action='store_true',
+            default=None,
+            help=description,
+        )
+
+        if names[-1].startswith('no_'):
+            no_flag_name = '.'.join(names[:-1] + (names[-1][len('no_') :],)).replace('_', '-')
+        else:
+            no_flag_name = '.'.join(names[:-1] + ('no-' + names[-1],)).replace('_', '-')
+
+        arguments_group.add_argument(
+            f'--{no_flag_name}',
+            dest=flag_name.replace('-', '_'),
+            action='store_false',
+            default=None,
+            help=f'Set the --{flag_name} value to false.',
+        )
+    else:
+        arguments_group.add_argument(
+            f'--{flag_name}',
+            type=argument_type,
+            metavar=metavar,
+            help=description,
+        )
+
+    add_array_element_arguments(arguments_group, unparsed_arguments, flag_name)
+
+
+def make_parsers(schema, unparsed_arguments):
+    '''
+    Given a configuration schema dict and unparsed arguments as a sequence of strings, build a
+    global arguments parser, individual action parsers, and a combined parser containing both.
+    Return them as a tuple. The global parser is useful for parsing just global arguments while
+    ignoring actions, and the combined parser is handy for displaying help that includes everything:
+    global flags, a list of actions, etc.
     '''
     config_paths = collect.get_default_config_paths(expand_home=True)
     unexpanded_config_paths = collect.get_default_config_paths(expand_home=False)
 
-    global_parser = ArgumentParser(add_help=False)
+    # Using allow_abbrev=False here prevents the global parser from erroring about "ambiguous"
+    # options like --encryption. Such options are intended for an action parser rather than the
+    # global parser, and so we don't want to error on them here.
+    global_parser = ArgumentParser(allow_abbrev=False, add_help=False)
     global_group = global_parser.add_argument_group('global arguments')
 
     global_group.add_argument(
@@ -309,9 +568,6 @@ def make_parsers():
         action='store_true',
         help='Go through the motions, but do not actually write to any repositories',
     )
-    global_group.add_argument(
-        '-nc', '--no-color', dest='no_color', action='store_true', help='Disable colored output'
-    )
     global_group.add_argument(
         '-v',
         '--verbosity',
@@ -388,6 +644,7 @@ def make_parsers():
         action='store_true',
         help='Display installed version number of borgmatic and exit',
     )
+    add_arguments_from_schema(global_group, schema, unparsed_arguments)
 
     global_plus_action_parser = ArgumentParser(
         description='''
@@ -415,7 +672,6 @@ def make_parsers():
         '--encryption',
         dest='encryption_mode',
         help='Borg repository encryption mode',
-        required=True,
     )
     repo_create_group.add_argument(
         '--source-repository',
@@ -434,6 +690,7 @@ def make_parsers():
     )
     repo_create_group.add_argument(
         '--append-only',
+        default=None,
         action='store_true',
         help='Create an append-only repository',
     )
@@ -443,6 +700,8 @@ def make_parsers():
     )
     repo_create_group.add_argument(
         '--make-parent-dirs',
+        dest='make_parent_directories',
+        default=None,
         action='store_true',
         help='Create any missing parent directories of the repository directory',
     )
@@ -477,7 +736,7 @@ def make_parsers():
     )
     transfer_group.add_argument(
         '--progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress as each archive is transferred',
     )
@@ -544,13 +803,17 @@ def make_parsers():
     )
     prune_group.add_argument(
         '--stats',
-        dest='stats',
-        default=False,
+        dest='statistics',
+        default=None,
         action='store_true',
-        help='Display statistics of the pruned archive',
+        help='Display statistics of the pruned archive [Borg 1 only]',
     )
     prune_group.add_argument(
-        '--list', dest='list_archives', action='store_true', help='List archives kept/pruned'
+        '--list',
+        dest='list_details',
+        default=None,
+        action='store_true',
+        help='List archives kept/pruned',
     )
     prune_group.add_argument(
         '--oldest',
@@ -588,8 +851,7 @@ def make_parsers():
     )
     compact_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress as each segment is compacted',
     )
@@ -603,7 +865,7 @@ def make_parsers():
     compact_group.add_argument(
         '--threshold',
         type=int,
-        dest='threshold',
+        dest='compact_threshold',
         help='Minimum saved space percentage threshold for compacting a segment, defaults to 10',
     )
     compact_group.add_argument(
@@ -624,20 +886,24 @@ def make_parsers():
     )
     create_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress for each file as it is backed up',
     )
     create_group.add_argument(
         '--stats',
-        dest='stats',
-        default=False,
+        dest='statistics',
+        default=None,
         action='store_true',
         help='Display statistics of archive',
     )
     create_group.add_argument(
-        '--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
+        '--list',
+        '--files',
+        dest='list_details',
+        default=None,
+        action='store_true',
+        help='Show per-file details',
     )
     create_group.add_argument(
         '--json', dest='json', default=False, action='store_true', help='Output results as JSON'
@@ -658,8 +924,7 @@ def make_parsers():
     )
     check_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress for each file as it is checked',
     )
@@ -716,12 +981,15 @@ def make_parsers():
     )
     delete_group.add_argument(
         '--list',
-        dest='list_archives',
+        dest='list_details',
+        default=None,
         action='store_true',
         help='Show details for the deleted archives',
     )
     delete_group.add_argument(
         '--stats',
+        dest='statistics',
+        default=None,
         action='store_true',
         help='Display statistics for the deleted archives',
     )
@@ -826,8 +1094,7 @@ def make_parsers():
     )
     extract_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress for each file as it is extracted',
     )
@@ -902,8 +1169,7 @@ def make_parsers():
     )
     config_bootstrap_group.add_argument(
         '--progress',
-        dest='progress',
-        default=False,
+        default=None,
         action='store_true',
         help='Display progress for each file as it is extracted',
     )
@@ -996,7 +1262,12 @@ def make_parsers():
         '--tar-filter', help='Name of filter program to pipe data through'
     )
     export_tar_group.add_argument(
-        '--list', '--files', dest='list_files', action='store_true', help='Show per-file details'
+        '--list',
+        '--files',
+        dest='list_details',
+        default=None,
+        action='store_true',
+        help='Show per-file details',
     )
     export_tar_group.add_argument(
         '--strip-components',
@@ -1107,7 +1378,8 @@ def make_parsers():
     )
     repo_delete_group.add_argument(
         '--list',
-        dest='list_archives',
+        dest='list_details',
+        default=None,
         action='store_true',
         help='Show details for the archives in the given repository',
     )
@@ -1479,6 +1751,31 @@ def make_parsers():
         '-h', '--help', action='help', help='Show this help message and exit'
     )
 
+    key_import_parser = key_parsers.add_parser(
+        'import',
+        help='Import a copy of the repository key from backup',
+        description='Import a copy of the repository key from backup',
+        add_help=False,
+    )
+    key_import_group = key_import_parser.add_argument_group('key import arguments')
+    key_import_group.add_argument(
+        '--paper',
+        action='store_true',
+        help='Import interactively from a backup done with --paper',
+    )
+    key_import_group.add_argument(
+        '--repository',
+        help='Path of repository to import the key from, defaults to the configured repository if there is only one, quoted globs supported',
+    )
+    key_import_group.add_argument(
+        '--path',
+        metavar='PATH',
+        help='Path to import the key from backup, defaults to stdin',
+    )
+    key_import_group.add_argument(
+        '-h', '--help', action='help', help='Show this help message and exit'
+    )
+
     key_change_passphrase_parser = key_parsers.add_parser(
         'change-passphrase',
         help='Change the passphrase protecting the repository key',
@@ -1496,6 +1793,56 @@ def make_parsers():
         '-h', '--help', action='help', help='Show this help message and exit'
     )
 
+    recreate_parser = action_parsers.add_parser(
+        'recreate',
+        aliases=ACTION_ALIASES['recreate'],
+        help='Recreate an archive in a repository (with Borg 1.2+, you must run compact afterwards to actually free space)',
+        description='Recreate an archive in a repository (with Borg 1.2+, you must run compact afterwards to actually free space)',
+        add_help=False,
+    )
+    recreate_group = recreate_parser.add_argument_group('recreate arguments')
+    recreate_group.add_argument(
+        '--repository',
+        help='Path of repository containing archive to recreate, defaults to the configured repository if there is only one, quoted globs supported',
+    )
+    recreate_group.add_argument(
+        '--archive',
+        help='Archive name, hash, or series to recreate',
+    )
+    recreate_group.add_argument(
+        '--list',
+        dest='list_details',
+        default=None,
+        action='store_true',
+        help='Show per-file details',
+    )
+    recreate_group.add_argument(
+        '--target',
+        metavar='TARGET',
+        help='Create a new archive from the specified archive (via --archive), without replacing it',
+    )
+    recreate_group.add_argument(
+        '--comment',
+        metavar='COMMENT',
+        help='Add comment text to the archive or, if an archive is not provided, to all matching archives',
+    )
+    recreate_group.add_argument(
+        '--timestamp',
+        metavar='TIMESTAMP',
+        help='Manually override the archive creation date/time (UTC)',
+    )
+    recreate_group.add_argument(
+        '-a',
+        '--match-archives',
+        '--glob-archives',
+        dest='match_archives',
+        metavar='PATTERN',
+        help='Only consider archive names, hashes, or series matching this pattern [Borg 2.x+ only]',
+    )
+    recreate_group.add_argument(
+        '-h', '--help', action='help', help='Show this help message and exit'
+    )
+
     borg_parser = action_parsers.add_parser(
         'borg',
         aliases=ACTION_ALIASES['borg'],
@@ -1523,15 +1870,18 @@ def make_parsers():
     return global_parser, action_parsers, global_plus_action_parser
 
 
-def parse_arguments(*unparsed_arguments):
+def parse_arguments(schema, *unparsed_arguments):
     '''
-    Given command-line arguments with which this script was invoked, parse the arguments and return
-    them as a dict mapping from action name (or "global") to an argparse.Namespace instance.
+    Given a configuration schema dict and the unparsed command-line arguments with which this
+    script was invoked, as a sequence of strings, parse the arguments and return them as
+    a dict mapping from action name (or "global") to an argparse.Namespace instance.
 
     Raise ValueError if the arguments cannot be parsed.
     Raise SystemExit with an error code of 0 if "--help" was requested.
     '''
-    global_parser, action_parsers, global_plus_action_parser = make_parsers()
+    global_parser, action_parsers, global_plus_action_parser = make_parsers(
+        schema, unparsed_arguments
+    )
     arguments, remaining_action_arguments = parse_arguments_for_actions(
         unparsed_arguments, action_parsers.choices, global_parser
     )
@@ -1559,15 +1909,6 @@ def parse_arguments(*unparsed_arguments):
             f"Unrecognized argument{'s' if len(unknown_arguments) > 1 else ''}: {' '.join(unknown_arguments)}"
         )
 
-    if 'create' in arguments and arguments['create'].list_files and arguments['create'].progress:
-        raise ValueError(
-            'With the create action, only one of --list (--files) and --progress flags can be used.'
-        )
-    if 'create' in arguments and arguments['create'].list_files and arguments['create'].json:
-        raise ValueError(
-            'With the create action, only one of --list (--files) and --json flags can be used.'
-        )
-
     if (
         ('list' in arguments and 'repo-info' in arguments and arguments['list'].json)
         or ('list' in arguments and 'info' in arguments and arguments['list'].json)
@@ -1575,15 +1916,6 @@ def parse_arguments(*unparsed_arguments):
     ):
         raise ValueError('With the --json flag, multiple actions cannot be used together.')
 
-    if (
-        'transfer' in arguments
-        and arguments['transfer'].archive
-        and arguments['transfer'].match_archives
-    ):
-        raise ValueError(
-            'With the transfer action, only one of --archive and --match-archives flags can be used.'
-        )
-
     if 'list' in arguments and (arguments['list'].prefix and arguments['list'].match_archives):
         raise ValueError(
             'With the list action, only one of --prefix or --match-archives flags can be used.'

The file diff has been suppressed because it is too large
+ 486 - 409
borgmatic/commands/borgmatic.py


+ 12 - 2
borgmatic/commands/completion/bash.py

@@ -1,5 +1,7 @@
 import borgmatic.commands.arguments
 import borgmatic.commands.completion.actions
+import borgmatic.commands.completion.flag
+import borgmatic.config.validate
 
 
 def parser_flags(parser):
@@ -7,7 +9,12 @@ def parser_flags(parser):
     Given an argparse.ArgumentParser instance, return its argument flags in a space-separated
     string.
     '''
-    return ' '.join(option for action in parser._actions for option in action.option_strings)
+    return ' '.join(
+        flag_variant
+        for action in parser._actions
+        for flag_name in action.option_strings
+        for flag_variant in borgmatic.commands.completion.flag.variants(flag_name)
+    )
 
 
 def bash_completion():
@@ -19,7 +26,10 @@ def bash_completion():
         unused_global_parser,
         action_parsers,
         global_plus_action_parser,
-    ) = borgmatic.commands.arguments.make_parsers()
+    ) = borgmatic.commands.arguments.make_parsers(
+        schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
+        unparsed_arguments=(),
+    )
     global_flags = parser_flags(global_plus_action_parser)
 
     # Avert your eyes.

+ 15 - 8
borgmatic/commands/completion/fish.py

@@ -4,6 +4,7 @@ from textwrap import dedent
 
 import borgmatic.commands.arguments
 import borgmatic.commands.completion.actions
+import borgmatic.config.validate
 
 
 def has_file_options(action: Action):
@@ -26,9 +27,11 @@ def has_choice_options(action: Action):
 def has_unknown_required_param_options(action: Action):
     '''
     A catch-all for options that take a required parameter, but we don't know what the parameter is.
-    This should be used last. These are actions that take something like a glob, a list of numbers, or a string.
+    This should be used last. These are actions that take something like a glob, a list of numbers,
+    or a string.
 
-    Actions that match this pattern should not show the normal arguments, because those are unlikely to be valid.
+    Actions that match this pattern should not show the normal arguments, because those are unlikely
+    to be valid.
     '''
     return (
         action.required is True
@@ -52,9 +55,9 @@ def has_exact_options(action: Action):
 
 def exact_options_completion(action: Action):
     '''
-    Given an argparse.Action instance, return a completion invocation that forces file completions, options completion,
-    or just that some value follow the action, if the action takes such an argument and was the last action on the
-    command line prior to the cursor.
+    Given an argparse.Action instance, return a completion invocation that forces file completions,
+    options completion, or just that some value follow the action, if the action takes such an
+    argument and was the last action on the command line prior to the cursor.
 
     Otherwise, return an empty string.
     '''
@@ -80,8 +83,9 @@ def exact_options_completion(action: Action):
 
 def dedent_strip_as_tuple(string: str):
     '''
-    Dedent a string, then strip it to avoid requiring your first line to have content, then return a tuple of the string.
-    Makes it easier to write multiline strings for completions when you join them with a tuple.
+    Dedent a string, then strip it to avoid requiring your first line to have content, then return a
+    tuple of the string. Makes it easier to write multiline strings for completions when you join
+    them with a tuple.
     '''
     return (dedent(string).strip('\n'),)
 
@@ -95,7 +99,10 @@ def fish_completion():
         unused_global_parser,
         action_parsers,
         global_plus_action_parser,
-    ) = borgmatic.commands.arguments.make_parsers()
+    ) = borgmatic.commands.arguments.make_parsers(
+        schema=borgmatic.config.validate.load_schema(borgmatic.config.validate.schema_filename()),
+        unparsed_arguments=(),
+    )
 
     all_action_parsers = ' '.join(action for action in action_parsers.choices.keys())
 

+ 13 - 0
borgmatic/commands/completion/flag.py

@@ -0,0 +1,13 @@
+def variants(flag_name):
+    '''
+    Given a flag name as a string, yield it and any variations that should be complete-able as well.
+    For instance, for a string like "--foo[0].bar", yield "--foo[0].bar", "--foo[1].bar", ...,
+    "--foo[9].bar".
+    '''
+    if '[0]' in flag_name:
+        for index in range(0, 10):
+            yield flag_name.replace('[0]', f'[{index}]')
+
+        return
+
+    yield flag_name
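The new `variants()` helper fans a zero-indexed flag out into ten completable spellings. A small usage sketch (function body copied from the diff; the flag names are illustrative):

```python
def variants(flag_name):
    '''
    Given a flag name as a string, yield it and any variations that should be
    complete-able as well: a name containing "[0]" fans out to "[0]"
    through "[9]".
    '''
    if '[0]' in flag_name:
        for index in range(0, 10):
            yield flag_name.replace('[0]', f'[{index}]')

        return

    yield flag_name

# A plain flag passes through unchanged ...
assert list(variants('--verbosity')) == ['--verbosity']

# ... while an indexed flag yields ten completable spellings.
indexed = list(variants('--commands[0].run'))
assert len(indexed) == 10
assert indexed[3] == '--commands[3].run'
```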

+ 176 - 0
borgmatic/config/arguments.py

@@ -0,0 +1,176 @@
+import io
+import re
+
+import ruamel.yaml
+
+import borgmatic.config.schema
+
+LIST_INDEX_KEY_PATTERN = re.compile(r'^(?P<list_name>[a-zA-Z_-]+)\[(?P<index>\d+)\]$')
+
+
+def set_values(config, keys, value):
+    '''
+    Given a configuration dict, a sequence of parsed key strings, and a string value, descend into
+    the configuration hierarchy based on the given keys and set the value into the right place.
+    For example, consider these keys:
+
+        ('foo', 'bar', 'baz')
+
+    This looks up "foo" in the given configuration dict. And within that, it looks up "bar". And
+    then within that, it looks up "baz" and sets it to the given value. Another example:
+
+        ('mylist[0]', 'foo')
+
+    This looks for the zeroth element of "mylist" in the given configuration. And within that, it
+    looks up "foo" and sets it to the given value.
+    '''
+    if not keys:
+        return
+
+    first_key = keys[0]
+
+    # Support "mylist[0]" list index syntax.
+    match = LIST_INDEX_KEY_PATTERN.match(first_key)
+
+    if match:
+        list_key = match.group('list_name')
+        list_index = int(match.group('index'))
+
+        try:
+            if len(keys) == 1:
+                config[list_key][list_index] = value
+
+                return
+
+            if list_key not in config:
+                config[list_key] = []
+
+            set_values(config[list_key][list_index], keys[1:], value)
+        except (IndexError, KeyError):
+            raise ValueError(f'Argument list index {first_key} is out of range')
+
+        return
+
+    if len(keys) == 1:
+        config[first_key] = value
+
+        return
+
+    if first_key not in config:
+        config[first_key] = {}
+
+    set_values(config[first_key], keys[1:], value)
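The descent described in the docstring can be sketched without the list-index branch (a simplified version of the function above; the real helper also supports "mylist[0]"-style keys):

```python
def set_values(config, keys, value):
    '''
    Descend into the configuration dict along the given keys, creating nested
    dicts as needed, and set the final key to the given value.
    '''
    first_key, *rest = keys

    if not rest:
        config[first_key] = value
        return

    # Create the intermediate dict if missing, then recurse into it.
    set_values(config.setdefault(first_key, {}), rest, value)

config = {}
set_values(config, ('foo', 'bar', 'baz'), 'quux')
assert config == {'foo': {'bar': {'baz': 'quux'}}}
```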
+
+
+def type_for_option(schema, option_keys):
+    '''
+    Given a configuration schema dict and a sequence of keys identifying a potentially nested
+    option, e.g. ('extra_borg_options', 'create'), return the schema type of that option as a
+    string.
+
+    Return None if the option or its type cannot be found in the schema.
+    '''
+    option_schema = schema
+
+    for key in option_keys:
+        # Support "name[0]"-style list index syntax.
+        match = LIST_INDEX_KEY_PATTERN.match(key)
+        properties = borgmatic.config.schema.get_properties(option_schema)
+
+        try:
+            if match:
+                option_schema = properties[match.group('list_name')]['items']
+            else:
+                option_schema = properties[key]
+        except KeyError:
+            return None
+
+    try:
+        return option_schema['type']
+    except KeyError:
+        return None
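The schema walk above can be sketched like so (simplified: this version reads `properties` directly and skips the list-index and `oneOf` handling that `get_properties()` layers on; the schema dict is a made-up example):

```python
def type_for_option(schema, option_keys):
    '''
    Walk schema "properties" along the given option keys and return the leaf
    option's "type" as a string, or None if it can't be found.
    '''
    node = schema

    for key in option_keys:
        try:
            node = node['properties'][key]
        except KeyError:
            return None

    return node.get('type')

schema = {
    'properties': {
        'extra_borg_options': {
            'type': 'object',
            'properties': {'create': {'type': 'string'}},
        }
    }
}
assert type_for_option(schema, ('extra_borg_options', 'create')) == 'string'
assert type_for_option(schema, ('missing',)) is None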
+
+
+def convert_value_type(value, option_type):
+    '''
+    Given a string value and its schema type as a string, determine its logical type (string,
+    boolean, integer, etc.), and return it converted to that type.
+
+    If the destination option type is a string, then leave the value as-is so that special
+    characters in it don't get interpreted as YAML during conversion.
+
+    And if the source value isn't a string, return it as-is.
+
+    Raise ValueError if there's a parse issue with the YAML or if the parsed value doesn't match
+    the option type.
+    '''
+    if not isinstance(value, str):
+        return value
+
+    if option_type == 'string':
+        return value
+
+    try:
+        parsed_value = ruamel.yaml.YAML(typ='safe').load(io.StringIO(value))
+    except ruamel.yaml.error.YAMLError as error:
+        raise ValueError(f'Argument value "{value}" is invalid: {error.problem}')
+
+    if not isinstance(parsed_value, borgmatic.config.schema.parse_type(option_type)):
+        raise ValueError(f'Argument value "{value}" is not of the expected type: {option_type}')
+
+    return parsed_value
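The conversion behavior can be sketched with the standard library (an approximation: this uses `json` in place of `ruamel.yaml`, which covers the common scalar spellings but not YAML-only ones like `yes`/`no` booleans):

```python
import json

def convert_value_type(value, option_type):
    '''
    Parse a command-line string value into its schema type. String-typed
    options are returned as-is so special characters aren't reinterpreted.
    '''
    if not isinstance(value, str) or option_type == 'string':
        return value

    type_map = {'boolean': bool, 'integer': int, 'number': (int, float)}

    try:
        parsed_value = json.loads(value)
    except json.JSONDecodeError as error:
        raise ValueError(f'Argument value "{value}" is invalid: {error}')

    if not isinstance(parsed_value, type_map[option_type]):
        raise ValueError(f'Argument value "{value}" is not of the expected type: {option_type}')

    return parsed_value

assert convert_value_type('5', 'integer') == 5
assert convert_value_type('true', 'boolean') is True
assert convert_value_type('1.5', 'number') == 1.5
assert convert_value_type('yes', 'string') == 'yes'
```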
+
+
+def prepare_arguments_for_config(global_arguments, schema):
+    '''
+    Given global arguments as an argparse.Namespace and a configuration schema dict, parse each
+    argument that corresponds to an option in the schema and return a sequence of tuples (keys,
+    values) for that option, where keys is a sequence of strings. For instance, given the following
+    arguments:
+
+        argparse.Namespace(**{'my_option.sub_option': 'value1', 'other_option': 'value2'})
+
+    ... return this:
+
+        (
+            (('my_option', 'sub_option'), 'value1'),
+            (('other_option',), 'value2'),
+        )
+    '''
+    prepared_values = []
+
+    for argument_name, value in global_arguments.__dict__.items():
+        if value is None:
+            continue
+
+        keys = tuple(argument_name.split('.'))
+        option_type = type_for_option(schema, keys)
+
+        # The argument doesn't correspond to any option in the schema, so ignore it. It's
+        # probably a flag that borgmatic has on the command-line but not in configuration.
+        if option_type is None:
+            continue
+
+        prepared_values.append(
+            (
+                keys,
+                convert_value_type(value, option_type),
+            )
+        )
+
+    return tuple(prepared_values)
+
+
+def apply_arguments_to_config(config, schema, arguments):
+    '''
+    Given a configuration dict, a corresponding configuration schema dict, and arguments as a dict
+    from action name to argparse.Namespace, set those given argument values into their corresponding
+    configuration options in the configuration dict.
+
+    This supports argument flags of the form "--foo.bar.baz" where each dotted component is a nested
+    configuration object. Additionally, flags like "--foo.bar[0].baz" are supported to update a list
+    element in the configuration.
+    '''
+    for action_arguments in arguments.values():
+        for keys, value in prepare_arguments_for_config(action_arguments, schema):
+            set_values(config, keys, value)
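The namespace-to-keys step that `prepare_arguments_for_config()` performs can be sketched on its own (simplified: this skips the schema lookup and type conversion that the real helper layers on; the argument names are illustrative):

```python
import argparse

def split_dotted_arguments(namespace):
    '''
    Yield a (keys, value) tuple for each argument that was actually set,
    splitting dotted argument names into key tuples.
    '''
    for argument_name, value in vars(namespace).items():
        if value is None:
            continue

        yield tuple(argument_name.split('.')), value

namespace = argparse.Namespace(
    **{'my_option.sub_option': 'value1', 'other_option': 'value2'}
)
assert dict(split_dotted_arguments(namespace)) == {
    ('my_option', 'sub_option'): 'value1',
    ('other_option',): 'value2',
}
```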

+ 57 - 54
borgmatic/config/generate.py

@@ -5,6 +5,7 @@ import re
 
 import ruamel.yaml
 
+import borgmatic.config.schema
 from borgmatic.config import load, normalize
 
 INDENT = 4
@@ -21,45 +22,59 @@ def insert_newline_before_comment(config, field_name):
     )
 
 
-def get_properties(schema):
-    '''
-    Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
-    potential properties, returned their merged properties instead.
-    '''
-    if 'oneOf' in schema:
-        return dict(
-            collections.ChainMap(*[sub_schema['properties'] for sub_schema in schema['oneOf']])
-        )
-
-    return schema['properties']
+SCALAR_SCHEMA_TYPES = {'string', 'boolean', 'integer', 'number'}
 
 
-def schema_to_sample_configuration(schema, level=0, parent_is_sequence=False):
+def schema_to_sample_configuration(schema, source_config=None, level=0, parent_is_sequence=False):
     '''
-    Given a loaded configuration schema, generate and return sample config for it. Include comments
-    for each option based on the schema "description".
+    Given a loaded configuration schema and a source configuration, generate and return sample
+    config for the schema. Include comments for each option based on the schema "description".
+
+    If a source config is given, walk it alongside the given schema so that both can be taken into
+    account when commenting out particular options in add_comments_to_configuration_object().
     '''
     schema_type = schema.get('type')
     example = schema.get('example')
-    if example is not None:
-        return example
 
-    if schema_type == 'array' or (isinstance(schema_type, list) and 'array' in schema_type):
+    if borgmatic.config.schema.compare_types(schema_type, {'array'}):
         config = ruamel.yaml.comments.CommentedSeq(
-            [schema_to_sample_configuration(schema['items'], level, parent_is_sequence=True)]
+            example
+            if borgmatic.config.schema.compare_types(
+                schema['items'].get('type'), SCALAR_SCHEMA_TYPES
+            )
+            else [
+                schema_to_sample_configuration(
+                    schema['items'], source_config, level, parent_is_sequence=True
+                )
+            ]
         )
         add_comments_to_configuration_sequence(config, schema, indent=(level * INDENT))
-    elif schema_type == 'object' or (isinstance(schema_type, list) and 'object' in schema_type):
-        config = ruamel.yaml.comments.CommentedMap(
-            [
-                (field_name, schema_to_sample_configuration(sub_schema, level + 1))
-                for field_name, sub_schema in get_properties(schema).items()
-            ]
+    elif borgmatic.config.schema.compare_types(schema_type, {'object'}):
+        if source_config and isinstance(source_config, list) and isinstance(source_config[0], dict):
+            source_config = dict(collections.ChainMap(*source_config))
+
+        config = (
+            ruamel.yaml.comments.CommentedMap(
+                [
+                    (
+                        field_name,
+                        schema_to_sample_configuration(
+                            sub_schema, (source_config or {}).get(field_name, {}), level + 1
+                        ),
+                    )
+                    for field_name, sub_schema in borgmatic.config.schema.get_properties(
+                        schema
+                    ).items()
+                ]
+            )
+            or example
         )
         indent = (level * INDENT) + (SEQUENCE_INDENT if parent_is_sequence else 0)
         add_comments_to_configuration_object(
-            config, schema, indent=indent, skip_first=parent_is_sequence
+            config, schema, source_config, indent=indent, skip_first=parent_is_sequence
         )
+    elif borgmatic.config.schema.compare_types(schema_type, SCALAR_SCHEMA_TYPES, match=all):
+        return example
     else:
         raise ValueError(f'Schema at level {level} is unsupported: {schema}')
 
@@ -164,7 +179,7 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
         return
 
     for field_name in config[0].keys():
-        field_schema = get_properties(schema['items']).get(field_name, {})
+        field_schema = borgmatic.config.schema.get_properties(schema['items']).get(field_name, {})
         description = field_schema.get('description')
 
         # No description to use? Skip it.
@@ -178,26 +193,35 @@ def add_comments_to_configuration_sequence(config, schema, indent=0):
         return
 
 
-REQUIRED_KEYS = {'source_directories', 'repositories', 'keep_daily'}
+DEFAULT_KEYS = {'source_directories', 'repositories', 'keep_daily'}
 COMMENTED_OUT_SENTINEL = 'COMMENT_OUT'
 
 
-def add_comments_to_configuration_object(config, schema, indent=0, skip_first=False):
+def add_comments_to_configuration_object(
+    config, schema, source_config=None, indent=0, skip_first=False
+):
     '''
     Using descriptions from a schema as a source, add those descriptions as comments to the given
-    config mapping, before each field. Indent the comment the given number of characters.
+    configuration dict, putting them before each field. Indent the comment the given number of
+    characters.
+
+    Add a sentinel for commenting out options that are neither in DEFAULT_KEYS nor in the given
+    source configuration dict. The idea is that any options used in the source configuration should
+    stay active in the generated configuration.
     '''
     for index, field_name in enumerate(config.keys()):
         if skip_first and index == 0:
             continue
 
-        field_schema = get_properties(schema).get(field_name, {})
+        field_schema = borgmatic.config.schema.get_properties(schema).get(field_name, {})
         description = field_schema.get('description', '').strip()
 
-        # If this is an optional key, add an indicator to the comment flagging it to be commented
+        # If this isn't a default key, add an indicator to the comment flagging it to be commented
         # out from the sample configuration. This sentinel is consumed by downstream processing that
         # does the actual commenting out.
-        if field_name not in REQUIRED_KEYS:
+        if field_name not in DEFAULT_KEYS and (
+            source_config is None or field_name not in source_config
+        ):
             description = (
                 '\n'.join((description, COMMENTED_OUT_SENTINEL))
                 if description
@@ -217,21 +241,6 @@ def add_comments_to_configuration_object(config, schema, indent=0, skip_first=Fa
 RUAMEL_YAML_COMMENTS_INDEX = 1
 
 
-def remove_commented_out_sentinel(config, field_name):
-    '''
-    Given a configuration CommentedMap and a top-level field name in it, remove any "commented out"
-    sentinel found at the end of its YAML comments. This prevents the given field name from getting
-    commented out by downstream processing that consumes the sentinel.
-    '''
-    try:
-        last_comment_value = config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX][-1].value
-    except KeyError:
-        return
-
-    if last_comment_value == f'# {COMMENTED_OUT_SENTINEL}\n':
-        config.ca.items[field_name][RUAMEL_YAML_COMMENTS_INDEX].pop()
-
-
 def merge_source_configuration_into_destination(destination_config, source_config):
     '''
     Deep merge the given source configuration dict into the destination configuration CommentedMap,
@@ -246,12 +255,6 @@ def merge_source_configuration_into_destination(destination_config, source_confi
         return source_config
 
     for field_name, source_value in source_config.items():
-        # Since this key/value is from the source configuration, leave it uncommented and remove any
-        # sentinel that would cause it to get commented out.
-        remove_commented_out_sentinel(
-            ruamel.yaml.comments.CommentedMap(destination_config), field_name
-        )
-
         # This is a mapping. Recurse for this key/value.
         if isinstance(source_value, collections.abc.Mapping):
             destination_config[field_name] = merge_source_configuration_into_destination(
@@ -297,7 +300,7 @@ def generate_sample_configuration(
         normalize.normalize(source_filename, source_config)
 
     destination_config = merge_source_configuration_into_destination(
-        schema_to_sample_configuration(schema), source_config
+        schema_to_sample_configuration(schema, source_config), source_config
     )
 
     if dry_run:

+ 1 - 1
borgmatic/config/load.py

@@ -69,7 +69,7 @@ def include_configuration(loader, filename_node, include_directory, config_paths
         ]
 
     raise ValueError(
-        '!include value is not supported; use a single filename or a list of filenames'
+        'The value given for the !include tag is invalid; use a single filename or a list of filenames instead'
     )
 
 

+ 90 - 1
borgmatic/config/normalize.py

@@ -58,6 +58,90 @@ def normalize_sections(config_filename, config):
     return []
 
 
+def make_command_hook_deprecation_log(config_filename, option_name):  # pragma: no cover
+    '''
+    Given a configuration filename and the name of a configuration option, return a deprecation
+    warning log for it.
+    '''
+    return logging.makeLogRecord(
+        dict(
+            levelno=logging.WARNING,
+            levelname='WARNING',
+            msg=f'{config_filename}: {option_name} is deprecated and support will be removed from a future release. Use commands: instead.',
+        )
+    )
+
+
+def normalize_commands(config_filename, config):
+    '''
+    Given a configuration filename and a configuration dict, transform any "before_*"- and
+    "after_*"-style command hooks into "commands:".
+    '''
+    logs = []
+
+    # Normalize "before_actions" and "after_actions".
+    for preposition in ('before', 'after'):
+        option_name = f'{preposition}_actions'
+        commands = config.pop(option_name, None)
+
+        if commands:
+            logs.append(make_command_hook_deprecation_log(config_filename, option_name))
+            config.setdefault('commands', []).append(
+                {
+                    preposition: 'repository',
+                    'run': commands,
+                }
+            )
+
+    # Normalize "before_backup", "before_prune", "after_backup", "after_prune", etc.
+    for action_name in ('create', 'prune', 'compact', 'check', 'extract'):
+        for preposition in ('before', 'after'):
+            option_name = f'{preposition}_{"backup" if action_name == "create" else action_name}'
+            commands = config.pop(option_name, None)
+
+            if not commands:
+                continue
+
+            logs.append(make_command_hook_deprecation_log(config_filename, option_name))
+            config.setdefault('commands', []).append(
+                {
+                    preposition: 'action',
+                    'when': [action_name],
+                    'run': commands,
+                }
+            )
+
+    # Normalize "on_error".
+    commands = config.pop('on_error', None)
+
+    if commands:
+        logs.append(make_command_hook_deprecation_log(config_filename, 'on_error'))
+        config.setdefault('commands', []).append(
+            {
+                'after': 'error',
+                'when': ['create', 'prune', 'compact', 'check'],
+                'run': commands,
+            }
+        )
+
+    # Normalize "before_everything" and "after_everything".
+    for preposition in ('before', 'after'):
+        option_name = f'{preposition}_everything'
+        commands = config.pop(option_name, None)
+
+        if commands:
+            logs.append(make_command_hook_deprecation_log(config_filename, option_name))
+            config.setdefault('commands', []).append(
+                {
+                    preposition: 'everything',
+                    'when': ['create'],
+                    'run': commands,
+                }
+            )
+
+    return logs
+
+
 def normalize(config_filename, config):
     '''
     Given a configuration filename and a configuration dict of its loaded contents, apply particular
@@ -67,6 +151,7 @@ def normalize(config_filename, config):
     Raise ValueError if the configuration cannot be normalized.
     '''
     logs = normalize_sections(config_filename, config)
+    logs += normalize_commands(config_filename, config)
 
     if config.get('borgmatic_source_directory'):
         logs.append(
@@ -241,7 +326,11 @@ def normalize(config_filename, config):
         config['repositories'] = []
 
         for repository_dict in repositories:
-            repository_path = repository_dict['path']
+            repository_path = repository_dict.get('path')
+
+            if repository_path is None:
+                continue
+
             if '~' in repository_path:
                 logs.append(
                     logging.makeLogRecord(
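
The `normalize_commands()` transformation above can be illustrated with a stripped-down sketch covering just the `before_backup`/`after_backup` case (`normalize_before_after_hooks` is a hypothetical name for this example, not borgmatic's API):

```python
def normalize_before_after_hooks(config):
    '''
    Simplified sketch: move the deprecated "before_backup" and "after_backup"
    options into the newer "commands:" list, mirroring normalize_commands().
    '''
    for preposition in ('before', 'after'):
        commands = config.pop(f'{preposition}_backup', None)

        if commands:
            config.setdefault('commands', []).append(
                {preposition: 'action', 'when': ['create'], 'run': commands}
            )

    return config
```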

+ 8 - 0
borgmatic/config/override.py

@@ -1,7 +1,10 @@
 import io
+import logging
 
 import ruamel.yaml
 
+logger = logging.getLogger(__name__)
+
 
 def set_values(config, keys, value):
     '''
@@ -134,6 +137,11 @@ def apply_overrides(config, schema, raw_overrides):
     '''
     overrides = parse_overrides(raw_overrides, schema)
 
+    if overrides:
+        logger.warning(
+            "The --override flag is deprecated and will be removed from a future release. Instead, use a command-line flag corresponding to the configuration option you'd like to set."
+        )
+
     for keys, value in overrides:
         set_values(config, keys, value)
         set_values(config, strip_section_names(keys), value)

+ 1 - 1
borgmatic/config/paths.py

@@ -134,7 +134,7 @@ class Runtime_directory:
         '''
         return self.runtime_path
 
-    def __exit__(self, exception, value, traceback):
+    def __exit__(self, exception_type, exception, traceback):
         '''
         Delete any temporary directory that was created as part of initialization.
         '''
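
The fix above brings `__exit__` in line with Python's context manager protocol, which always passes three arguments: the exception type, the exception instance, and the traceback. A minimal sketch with the corrected signature (`RuntimeDirectory` is a hypothetical stand-in for borgmatic's `Runtime_directory`):

```python
import os
import shutil
import tempfile


class RuntimeDirectory:
    '''
    Create a temporary runtime directory on entry and delete it on exit,
    using the conventional __exit__(exception_type, exception, traceback)
    signature.
    '''

    def __enter__(self):
        self.runtime_path = tempfile.mkdtemp()
        return self.runtime_path

    def __exit__(self, exception_type, exception, traceback):
        shutil.rmtree(self.runtime_path, ignore_errors=True)
```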

+ 72 - 0
borgmatic/config/schema.py

@@ -0,0 +1,72 @@
+import decimal
+import itertools
+
+
+def get_properties(schema):
+    '''
+    Given a schema dict, return its properties. But if it's got sub-schemas with multiple different
+    potential properties, return their merged properties instead (interleaved so the first
+    properties of each sub-schema come first). The idea is that the user should see all possible
+    options even if they're not all possible together.
+    '''
+    if 'oneOf' in schema:
+        return dict(
+            item
+            for item in itertools.chain(
+                *itertools.zip_longest(
+                    *[sub_schema['properties'].items() for sub_schema in schema['oneOf']]
+                )
+            )
+            if item is not None
+        )
+
+    return schema.get('properties', {})
+
+
+SCHEMA_TYPE_TO_PYTHON_TYPE = {
+    'array': list,
+    'boolean': bool,
+    'integer': int,
+    'number': decimal.Decimal,
+    'object': dict,
+    'string': str,
+}
+
+
+def parse_type(schema_type, **overrides):
+    '''
+    Given a schema type as a string, return the corresponding Python type.
+
+    If any overrides are given in the form of a schema type string to a Python type, then override
+    the default type mapping with them.
+
+    Raise ValueError if the schema type is unknown.
+    '''
+    try:
+        return dict(
+            SCHEMA_TYPE_TO_PYTHON_TYPE,
+            **overrides,
+        )[schema_type]
+    except KeyError:
+        raise ValueError(f'Unknown type in configuration schema: {schema_type}')
+
+
+def compare_types(schema_type, target_types, match=any):
+    '''
+    Given a schema type as a string or a list of strings (representing multiple types) and a set of
+    target type strings, return whether the schema type matches the target types.
+
+    If the schema type is a list of strings, use the given match function (such as any or all) to
+    compare elements. For instance, if match is given as all, then every element of the schema_type
+    list must be in the target types.
+    '''
+    if isinstance(schema_type, list):
+        if match(element_schema_type in target_types for element_schema_type in schema_type):
+            return True
+
+        return False
+
+    if schema_type in target_types:
+        return True
+
+    return False
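
`parse_type()` and `compare_types()` are small and pure, so they can be restated standalone to show the intended behavior (this is a sketch mirroring the diff above, not an import of `borgmatic.config.schema`):

```python
import decimal

SCHEMA_TYPE_TO_PYTHON_TYPE = {
    'array': list,
    'boolean': bool,
    'integer': int,
    'number': decimal.Decimal,
    'object': dict,
    'string': str,
}


def parse_type(schema_type, **overrides):
    # Map a schema type string to a Python type, applying any overrides.
    try:
        return dict(SCHEMA_TYPE_TO_PYTHON_TYPE, **overrides)[schema_type]
    except KeyError:
        raise ValueError(f'Unknown type in configuration schema: {schema_type}')


def compare_types(schema_type, target_types, match=any):
    # With match=any (the default), a list schema type matches if any of its
    # elements is a target type; with match=all, every element must match.
    if isinstance(schema_type, list):
        return match(element in target_types for element in schema_type)

    return schema_type in target_types
```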

The file diff has been suppressed because it is too large
+ 457 - 91
borgmatic/config/schema.yaml


+ 33 - 8
borgmatic/config/validate.py

@@ -4,7 +4,7 @@ import os
 import jsonschema
 import ruamel.yaml
 
-import borgmatic.config
+import borgmatic.config.arguments
 from borgmatic.config import constants, environment, load, normalize, override
 
 
@@ -21,6 +21,18 @@ def schema_filename():
         return schema_path
 
 
+def load_schema(schema_path):  # pragma: no cover
+    '''
+    Given a schema filename path, load the schema and return it as a dict.
+
+    Raise Validation_error if the schema could not be parsed.
+    '''
+    try:
+        return load.load_configuration(schema_path)
+    except (ruamel.yaml.error.YAMLError, RecursionError) as error:
+        raise Validation_error(schema_path, (str(error),))
+
+
 def format_json_error_path_element(path_element):
     '''
     Given a path element into a JSON data structure, format it for display as a string.
@@ -84,12 +96,17 @@ def apply_logical_validation(config_filename, parsed_configuration):
             )
 
 
-def parse_configuration(config_filename, schema_filename, overrides=None, resolve_env=True):
+def parse_configuration(
+    config_filename, schema_filename, arguments, overrides=None, resolve_env=True
+):
     '''
     Given the path to a config filename in YAML format, the path to a schema filename in a YAML
-    rendition of JSON Schema format, a sequence of configuration file override strings in the form
-    of "option.suboption=value", return the parsed configuration as a data structure of nested dicts
-    and lists corresponding to the schema. Example return value:
+    rendition of JSON Schema format, arguments as dict from action name to argparse.Namespace, a
+    sequence of configuration file override strings in the form of "option.suboption=value", and
+    whether to resolve environment variables, return the parsed configuration as a data structure of
+    nested dicts and lists corresponding to the schema.
+
+    Example return value:
 
         {
             'source_directories': ['/home', '/etc'],
@@ -112,6 +129,7 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
     except (ruamel.yaml.error.YAMLError, RecursionError) as error:
         raise Validation_error(config_filename, (str(error),))
 
+    borgmatic.config.arguments.apply_arguments_to_config(config, schema, arguments)
     override.apply_overrides(config, schema, overrides)
     constants.apply_constants(config, config.get('constants') if config else {})
 
@@ -124,6 +142,7 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
         validator = jsonschema.Draft7Validator(schema)
     except AttributeError:  # pragma: no cover
         validator = jsonschema.Draft4Validator(schema)
+
     validation_errors = tuple(validator.iter_errors(config))
 
     if validation_errors:
@@ -136,16 +155,22 @@ def parse_configuration(config_filename, schema_filename, overrides=None, resolv
     return config, config_paths, logs
 
 
-def normalize_repository_path(repository):
+def normalize_repository_path(repository, base=None):
     '''
     Given a repository path, return the absolute path of it (for local repositories).
+    If a base path is given, resolve relative paths against it (e.g. the configured working directory).
     '''
     # A colon in the repository could mean that it's either a file:// URL or a remote repository.
     # If it's a remote repository, we don't want to normalize it. If it's a file:// URL, we do.
     if ':' not in repository:
-        return os.path.abspath(repository)
+        return (
+            os.path.abspath(os.path.join(base, repository)) if base else os.path.abspath(repository)
+        )
     elif repository.startswith('file://'):
-        return os.path.abspath(repository.partition('file://')[-1])
+        local_path = repository.partition('file://')[-1]
+        return (
+            os.path.abspath(os.path.join(base, local_path)) if base else os.path.abspath(local_path)
+        )
     else:
         return repository
 

+ 56 - 48
borgmatic/execute.py

@@ -1,7 +1,6 @@
 import collections
 import enum
 import logging
-import os
 import select
 import subprocess
 import textwrap
@@ -243,6 +242,9 @@ def mask_command_secrets(full_command):
 MAX_LOGGED_COMMAND_LENGTH = 1000
 
 
+PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG = ('BORG_', 'PG', 'MARIADB_', 'MYSQL_')
+
+
 def log_command(full_command, input_file=None, output_file=None, environment=None):
     '''
     Log the given command (a sequence of command/argument strings), along with its input/output file
@@ -251,14 +253,21 @@ def log_command(full_command, input_file=None, output_file=None, environment=Non
     logger.debug(
         textwrap.shorten(
             ' '.join(
-                tuple(f'{key}=***' for key in (environment or {}).keys())
+                tuple(
+                    f'{key}=***'
+                    for key in (environment or {}).keys()
+                    if any(
+                        key.startswith(prefix)
+                        for prefix in PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG
+                    )
+                )
                 + mask_command_secrets(full_command)
             ),
             width=MAX_LOGGED_COMMAND_LENGTH,
             placeholder=' ...',
         )
-        + (f" < {getattr(input_file, 'name', '')}" if input_file else '')
-        + (f" > {getattr(output_file, 'name', '')}" if output_file else '')
+        + (f" < {getattr(input_file, 'name', input_file)}" if input_file else '')
+        + (f" > {getattr(output_file, 'name', output_file)}" if output_file else '')
     )
 
 
@@ -274,7 +283,7 @@ def execute_command(
     output_file=None,
     input_file=None,
     shell=False,
-    extra_environment=None,
+    environment=None,
     working_directory=None,
     borg_local_path=None,
     borg_exit_codes=None,
@@ -284,18 +293,17 @@ def execute_command(
     Execute the given command (a sequence of command/argument strings) and log its output at the
     given log level. If an open output file object is given, then write stdout to the file and only
     log stderr. If an open input file object is given, then read stdin from the file. If shell is
-    True, execute the command within a shell. If an extra environment dict is given, then use it to
-    augment the current environment, and pass the result into the command. If a working directory is
-    given, use that as the present working directory when running the command. If a Borg local path
-    is given, and the command matches it (regardless of arguments), treat exit code 1 as a warning
-    instead of an error. But if Borg exit codes are given as a sequence of exit code configuration
-    dicts, then use that configuration to decide what's an error and what's a warning. If run to
-    completion is False, then return the process for the command without executing it to completion.
+    True, execute the command within a shell. If an environment variables dict is given, then pass
+    it into the command. If a working directory is given, use that as the present working directory
+    when running the command. If a Borg local path is given, and the command matches it (regardless
+    of arguments), treat exit code 1 as a warning instead of an error. But if Borg exit codes are
+    given as a sequence of exit code configuration dicts, then use that configuration to decide
+    what's an error and what's a warning. If run to completion is False, then return the process for
+    the command without executing it to completion.
 
     Raise subprocesses.CalledProcessError if an error occurs while running the command.
     '''
-    log_command(full_command, input_file, output_file, extra_environment)
-    environment = {**os.environ, **extra_environment} if extra_environment else None
+    log_command(full_command, input_file, output_file, environment)
     do_not_capture = bool(output_file is DO_NOT_CAPTURE)
     command = ' '.join(full_command) if shell else full_command
 
@@ -307,8 +315,8 @@ def execute_command(
         shell=shell,
         env=environment,
         cwd=working_directory,
-        # Necessary for the passcommand credential hook to work.
-        close_fds=not bool((extra_environment or {}).get('BORG_PASSPHRASE_FD')),
+        # Necessary for passing credentials via anonymous pipe.
+        close_fds=False,
     )
     if not run_to_completion:
         return process
@@ -325,39 +333,40 @@ def execute_command(
 
 def execute_command_and_capture_output(
     full_command,
+    input_file=None,
     capture_stderr=False,
     shell=False,
-    extra_environment=None,
+    environment=None,
     working_directory=None,
     borg_local_path=None,
     borg_exit_codes=None,
 ):
     '''
     Execute the given command (a sequence of command/argument strings), capturing and returning its
-    output (stdout). If capture stderr is True, then capture and return stderr in addition to
-    stdout. If shell is True, execute the command within a shell. If an extra environment dict is
-    given, then use it to augment the current environment, and pass the result into the command. If
-    a working directory is given, use that as the present working directory when running the
-    command. If a Borg local path is given, and the command matches it (regardless of arguments),
-    treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
-    sequence of exit code configuration dicts, then use that configuration to decide what's an error
-    and what's a warning.
+    output (stdout). If an input file descriptor is given, then pipe it to the command's stdin. If
+    capture stderr is True, then capture and return stderr in addition to stdout. If shell is True,
+    execute the command within a shell. If an environment variables dict is given, then pass it into
+    the command. If a working directory is given, use that as the present working directory when
+    running the command. If a Borg local path is given, and the command matches it (regardless of
+    arguments), treat exit code 1 as a warning instead of an error. But if Borg exit codes are given
+    as a sequence of exit code configuration dicts, then use that configuration to decide what's an
+    error and what's a warning.
 
     Raise subprocesses.CalledProcessError if an error occurs while running the command.
     '''
-    log_command(full_command, environment=extra_environment)
-    environment = {**os.environ, **extra_environment} if extra_environment else None
+    log_command(full_command, input_file, environment=environment)
     command = ' '.join(full_command) if shell else full_command
 
     try:
         output = subprocess.check_output(
             command,
+            stdin=input_file,
             stderr=subprocess.STDOUT if capture_stderr else None,
             shell=shell,
             env=environment,
             cwd=working_directory,
-            # Necessary for the passcommand credential hook to work.
-            close_fds=not bool((extra_environment or {}).get('BORG_PASSPHRASE_FD')),
+            # Necessary for passing credentials via anonymous pipe.
+            close_fds=False,
         )
     except subprocess.CalledProcessError as error:
         if (
@@ -377,7 +386,7 @@ def execute_command_with_processes(
     output_file=None,
     input_file=None,
     shell=False,
-    extra_environment=None,
+    environment=None,
     working_directory=None,
     borg_local_path=None,
     borg_exit_codes=None,
@@ -391,19 +400,17 @@ def execute_command_with_processes(
     If an open output file object is given, then write stdout to the file and only log stderr. But
     if output log level is None, instead suppress logging and return the captured output for (only)
     the given command. If an open input file object is given, then read stdin from the file. If
-    shell is True, execute the command within a shell. If an extra environment dict is given, then
-    use it to augment the current environment, and pass the result into the command. If a working
-    directory is given, use that as the present working directory when running the command. If a
-    Borg local path is given, then for any matching command or process (regardless of arguments),
-    treat exit code 1 as a warning instead of an error. But if Borg exit codes are given as a
-    sequence of exit code configuration dicts, then use that configuration to decide what's an error
-    and what's a warning.
+    shell is True, execute the command within a shell. If an environment variables dict is given,
+    then pass it into the command. If a working directory is given, use that as the present working
+    directory when running the command. If a Borg local path is given, then for any matching command
+    or process (regardless of arguments), treat exit code 1 as a warning instead of an error. But if
+    Borg exit codes are given as a sequence of exit code configuration dicts, then use that
+    configuration to decide what's an error and what's a warning.
 
     Raise subprocesses.CalledProcessError if an error occurs while running the command or in the
     upstream process.
     '''
-    log_command(full_command, input_file, output_file, extra_environment)
-    environment = {**os.environ, **extra_environment} if extra_environment else None
+    log_command(full_command, input_file, output_file, environment)
     do_not_capture = bool(output_file is DO_NOT_CAPTURE)
     command = ' '.join(full_command) if shell else full_command
 
@@ -418,8 +425,8 @@ def execute_command_with_processes(
             shell=shell,
             env=environment,
             cwd=working_directory,
-            # Necessary for the passcommand credential hook to work.
-            close_fds=not bool((extra_environment or {}).get('BORG_PASSPHRASE_FD')),
+            # Necessary for passing credentials via anonymous pipe.
+            close_fds=False,
         )
     except (subprocess.CalledProcessError, OSError):
         # Something has gone wrong. So vent each process' output buffer to prevent it from hanging.
@@ -430,13 +437,14 @@ def execute_command_with_processes(
                 process.kill()
         raise
 
-    captured_outputs = log_outputs(
-        tuple(processes) + (command_process,),
-        (input_file, output_file),
-        output_log_level,
-        borg_local_path,
-        borg_exit_codes,
-    )
+    with borgmatic.logger.Log_prefix(None):  # Log command output without any prefix.
+        captured_outputs = log_outputs(
+            tuple(processes) + (command_process,),
+            (input_file, output_file),
+            output_log_level,
+            borg_local_path,
+            borg_exit_codes,
+        )
 
     if output_log_level is None:
         return captured_outputs.get(command_process)
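
The `log_command()` change above stops logging every passed environment variable and instead mentions only Borg- and database-related ones, always with masked values. A standalone sketch of that filter (`masked_environment_summary` is a hypothetical name for this example):

```python
PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG = ('BORG_', 'PG', 'MARIADB_', 'MYSQL_')


def masked_environment_summary(environment):
    # Mention only environment variables matching the known prefixes, and
    # mask their values so secrets never reach the logs.
    return tuple(
        f'{key}=***'
        for key in (environment or {})
        if any(key.startswith(prefix) for prefix in PREFIXES_OF_ENVIRONMENT_VARIABLES_TO_LOG)
    )
```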

+ 175 - 41
borgmatic/hooks/command.py

@@ -2,9 +2,11 @@ import logging
 import os
 import re
 import shlex
+import subprocess
 import sys
 
 import borgmatic.execute
+import borgmatic.logger
 
 logger = logging.getLogger(__name__)
 
@@ -30,66 +32,198 @@ def interpolate_context(hook_description, command, context):
 
 def make_environment(current_environment, sys_module=sys):
     '''
-    Given the existing system environment as a map from environment variable name to value, return
-    (in the same form) any extra environment variables that should be used when running command
-    hooks.
+    Given the existing system environment as a map from environment variable name to value, return a
+    copy of it, augmented with any extra environment variables that should be used when running
+    command hooks.
     '''
+    environment = dict(current_environment)
+
     # Detect whether we're running within a PyInstaller bundle. If so, set or clear LD_LIBRARY_PATH
     # based on the value of LD_LIBRARY_PATH_ORIG. This prevents library version information errors.
     if getattr(sys_module, 'frozen', False) and hasattr(sys_module, '_MEIPASS'):
-        return {'LD_LIBRARY_PATH': current_environment.get('LD_LIBRARY_PATH_ORIG', '')}
+        environment['LD_LIBRARY_PATH'] = environment.get('LD_LIBRARY_PATH_ORIG', '')
 
-    return {}
+    return environment
 
 
-def execute_hook(commands, umask, config_filename, description, dry_run, **context):
+def filter_hooks(command_hooks, before=None, after=None, hook_name=None, action_names=None):
+    '''
+    Given a sequence of command hook dicts from configuration and one or more filters (before name,
+    after name, calling hook name, or a sequence of action names), filter down the command hooks to
+    just the ones that match the given filters.
     '''
-    Given a list of hook commands to execute, a umask to execute with (or None), a config filename,
-    a hook description, and whether this is a dry run, run the given commands. Or, don't run them
-    if this is a dry run.
+    return tuple(
+        hook_config
+        for hook_config in command_hooks or ()
+        for config_action_names in (hook_config.get('when'),)
+        if before is None or hook_config.get('before') == before
+        if after is None or hook_config.get('after') == after
+        if action_names is None
+        or config_action_names is None
+        or set(config_action_names or ()).intersection(set(action_names))
+    )
+
+
+def execute_hooks(command_hooks, umask, working_directory, dry_run, **context):
+    '''
+    Given a sequence of command hook dicts from configuration, a umask to execute with (or None), a
+    working directory to execute with, and whether this is a dry run, run the commands for each
+    hook. Or don't run them if this is a dry run.
 
     The context contains optional values interpolated by name into the hook commands.
 
-    Raise ValueError if the umask cannot be parsed.
+    Raise ValueError if the umask cannot be parsed or a hook is invalid.
     Raise subprocesses.CalledProcessError if an error occurs in a hook.
     '''
-    if not commands:
-        logger.debug(f'No commands to run for {description} hook')
-        return
+    borgmatic.logger.add_custom_log_levels()
 
     dry_run_label = ' (dry run; not actually running hooks)' if dry_run else ''
 
-    context['configuration_filename'] = config_filename
-    commands = [interpolate_context(description, command, context) for command in commands]
+    for hook_config in command_hooks:
+        commands = hook_config.get('run')
 
-    if len(commands) == 1:
-        logger.info(f'Running command for {description} hook{dry_run_label}')
-    else:
-        logger.info(
-            f'Running {len(commands)} commands for {description} hook{dry_run_label}',
-        )
+        if 'before' in hook_config:
+            description = f'before {hook_config.get("before")}'
+        elif 'after' in hook_config:
+            description = f'after {hook_config.get("after")}'
+        else:
+            raise ValueError(f'Invalid hook configuration: {hook_config}')
+
+        if not commands:
+            logger.debug(f'No commands to run for {description} hook')
+            continue
+
+        commands = [interpolate_context(description, command, context) for command in commands]
 
-    if umask:
-        parsed_umask = int(str(umask), 8)
-        logger.debug(f'Set hook umask to {oct(parsed_umask)}')
-        original_umask = os.umask(parsed_umask)
-    else:
-        original_umask = None
-
-    try:
-        for command in commands:
-            if dry_run:
-                continue
-
-            borgmatic.execute.execute_command(
-                [command],
-                output_log_level=(logging.ERROR if description == 'on-error' else logging.WARNING),
-                shell=True,
-                extra_environment=make_environment(os.environ),
+        if len(commands) == 1:
+            logger.info(f'Running {description} command hook{dry_run_label}')
+        else:
+            logger.info(
+                f'Running {len(commands)} commands for {description} hook{dry_run_label}',
             )
-    finally:
-        if original_umask:
-            os.umask(original_umask)
+
+        if umask:
+            parsed_umask = int(str(umask), 8)
+            logger.debug(f'Setting hook umask to {oct(parsed_umask)}')
+            original_umask = os.umask(parsed_umask)
+        else:
+            original_umask = None
+
+        try:
+            for command in commands:
+                if dry_run:
+                    continue
+
+                borgmatic.execute.execute_command(
+                    [command],
+                    output_log_level=(
+                        logging.ERROR if hook_config.get('after') == 'error' else logging.ANSWER
+                    ),
+                    shell=True,
+                    environment=make_environment(os.environ),
+                    working_directory=working_directory,
+                )
+        finally:
+            if original_umask:
+                os.umask(original_umask)
+
+
+class Before_after_hooks:
+    '''
+    A Python context manager for executing command hooks both before and after the wrapped code.
+
+    Example use as a context manager:
+
+       with borgmatic.hooks.command.Before_after_hooks(
+           command_hooks=config.get('commands'),
+           before_after='do_stuff',
+           umask=config.get('umask'),
+           dry_run=dry_run,
+           hook_name='myhook',
+       ):
+            do()
+            some()
+            stuff()
+
+    With that context manager in place, "before" command hooks execute before the wrapped code runs,
+    and "after" command hooks execute after the wrapped code completes.
+    '''
+
+    def __init__(
+        self,
+        command_hooks,
+        before_after,
+        umask,
+        working_directory,
+        dry_run,
+        hook_name=None,
+        action_names=None,
+        **context,
+    ):
+        '''
+        Given a sequence of command hook configuration dicts, the before/after name, a umask to run
+        commands with, a working directory to run commands with, a dry run flag, the name of the
+        calling hook, a sequence of action names, and any context for the executed commands, save
+        those data points for use below.
+        '''
+        self.command_hooks = command_hooks
+        self.before_after = before_after
+        self.umask = umask
+        self.working_directory = working_directory
+        self.dry_run = dry_run
+        self.hook_name = hook_name
+        self.action_names = action_names
+        self.context = context
+
+    def __enter__(self):
+        '''
+        Run the configured "before" command hooks that match the initialized data points.
+        '''
+        try:
+            execute_hooks(
+                borgmatic.hooks.command.filter_hooks(
+                    self.command_hooks,
+                    before=self.before_after,
+                    hook_name=self.hook_name,
+                    action_names=self.action_names,
+                ),
+                self.umask,
+                self.working_directory,
+                self.dry_run,
+                **self.context,
+            )
+        except (OSError, subprocess.CalledProcessError) as error:
+            if considered_soft_failure(error):
+                return
+
+            # Trigger the after hook manually, since raising here will prevent it from being run
+            # otherwise.
+            self.__exit__(None, None, None)
+
+            raise ValueError(f'Error running before {self.before_after} hook: {error}')
+
+    def __exit__(self, exception_type, exception, traceback):
+        '''
+        Run the configured "after" command hooks that match the initialized data points.
+        '''
+        try:
+            execute_hooks(
+                borgmatic.hooks.command.filter_hooks(
+                    self.command_hooks,
+                    after=self.before_after,
+                    hook_name=self.hook_name,
+                    action_names=self.action_names,
+                ),
+                self.umask,
+                self.working_directory,
+                self.dry_run,
+                **self.context,
+            )
+        except (OSError, subprocess.CalledProcessError) as error:
+            if considered_soft_failure(error):
+                return
+
+            raise ValueError(f'Error running after {self.before_after} hook: {error}')
 
 
 def considered_soft_failure(error):
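
`filter_hooks()` is a pure function, so a simplified standalone sketch (omitting the `hook_name` parameter) shows how the before/after and action-name filters combine:

```python
def filter_hooks(command_hooks, before=None, after=None, action_names=None):
    # Select the hooks whose "before"/"after" value matches and whose "when"
    # action list (if any) intersects the given action names.
    return tuple(
        hook_config
        for hook_config in command_hooks or ()
        if before is None or hook_config.get('before') == before
        if after is None or hook_config.get('after') == after
        if action_names is None
        or hook_config.get('when') is None
        or set(hook_config.get('when') or ()).intersection(set(action_names))
    )
```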

+ 43 - 0
borgmatic/hooks/credential/container.py

@@ -0,0 +1,43 @@
+import logging
+import os
+import re
+
+logger = logging.getLogger(__name__)
+
+
+SECRET_NAME_PATTERN = re.compile(r'^\w+$')
+DEFAULT_SECRETS_DIRECTORY = '/run/secrets'
+
+
+def load_credential(hook_config, config, credential_parameters):
+    '''
+    Given the hook configuration dict, the configuration dict, and a credential parameters tuple
+    containing a secret name to load, read the secret from the corresponding container secrets file
+    and return it.
+
+    Raise ValueError if the credential parameters tuple does not contain exactly one element, the
+    secret name is invalid, or the secret file cannot be read.
+    '''
+    try:
+        (secret_name,) = credential_parameters
+    except ValueError:
+        name = ' '.join(credential_parameters)
+
+        raise ValueError(f'Cannot load invalid secret name: "{name}"')
+
+    if not SECRET_NAME_PATTERN.match(secret_name):
+        raise ValueError(f'Cannot load invalid secret name: "{secret_name}"')
+
+    try:
+        with open(
+            os.path.join(
+                config.get('working_directory', ''),
+                (hook_config or {}).get('secrets_directory', DEFAULT_SECRETS_DIRECTORY),
+                secret_name,
+            )
+        ) as secret_file:
+            return secret_file.read().rstrip(os.linesep)
+    except (FileNotFoundError, OSError) as error:
+        logger.warning(error)
+
+        raise ValueError(f'Cannot load secret "{secret_name}" from file: {error.filename}')

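The new container hook amounts to validating the secret name and then reading it from the secrets directory. A minimal standalone sketch of that flow, with no borgmatic dependencies (the `read_container_secret` helper name is ours, not borgmatic's):

```python
import os
import re

SECRET_NAME_PATTERN = re.compile(r'^\w+$')


def read_container_secret(secrets_directory, secret_name):
    # Reject names containing path separators or other unexpected characters,
    # so a crafted name can't escape the secrets directory.
    if not SECRET_NAME_PATTERN.match(secret_name):
        raise ValueError(f'Invalid secret name: "{secret_name}"')

    with open(os.path.join(secrets_directory, secret_name)) as secret_file:
        # Strip the trailing newline that secret files commonly end with.
        return secret_file.read().rstrip(os.linesep)
```

The `^\w+$` check is what stops a name like `../etc/passwd` from traversing out of the secrets directory.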
+ 32 - 0
borgmatic/hooks/credential/file.py

@@ -0,0 +1,32 @@
+import logging
+import os
+
+logger = logging.getLogger(__name__)
+
+
+def load_credential(hook_config, config, credential_parameters):
+    '''
+    Given the hook configuration dict, the configuration dict, and a credential parameters tuple
+    containing a credential path to load, load the credential from file and return it.
+
+    Raise ValueError if the credential parameters tuple does not contain exactly one element or the
+    credential file cannot be read.
+    '''
+    try:
+        (credential_path,) = credential_parameters
+    except ValueError:
+        name = ' '.join(credential_parameters)
+
+        raise ValueError(f'Cannot load invalid credential: "{name}"')
+
+    expanded_credential_path = os.path.expanduser(credential_path)
+
+    try:
+        with open(
+            os.path.join(config.get('working_directory', ''), expanded_credential_path)
+        ) as credential_file:
+            return credential_file.read().rstrip(os.linesep)
+    except (FileNotFoundError, OSError) as error:
+        logger.warning(error)
+
+        raise ValueError(f'Cannot load credential file: {error.filename}')

+ 45 - 0
borgmatic/hooks/credential/keepassxc.py

@@ -0,0 +1,45 @@
+import logging
+import os
+import shlex
+
+import borgmatic.execute
+
+logger = logging.getLogger(__name__)
+
+
+def load_credential(hook_config, config, credential_parameters):
+    '''
+    Given the hook configuration dict, the configuration dict, and a credential parameters tuple
+    containing a KeePassXC database path and an attribute name to load, run keepassxc-cli to fetch
+    the corresponding KeePassXC credential and return it.
+
+    Raise ValueError if keepassxc-cli can't retrieve the credential.
+    '''
+    try:
+        (database_path, attribute_name) = credential_parameters
+    except ValueError:
+        raise ValueError(f'Invalid KeePassXC credential parameters: {credential_parameters}')
+
+    expanded_database_path = os.path.expanduser(database_path)
+
+    if not os.path.exists(expanded_database_path):
+        raise ValueError(f'KeePassXC database path does not exist: {database_path}')
+
+    # Build the keepassxc-cli command.
+    command = (
+        tuple(shlex.split((hook_config or {}).get('keepassxc_cli_command', 'keepassxc-cli')))
+        + ('show', '--show-protected', '--attributes', 'Password')
+        + (
+            ('--key-file', hook_config['key_file'])
+            if hook_config and hook_config.get('key_file')
+            else ()
+        )
+        + (
+            ('--yubikey', hook_config['yubikey'])
+            if hook_config and hook_config.get('yubikey')
+            else ()
+        )
+        + (expanded_database_path, attribute_name)  # Ensure database and entry are last.
+    )
+
+    return borgmatic.execute.execute_command_and_capture_output(command).rstrip(os.linesep)

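The hook builds the keepassxc-cli argument list purely by tuple concatenation, where each optional flag contributes either a small tuple or `()` depending on the configuration. A condensed sketch of that pattern (the helper name is hypothetical):

```python
import shlex


def build_show_command(hook_config, database_path, attribute_name):
    # Mirror the tuple-concatenation style: optional flags appear only when
    # the corresponding hook configuration option is set.
    hook_config = hook_config or {}

    return (
        tuple(shlex.split(hook_config.get('keepassxc_cli_command', 'keepassxc-cli')))
        + ('show', '--show-protected', '--attributes', 'Password')
        + (('--key-file', hook_config['key_file']) if hook_config.get('key_file') else ())
        + (database_path, attribute_name)  # Positional arguments go last.
    )
```

Because every branch yields a tuple, the command stays immutable and composes cleanly, and `shlex.split` lets the configured command itself carry extra arguments.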
+ 124 - 0
borgmatic/hooks/credential/parse.py

@@ -0,0 +1,124 @@
+import functools
+import re
+import shlex
+
+import borgmatic.hooks.dispatch
+
+IS_A_HOOK = False
+
+
+class Hash_adapter:
+    '''
+    A Hash_adapter instance wraps an unhashable object and pretends it's hashable. This is intended
+    for passing to a @functools.cache-decorated function to prevent it from complaining that an
+    argument is unhashable. It should only be used for arguments that you don't want to actually
+    impact the cache hashing, because Hash_adapter doesn't actually hash the object's contents.
+
+    Example usage:
+
+        @functools.cache
+        def func(a, b):
+            print(a, b.actual_value)
+            return a
+
+        func(5, Hash_adapter({1: 2, 3: 4}))  # Calls func(), prints, and returns.
+        func(5, Hash_adapter({1: 2, 3: 4}))  # Hits the cache and just returns the value.
+        func(5, Hash_adapter({5: 6, 7: 8}))  # Also uses cache, since the Hash_adapter is ignored.
+
+    In the above function, the "b" value is one that has been wrapped with Hash_adapter, and
+    therefore "b.actual_value" is necessary to access the original value.
+    '''
+
+    def __init__(self, actual_value):
+        self.actual_value = actual_value
+
+    def __eq__(self, other):
+        return True
+
+    def __hash__(self):
+        return 0
+
+
+UNHASHABLE_TYPES = (dict, list, set)
+
+
+def cache_ignoring_unhashable_arguments(function):
+    '''
+    A function decorator that caches calls to the decorated function but ignores any unhashable
+    arguments when performing cache lookups. This is intended to be a drop-in replacement for
+    functools.cache.
+
+    Example usage:
+
+        @cache_ignoring_unhashable_arguments
+        def func(a, b):
+            print(a, b)
+            return a
+
+        func(5, {1: 2, 3: 4})  # Calls func(), prints, and returns.
+        func(5, {1: 2, 3: 4})  # Hits the cache and just returns the value.
+        func(5, {5: 6, 7: 8})  # Also uses cache, since the unhashable value (the dict) is ignored.
+    '''
+
+    @functools.cache
+    def cached_function(*args, **kwargs):
+        return function(
+            *(arg.actual_value if isinstance(arg, Hash_adapter) else arg for arg in args),
+            **{
+                key: value.actual_value if isinstance(value, Hash_adapter) else value
+                for (key, value) in kwargs.items()
+            },
+        )
+
+    @functools.wraps(function)
+    def wrapper_function(*args, **kwargs):
+        return cached_function(
+            *(Hash_adapter(arg) if isinstance(arg, UNHASHABLE_TYPES) else arg for arg in args),
+            **{
+                key: Hash_adapter(value) if isinstance(value, UNHASHABLE_TYPES) else value
+                for (key, value) in kwargs.items()
+            },
+        )
+
+    wrapper_function.cache_clear = cached_function.cache_clear
+
+    return wrapper_function
+
+
+CREDENTIAL_PATTERN = re.compile(r'\{credential( +(?P<hook_and_parameters>.*))?\}')
+
+
+@cache_ignoring_unhashable_arguments
+def resolve_credential(value, config):
+    '''
+    Given a configuration value containing a string like "{credential hookname credentialname}" and
+    a configuration dict, resolve the credential by calling the relevant hook to get the actual
+    credential value. If the given value does not actually contain a credential tag, then return it
+    unchanged.
+
+    Cache the value (ignoring the config for purposes of caching), so repeated calls to this
+    function don't need to load the credential repeatedly.
+
+    Raise ValueError if the config could not be parsed or the credential could not be loaded.
+    '''
+    if value is None:
+        return value
+
+    matcher = CREDENTIAL_PATTERN.match(value)
+
+    if not matcher:
+        return value
+
+    hook_and_parameters = matcher.group('hook_and_parameters')
+
+    if not hook_and_parameters:
+        raise ValueError(f'Cannot load credential with invalid syntax "{value}"')
+
+    (hook_name, *credential_parameters) = shlex.split(hook_and_parameters)
+
+    if not credential_parameters:
+        raise ValueError(f'Cannot load credential with invalid syntax "{value}"')
+
+    return borgmatic.hooks.dispatch.call_hook(
+        'load_credential', config, hook_name, tuple(credential_parameters)
+    )

+ 43 - 0
borgmatic/hooks/credential/systemd.py

@@ -0,0 +1,43 @@
+import logging
+import os
+import re
+
+logger = logging.getLogger(__name__)
+
+
+CREDENTIAL_NAME_PATTERN = re.compile(r'^[\w.-]+$')
+
+
+def load_credential(hook_config, config, credential_parameters):
+    '''
+    Given the hook configuration dict, the configuration dict, and a credential parameters tuple
+    containing a credential name to load, read the credential from the corresponding systemd
+    credential file and return it.
+
+    Raise ValueError if the systemd CREDENTIALS_DIRECTORY environment variable is not set, the
+    credential name is invalid, or the credential file cannot be read.
+    '''
+    try:
+        (credential_name,) = credential_parameters
+    except ValueError:
+        name = ' '.join(credential_parameters)
+
+        raise ValueError(f'Cannot load invalid credential name: "{name}"')
+
+    credentials_directory = os.environ.get('CREDENTIALS_DIRECTORY')
+
+    if not credentials_directory:
+        raise ValueError(
+            f'Cannot load credential "{credential_name}" because the systemd CREDENTIALS_DIRECTORY environment variable is not set'
+        )
+
+    if not CREDENTIAL_NAME_PATTERN.match(credential_name):
+        raise ValueError(f'Cannot load invalid credential name "{credential_name}"')
+
+    try:
+        with open(os.path.join(credentials_directory, credential_name)) as credential_file:
+            return credential_file.read().rstrip(os.linesep)
+    except (FileNotFoundError, OSError) as error:
+        logger.warning(error)
+
+        raise ValueError(f'Cannot load credential "{credential_name}" from file: {error.filename}')

+ 10 - 2
borgmatic/hooks/data_source/bootstrap.py

@@ -55,9 +55,17 @@ def dump_data_sources(
             manifest_file,
         )
 
-    patterns.extend(borgmatic.borg.pattern.Pattern(config_path) for config_path in config_paths)
+    patterns.extend(
+        borgmatic.borg.pattern.Pattern(
+            config_path, source=borgmatic.borg.pattern.Pattern_source.HOOK
+        )
+        for config_path in config_paths
+    )
     patterns.append(
-        borgmatic.borg.pattern.Pattern(os.path.join(borgmatic_runtime_directory, 'bootstrap'))
+        borgmatic.borg.pattern.Pattern(
+            os.path.join(borgmatic_runtime_directory, 'bootstrap'),
+            source=borgmatic.borg.pattern.Pattern_source.HOOK,
+        )
     )
 
     return []

+ 65 - 7
borgmatic/hooks/data_source/btrfs.py

@@ -48,13 +48,56 @@ def get_subvolume_mount_points(findmnt_command):
 Subvolume = collections.namedtuple('Subvolume', ('path', 'contained_patterns'), defaults=((),))
 
 
+def get_subvolume_property(btrfs_command, subvolume_path, property_name):
+    '''
+    Given a Btrfs command to run, a subvolume path, and a property name, run "btrfs property get"
+    to fetch the value of that property for the subvolume, converting any "true" or "false" value
+    to a boolean. Raise ValueError if the command output cannot be parsed.
+    '''
+    output = borgmatic.execute.execute_command_and_capture_output(
+        tuple(btrfs_command.split(' '))
+        + (
+            'property',
+            'get',
+            '-t',  # Type.
+            'subvol',
+            subvolume_path,
+            property_name,
+        ),
+    )
+
+    try:
+        value = output.strip().split('=')[1]
+    except IndexError:
+        raise ValueError(f'Invalid {btrfs_command} property output')
+
+    return {
+        'true': True,
+        'false': False,
+    }.get(value, value)
+
+
+def omit_read_only_subvolume_mount_points(btrfs_command, subvolume_paths):
+    '''
+    Given a Btrfs command to run and a sequence of Btrfs subvolume mount points, filter them down to
+    just those that are read-write. The idea is that Btrfs can't actually snapshot a read-only
+    subvolume, so we should just ignore them.
+    '''
+    retained_subvolume_paths = []
+
+    for subvolume_path in subvolume_paths:
+        if get_subvolume_property(btrfs_command, subvolume_path, 'ro'):
+            logger.debug(f'Ignoring Btrfs subvolume {subvolume_path} because it is read-only')
+        else:
+            retained_subvolume_paths.append(subvolume_path)
+
+    return tuple(retained_subvolume_paths)
+
+
 def get_subvolumes(btrfs_command, findmnt_command, patterns=None):
     '''
     Given a Btrfs command to run and a sequence of configured patterns, find the intersection
     between the current Btrfs filesystem and subvolume mount points and the paths of any patterns.
     The idea is that these pattern paths represent the requested subvolumes to snapshot.
 
-    If patterns is None, then return all subvolumes, sorted by path.
+    Only include subvolumes that contain at least one root pattern sourced from borgmatic
+    configuration (as opposed to generated elsewhere in borgmatic). But if patterns is None, then
+    return all subvolumes instead, sorted by path.
 
     Return the result as a sequence of matching subvolume mount points.
     '''
@@ -65,7 +108,11 @@ def get_subvolumes(btrfs_command, findmnt_command, patterns=None):
     # backup. Sort the subvolumes from longest to shortest mount points, so longer mount points get
     # a whack at the candidate pattern piñata before their parents do. (Patterns are consumed during
     # this process, so no two subvolumes end up with the same contained patterns.)
-    for mount_point in reversed(get_subvolume_mount_points(findmnt_command)):
+    for mount_point in reversed(
+        omit_read_only_subvolume_mount_points(
+            btrfs_command, get_subvolume_mount_points(findmnt_command)
+        )
+    ):
         subvolumes.extend(
             Subvolume(mount_point, contained_patterns)
             for contained_patterns in (
@@ -73,7 +120,12 @@ def get_subvolumes(btrfs_command, findmnt_command, patterns=None):
                     mount_point, candidate_patterns
                 ),
             )
-            if patterns is None or contained_patterns
+            if patterns is None
+            or any(
+                pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
+                and pattern.source == borgmatic.borg.pattern.Pattern_source.CONFIG
+                for pattern in contained_patterns
+            )
         )
 
     return tuple(sorted(subvolumes, key=lambda subvolume: subvolume.path))
@@ -121,6 +173,7 @@ def make_snapshot_exclude_pattern(subvolume_path):  # pragma: no cover
         ),
         borgmatic.borg.pattern.Pattern_type.NO_RECURSE,
         borgmatic.borg.pattern.Pattern_style.FNMATCH,
+        source=borgmatic.borg.pattern.Pattern_source.HOOK,
     )
 
 
@@ -153,6 +206,7 @@ def make_borg_snapshot_pattern(subvolume_path, pattern):
         pattern.type,
         pattern.style,
         pattern.device,
+        source=borgmatic.borg.pattern.Pattern_source.HOOK,
     )
 
 
@@ -198,7 +252,8 @@ def dump_data_sources(
     dry_run_label = ' (dry run; not actually snapshotting anything)' if dry_run else ''
     logger.info(f'Snapshotting Btrfs subvolumes{dry_run_label}')
 
-    # Based on the configured patterns, determine Btrfs subvolumes to backup.
+    # Based on the configured patterns, determine Btrfs subvolumes to backup. Only consider those
+    # patterns that came from actual user configuration (as opposed to, say, other hooks).
     btrfs_command = hook_config.get('btrfs_command', 'btrfs')
     findmnt_command = hook_config.get('findmnt_command', 'findmnt')
     subvolumes = get_subvolumes(btrfs_command, findmnt_command, patterns)
@@ -299,9 +354,12 @@ def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, d
                 logger.debug(error)
                 return
 
-            # Strip off the subvolume path from the end of the snapshot path and then delete the
-            # resulting directory.
-            shutil.rmtree(snapshot_path.rsplit(subvolume.path, 1)[0])
+            # Remove the snapshot parent directory if it still exists. (It might not exist if the
+            # snapshot was for "/".)
+            snapshot_parent_dir = snapshot_path.rsplit(subvolume.path, 1)[0]
+
+            if os.path.isdir(snapshot_parent_dir):
+                shutil.rmtree(snapshot_parent_dir)
 
 
 def make_data_source_dump_patterns(

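`get_subvolume_property` above parses `btrfs property get` output of the form `ro=false`. The parsing and boolean mapping can be sketched standalone (the function name here is hypothetical):

```python
def parse_property_value(output):
    # "btrfs property get -t subvol /path ro" prints a line like "ro=false".
    try:
        value = output.strip().split('=')[1]
    except IndexError:
        raise ValueError('Invalid btrfs property output')

    # Map the stringly-typed booleans to real ones; pass other values through.
    return {
        'true': True,
        'false': False,
    }.get(value, value)
```

Returning non-boolean values unchanged keeps the helper usable for string-valued properties like `label`, not just `ro`.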
+ 39 - 9
borgmatic/hooks/data_source/lvm.py

@@ -1,5 +1,6 @@
 import collections
 import glob
+import hashlib
 import json
 import logging
 import os
@@ -33,7 +34,9 @@ def get_logical_volumes(lsblk_command, patterns=None):
     between the current LVM logical volume mount points and the paths of any patterns. The idea is
     that these pattern paths represent the requested logical volumes to snapshot.
 
-    If patterns is None, include all logical volume mounts points, not just those in patterns.
+    Only include logical volumes that contain at least one root pattern sourced from borgmatic
+    configuration (as opposed to generated elsewhere in borgmatic). But if patterns is None, include
+    all logical volume mount points instead, not just those in patterns.
 
     Return the result as a sequence of Logical_volume instances.
     '''
@@ -72,7 +75,12 @@ def get_logical_volumes(lsblk_command, patterns=None):
                     device['mountpoint'], candidate_patterns
                 ),
             )
-            if not patterns or contained_patterns
+            if not patterns
+            or any(
+                pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
+                and pattern.source == borgmatic.borg.pattern.Pattern_source.CONFIG
+                for pattern in contained_patterns
+            )
         )
     except KeyError as error:
         raise ValueError(f'Invalid {lsblk_command} output: Missing key "{error}"')
@@ -124,10 +132,14 @@ def mount_snapshot(mount_command, snapshot_device, snapshot_mount_path):  # prag
     )
 
 
-def make_borg_snapshot_pattern(pattern, normalized_runtime_directory):
+MOUNT_POINT_HASH_LENGTH = 10
+
+
+def make_borg_snapshot_pattern(pattern, logical_volume, normalized_runtime_directory):
     '''
-    Given a Borg pattern as a borgmatic.borg.pattern.Pattern instance, return a new Pattern with its
-    path rewritten to be in a snapshot directory based on the given runtime directory.
+    Given a Borg pattern as a borgmatic.borg.pattern.Pattern instance and a Logical_volume
+    containing it, return a new Pattern with its path rewritten to be in a snapshot directory based
+    on both the given runtime directory and the given Logical_volume's mount point.
 
     Move any initial caret in a regular expression pattern path to the beginning, so as not to break
     the regular expression.
@@ -142,6 +154,13 @@ def make_borg_snapshot_pattern(pattern, normalized_runtime_directory):
     rewritten_path = initial_caret + os.path.join(
         normalized_runtime_directory,
         'lvm_snapshots',
+        # Including this hash prevents conflicts between snapshot patterns for different logical
+        # volumes. For instance, without this, snapshotting a logical volume at /var and another at
+        # /var/spool would result in overlapping snapshot patterns and therefore colliding mount
+        # attempts.
+        hashlib.shake_256(logical_volume.mount_point.encode('utf-8')).hexdigest(
+            MOUNT_POINT_HASH_LENGTH
+        ),
         '.',  # Borg 1.4+ "slashdot" hack.
         # Included so that the source directory ends up in the Borg archive at its "original" path.
         pattern.path.lstrip('^').lstrip(os.path.sep),
@@ -152,6 +171,7 @@ def make_borg_snapshot_pattern(pattern, normalized_runtime_directory):
         pattern.type,
         pattern.style,
         pattern.device,
+        source=borgmatic.borg.pattern.Pattern_source.HOOK,
     )
 
 
@@ -180,7 +200,8 @@ def dump_data_sources(
     dry_run_label = ' (dry run; not actually snapshotting anything)' if dry_run else ''
     logger.info(f'Snapshotting LVM logical volumes{dry_run_label}')
 
-    # List logical volumes to get their mount points.
+    # List logical volumes to get their mount points, but only consider those patterns that came
+    # from actual user configuration (as opposed to, say, other hooks).
     lsblk_command = hook_config.get('lsblk_command', 'lsblk')
     requested_logical_volumes = get_logical_volumes(lsblk_command, patterns)
 
@@ -218,6 +239,9 @@ def dump_data_sources(
         snapshot_mount_path = os.path.join(
             normalized_runtime_directory,
             'lvm_snapshots',
+            hashlib.shake_256(logical_volume.mount_point.encode('utf-8')).hexdigest(
+                MOUNT_POINT_HASH_LENGTH
+            ),
             logical_volume.mount_point.lstrip(os.path.sep),
         )
 
@@ -233,7 +257,9 @@ def dump_data_sources(
         )
 
         for pattern in logical_volume.contained_patterns:
-            snapshot_pattern = make_borg_snapshot_pattern(pattern, normalized_runtime_directory)
+            snapshot_pattern = make_borg_snapshot_pattern(
+                pattern, logical_volume, normalized_runtime_directory
+            )
 
             # Attempt to update the pattern in place, since pattern order matters to Borg.
             try:
@@ -337,6 +363,7 @@ def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, d
             os.path.normpath(borgmatic_runtime_directory),
         ),
         'lvm_snapshots',
+        '*',
     )
     logger.debug(f'Looking for snapshots to remove in {snapshots_glob}{dry_run_label}')
     umount_command = hook_config.get('umount_command', 'umount')
@@ -349,7 +376,10 @@ def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, d
             snapshot_mount_path = os.path.join(
                 snapshots_directory, logical_volume.mount_point.lstrip(os.path.sep)
             )
-            if not os.path.isdir(snapshot_mount_path):
+
+            # If the snapshot mount path is empty, this is probably just a "shadow" of a nested
+            # logical volume and therefore there's nothing to unmount.
+            if not os.path.isdir(snapshot_mount_path) or not os.listdir(snapshot_mount_path):
                 continue
 
             # This might fail if the directory is already mounted, but we swallow errors here since
@@ -374,7 +404,7 @@ def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, d
                 return
             except subprocess.CalledProcessError as error:
                 logger.debug(error)
-                return
+                continue
 
         if not dry_run:
             shutil.rmtree(snapshots_directory)

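The new hash path component can be seen in isolation: SHAKE-256 with a 10-byte digest yields a short, deterministic, per-mount-point directory name, so snapshot paths for `/var` and `/var/spool` no longer nest inside one another. A sketch under that assumption (helper name hypothetical):

```python
import hashlib
import os

MOUNT_POINT_HASH_LENGTH = 10  # Bytes of SHAKE-256 output; hexdigest() doubles it.


def snapshot_mount_path(runtime_directory, mount_point):
    # A distinct digest per mount point keeps overlapping mount points from
    # producing colliding or nested snapshot directories.
    digest = hashlib.shake_256(mount_point.encode('utf-8')).hexdigest(
        MOUNT_POINT_HASH_LENGTH
    )

    return os.path.join(
        runtime_directory,
        'lvm_snapshots',
        digest,
        mount_point.lstrip(os.path.sep),
    )
```

This is also why the cleanup glob gains a `*` component: each snapshot now lives under its own hash directory.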
+ 151 - 31
borgmatic/hooks/data_source/mariadb.py

@@ -1,10 +1,12 @@
 import copy
 import logging
 import os
+import re
 import shlex
 
 import borgmatic.borg.pattern
 import borgmatic.config.paths
+import borgmatic.hooks.credential.parse
 from borgmatic.execute import (
     execute_command,
     execute_command_and_capture_output,
@@ -22,14 +24,92 @@ def make_dump_path(base_directory):  # pragma: no cover
     return dump.make_data_source_dump_path(base_directory, 'mariadb_databases')
 
 
-SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
+DEFAULTS_EXTRA_FILE_FLAG_PATTERN = re.compile('^--defaults-extra-file=(?P<filename>.*)$')
+
+
+def parse_extra_options(extra_options):
+    '''
+    Given an extra options string, split the options into a tuple and return it. Additionally, if
+    the first option is "--defaults-extra-file=...", then remove it from the options and return the
+    filename.
+
+    So the return value is a tuple of: (parsed options, defaults extra filename).
+
+    The intent is to support downstream merging of multiple "--defaults-extra-file"s, as
+    MariaDB/MySQL only allows one at a time.
+    '''
+    split_extra_options = tuple(shlex.split(extra_options)) if extra_options else ()
+
+    if not split_extra_options:
+        return ((), None)
+
+    match = DEFAULTS_EXTRA_FILE_FLAG_PATTERN.match(split_extra_options[0])
+
+    if not match:
+        return (split_extra_options, None)
+
+    return (split_extra_options[1:], match.group('filename'))
 
 
-def database_names_to_dump(database, extra_environment, dry_run):
+def make_defaults_file_options(username=None, password=None, defaults_extra_filename=None):
     '''
-    Given a requested database config, return the corresponding sequence of database names to dump.
-    In the case of "all", query for the names of databases on the configured host and return them,
-    excluding any system databases that will cause problems during restore.
+    Given a database username and/or password, write it to an anonymous pipe and return the flags
+    for passing that file descriptor to an executed command. The idea is that this is a more secure
+    way to transmit credentials to a database client than using an environment variable.
+
+    If neither a username nor a password is given, then return the options for the given defaults extra
+    filename (if any). But if there is a username and/or password and a defaults extra filename is
+    given, then "!include" it from the generated file, effectively allowing multiple defaults extra
+    files.
+
+    Do not use the returned value for multiple different command invocations. That will not work
+    because each pipe is "used up" once read.
+    '''
+    escaped_password = None if password is None else password.replace('\\', '\\\\')
+
+    values = '\n'.join(
+        (
+            (f'user={username}' if username is not None else ''),
+            (f'password="{escaped_password}"' if escaped_password is not None else ''),
+        )
+    ).strip()
+
+    if not values:
+        if defaults_extra_filename:
+            return (f'--defaults-extra-file={defaults_extra_filename}',)
+
+        return ()
+
+    fields_message = ' and '.join(
+        field_name
+        for field_name in (
+            (f'username ({username})' if username is not None else None),
+            ('password' if password is not None else None),
+        )
+        if field_name is not None
+    )
+    include_message = f' (including {defaults_extra_filename})' if defaults_extra_filename else ''
+    logger.debug(f'Writing database {fields_message} to defaults extra file pipe{include_message}')
+
+    include = f'!include {defaults_extra_filename}\n' if defaults_extra_filename else ''
+
+    read_file_descriptor, write_file_descriptor = os.pipe()
+    os.write(write_file_descriptor, f'{include}[client]\n{values}'.encode('utf-8'))
+    os.close(write_file_descriptor)
+
+    # This plus subprocess.Popen(..., close_fds=False) in execute.py is necessary for the database
+    # client child process to inherit the file descriptor.
+    os.set_inheritable(read_file_descriptor, True)
+
+    return (f'--defaults-extra-file=/dev/fd/{read_file_descriptor}',)
+
+
+def database_names_to_dump(database, config, username, password, environment, dry_run):
+    '''
+    Given a requested database config, a configuration dict, a database username and password, an
+    environment dict, and whether this is a dry run, return the corresponding sequence of database
+    names to dump. In the case of "all", query for the names of databases on the configured host and
+    return them, excluding any system databases that will cause problems during restore.
     '''
     if database['name'] != 'all':
         return (database['name'],)
@@ -39,20 +119,23 @@ def database_names_to_dump(database, extra_environment, dry_run):
     mariadb_show_command = tuple(
         shlex.quote(part) for part in shlex.split(database.get('mariadb_command') or 'mariadb')
     )
+    extra_options, defaults_extra_filename = parse_extra_options(database.get('list_options'))
     show_command = (
         mariadb_show_command
-        + (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
+        + make_defaults_file_options(username, password, defaults_extra_filename)
+        + extra_options
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
         + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
-        + (('--user', database['username']) if 'username' in database else ())
+        + (('--ssl',) if database.get('tls') is True else ())
+        + (('--skip-ssl',) if database.get('tls') is False else ())
         + ('--skip-column-names', '--batch')
         + ('--execute', 'show schemas')
     )
+
     logger.debug('Querying for "all" MariaDB databases to dump')
-    show_output = execute_command_and_capture_output(
-        show_command, extra_environment=extra_environment
-    )
+
+    show_output = execute_command_and_capture_output(show_command, environment=environment)
 
     return tuple(
         show_name
@@ -61,8 +144,19 @@ def database_names_to_dump(database, extra_environment, dry_run):
     )
 
 
+SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
+
+
 def execute_dump_command(
-    database, dump_path, database_names, extra_environment, dry_run, dry_run_label
+    database,
+    config,
+    username,
+    password,
+    dump_path,
+    database_names,
+    environment,
+    dry_run,
+    dry_run_label,
 ):
     '''
     Kick off a dump for the given MariaDB database (provided as a configuration dict) to a named
@@ -89,14 +183,17 @@ def execute_dump_command(
         shlex.quote(part)
         for part in shlex.split(database.get('mariadb_dump_command') or 'mariadb-dump')
     )
+    extra_options, defaults_extra_filename = parse_extra_options(database.get('options'))
     dump_command = (
         mariadb_dump_command
-        + (tuple(database['options'].split(' ')) if 'options' in database else ())
+        + make_defaults_file_options(username, password, defaults_extra_filename)
+        + extra_options
         + (('--add-drop-database',) if database.get('add_drop_database', True) else ())
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
         + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
-        + (('--user', database['username']) if 'username' in database else ())
+        + (('--ssl',) if database.get('tls') is True else ())
+        + (('--skip-ssl',) if database.get('tls') is False else ())
         + ('--databases',)
         + database_names
         + ('--result-file', dump_filename)
@@ -110,7 +207,7 @@ def execute_dump_command(
 
     return execute_command(
         dump_command,
-        extra_environment=extra_environment,
+        environment=environment,
         run_to_completion=False,
     )
 
@@ -152,8 +249,16 @@ def dump_data_sources(
 
     for database in databases:
         dump_path = make_dump_path(borgmatic_runtime_directory)
-        extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
-        dump_database_names = database_names_to_dump(database, extra_environment, dry_run)
+        username = borgmatic.hooks.credential.parse.resolve_credential(
+            database.get('username'), config
+        )
+        password = borgmatic.hooks.credential.parse.resolve_credential(
+            database.get('password'), config
+        )
+        environment = dict(os.environ)
+        dump_database_names = database_names_to_dump(
+            database, config, username, password, environment, dry_run
+        )
 
         if not dump_database_names:
             if dry_run:
@@ -168,9 +273,12 @@ def dump_data_sources(
                 processes.append(
                     execute_dump_command(
                         renamed_database,
+                        config,
+                        username,
+                        password,
                         dump_path,
                         (dump_name,),
-                        extra_environment,
+                        environment,
                         dry_run,
                         dry_run_label,
                     )
@@ -179,9 +287,12 @@ def dump_data_sources(
             processes.append(
                 execute_dump_command(
                     database,
+                    config,
+                    username,
+                    password,
                     dump_path,
                     dump_database_names,
-                    extra_environment,
+                    environment,
                     dry_run,
                     dry_run_label,
                 )
@@ -190,7 +301,8 @@ def dump_data_sources(
     if not dry_run:
         patterns.append(
             borgmatic.borg.pattern.Pattern(
-                os.path.join(borgmatic_runtime_directory, 'mariadb_databases')
+                os.path.join(borgmatic_runtime_directory, 'mariadb_databases'),
+                source=borgmatic.borg.pattern.Pattern_source.HOOK,
             )
         )
 
@@ -251,30 +363,38 @@ def restore_data_source_dump(
     port = str(
         connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
     )
-    username = connection_params['username'] or data_source.get(
-        'restore_username', data_source.get('username')
+    tls = data_source.get('restore_tls', data_source.get('tls'))
+    username = borgmatic.hooks.credential.parse.resolve_credential(
+        (
+            connection_params['username']
+            or data_source.get('restore_username', data_source.get('username'))
+        ),
+        config,
     )
-    password = connection_params['password'] or data_source.get(
-        'restore_password', data_source.get('password')
+    password = borgmatic.hooks.credential.parse.resolve_credential(
+        (
+            connection_params['password']
+            or data_source.get('restore_password', data_source.get('password'))
+        ),
+        config,
     )
 
     mariadb_restore_command = tuple(
         shlex.quote(part) for part in shlex.split(data_source.get('mariadb_command') or 'mariadb')
     )
+    extra_options, defaults_extra_filename = parse_extra_options(data_source.get('restore_options'))
     restore_command = (
         mariadb_restore_command
+        + make_defaults_file_options(username, password, defaults_extra_filename)
+        + extra_options
         + ('--batch',)
-        + (
-            tuple(data_source['restore_options'].split(' '))
-            if 'restore_options' in data_source
-            else ()
-        )
         + (('--host', hostname) if hostname else ())
         + (('--port', str(port)) if port else ())
         + (('--protocol', 'tcp') if hostname or port else ())
-        + (('--user', username) if username else ())
+        + (('--ssl',) if tls is True else ())
+        + (('--skip-ssl',) if tls is False else ())
     )
-    extra_environment = {'MYSQL_PWD': password} if password else None
+    environment = dict(os.environ)
 
     logger.debug(f"Restoring MariaDB database {data_source['name']}{dry_run_label}")
     if dry_run:
@@ -287,5 +407,5 @@ def restore_data_source_dump(
         [extract_process],
         output_log_level=logging.DEBUG,
         input_file=extract_process.stdout,
-        extra_environment=extra_environment,
+        environment=environment,
     )
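The `--ssl` / `--skip-ssl` pairs added throughout this diff implement a tri-state TLS option: a flag is only emitted when `tls` is explicitly `True` or explicitly `False`, and nothing is added when the option is unset. A minimal sketch of that pattern (an illustrative helper, not a borgmatic API):

```python
def tls_flags(tls):
    '''
    Map a tri-state TLS setting (True/False/None) to MariaDB/MySQL client
    flags, mirroring the tuple-concatenation pattern used in the diff:
    '--ssl' only when explicitly enabled, '--skip-ssl' only when explicitly
    disabled, and no flag at all when the setting is absent.
    '''
    return (('--ssl',) if tls is True else ()) + (('--skip-ssl',) if tls is False else ())
```

Using `is True` / `is False` rather than truthiness is what makes the unset (`None`) case fall through with no flag.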

+ 68 - 16
borgmatic/hooks/data_source/mongodb.py

@@ -4,6 +4,7 @@ import shlex
 
 import borgmatic.borg.pattern
 import borgmatic.config.paths
+import borgmatic.hooks.credential.parse
 from borgmatic.execute import execute_command, execute_command_with_processes
 from borgmatic.hooks.data_source import dump
 
@@ -52,6 +53,7 @@ def dump_data_sources(
     logger.info(f'Dumping MongoDB databases{dry_run_label}')
 
     processes = []
+
     for database in databases:
         name = database['name']
         dump_filename = dump.make_data_source_dump_filename(
@@ -68,7 +70,7 @@ def dump_data_sources(
         if dry_run:
             continue
 
-        command = build_dump_command(database, dump_filename, dump_format)
+        command = build_dump_command(database, config, dump_filename, dump_format)
 
         if dump_format == 'directory':
             dump.create_parent_directory_for_dump(dump_filename)
@@ -80,26 +82,65 @@ def dump_data_sources(
     if not dry_run:
         patterns.append(
             borgmatic.borg.pattern.Pattern(
-                os.path.join(borgmatic_runtime_directory, 'mongodb_databases')
+                os.path.join(borgmatic_runtime_directory, 'mongodb_databases'),
+                source=borgmatic.borg.pattern.Pattern_source.HOOK,
             )
         )
 
     return processes
 
 
-def build_dump_command(database, dump_filename, dump_format):
+def make_password_config_file(password):
+    '''
+    Given a database password, write it as a MongoDB configuration file to an anonymous pipe and
+    return its filename. The idea is that this is a more secure way to transmit a password to
+    MongoDB than providing it directly on the command-line.
+
+    Do not use the returned value for multiple different command invocations. That will not work
+    because each pipe is "used up" once read.
+    '''
+    logger.debug('Writing MongoDB password to configuration file pipe')
+
+    read_file_descriptor, write_file_descriptor = os.pipe()
+    os.write(write_file_descriptor, f'password: {password}'.encode('utf-8'))
+    os.close(write_file_descriptor)
+
+    # This plus subprocess.Popen(..., close_fds=False) in execute.py is necessary for the database
+    # client child process to inherit the file descriptor.
+    os.set_inheritable(read_file_descriptor, True)
+
+    return f'/dev/fd/{read_file_descriptor}'
+
+
+def build_dump_command(database, config, dump_filename, dump_format):
     '''
-    Return the mongodump command from a single database configuration.
+    Return the custom mongodump_command from a single database configuration.
     '''
     all_databases = database['name'] == 'all'
 
+    password = borgmatic.hooks.credential.parse.resolve_credential(database.get('password'), config)
+
+    dump_command = tuple(
+        shlex.quote(part) for part in shlex.split(database.get('mongodump_command') or 'mongodump')
+    )
     return (
-        ('mongodump',)
+        dump_command
         + (('--out', shlex.quote(dump_filename)) if dump_format == 'directory' else ())
         + (('--host', shlex.quote(database['hostname'])) if 'hostname' in database else ())
         + (('--port', shlex.quote(str(database['port']))) if 'port' in database else ())
-        + (('--username', shlex.quote(database['username'])) if 'username' in database else ())
-        + (('--password', shlex.quote(database['password'])) if 'password' in database else ())
+        + (
+            (
+                '--username',
+                shlex.quote(
+                    borgmatic.hooks.credential.parse.resolve_credential(
+                        database['username'], config
+                    )
+                ),
+            )
+            if 'username' in database
+            else ()
+        )
+        + (('--config', make_password_config_file(password)) if password else ())
         + (
             ('--authenticationDatabase', shlex.quote(database['authentication_database']))
             if 'authentication_database' in database
@@ -173,7 +214,7 @@ def restore_data_source_dump(
         data_source.get('hostname'),
     )
     restore_command = build_restore_command(
-        extract_process, data_source, dump_filename, connection_params
+        extract_process, data_source, config, dump_filename, connection_params
     )
 
     logger.debug(f"Restoring MongoDB database {data_source['name']}{dry_run_label}")
@@ -190,22 +231,33 @@ def restore_data_source_dump(
     )
 
 
-def build_restore_command(extract_process, database, dump_filename, connection_params):
+def build_restore_command(extract_process, database, config, dump_filename, connection_params):
     '''
-    Return the mongorestore command from a single database configuration.
+    Return the custom mongorestore_command from a single database configuration.
     '''
     hostname = connection_params['hostname'] or database.get(
         'restore_hostname', database.get('hostname')
     )
     port = str(connection_params['port'] or database.get('restore_port', database.get('port', '')))
-    username = connection_params['username'] or database.get(
-        'restore_username', database.get('username')
+    username = borgmatic.hooks.credential.parse.resolve_credential(
+        (
+            connection_params['username']
+            or database.get('restore_username', database.get('username'))
+        ),
+        config,
     )
-    password = connection_params['password'] or database.get(
-        'restore_password', database.get('password')
+    password = borgmatic.hooks.credential.parse.resolve_credential(
+        (
+            connection_params['password']
+            or database.get('restore_password', database.get('password'))
+        ),
+        config,
     )
 
-    command = ['mongorestore']
+    command = list(
+        shlex.quote(part)
+        for part in shlex.split(database.get('mongorestore_command') or 'mongorestore')
+    )
     if extract_process:
         command.append('--archive')
     else:
@@ -219,7 +271,7 @@ def build_restore_command(extract_process, database, dump_filename, connection_p
     if username:
         command.extend(('--username', username))
     if password:
-        command.extend(('--password', password))
+        command.extend(('--config', make_password_config_file(password)))
     if 'authentication_database' in database:
         command.extend(('--authenticationDatabase', database['authentication_database']))
     if 'restore_options' in database:
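The `make_password_config_file()` function above keeps the MongoDB password off the command line by writing it into an anonymous pipe and handing the child process a `/dev/fd/N` path. A simplified standalone sketch of the same technique, assuming a POSIX system where `/dev/fd` is available:

```python
import os


def password_via_pipe(password):
    '''
    Write a secret to an anonymous pipe and return a /dev/fd path to it, so a
    child process can read the secret from an inherited file descriptor
    instead of exposing it in its argument list. Each returned path is
    single-use: once read, the pipe is drained. (Simplified sketch of the
    approach used by make_password_config_file() in the diff.)
    '''
    read_file_descriptor, write_file_descriptor = os.pipe()
    os.write(write_file_descriptor, f'password: {password}'.encode('utf-8'))
    os.close(write_file_descriptor)

    # Allow a spawned child process to inherit the read end.
    os.set_inheritable(read_file_descriptor, True)

    return f'/dev/fd/{read_file_descriptor}'
```

This only works for small secrets: an unread pipe holds at most a kernel-buffer's worth of data, which is why it suits a one-line `password:` configuration stanza.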

+ 82 - 30
borgmatic/hooks/data_source/mysql.py

@@ -5,6 +5,8 @@ import shlex
 
 import borgmatic.borg.pattern
 import borgmatic.config.paths
+import borgmatic.hooks.credential.parse
+import borgmatic.hooks.data_source.mariadb
 from borgmatic.execute import (
     execute_command,
     execute_command_and_capture_output,
@@ -25,11 +27,12 @@ def make_dump_path(base_directory):  # pragma: no cover
 SYSTEM_DATABASE_NAMES = ('information_schema', 'mysql', 'performance_schema', 'sys')
 
 
-def database_names_to_dump(database, extra_environment, dry_run):
+def database_names_to_dump(database, config, username, password, environment, dry_run):
     '''
-    Given a requested database config, return the corresponding sequence of database names to dump.
-    In the case of "all", query for the names of databases on the configured host and return them,
-    excluding any system databases that will cause problems during restore.
+    Given a requested database config, a configuration dict, a database username and password, an
+    environment dict, and whether this is a dry run, return the corresponding sequence of database
+    names to dump. In the case of "all", query for the names of databases on the configured host and
+    return them, excluding any system databases that will cause problems during restore.
     '''
     if database['name'] != 'all':
         return (database['name'],)
@@ -39,20 +42,27 @@ def database_names_to_dump(database, extra_environment, dry_run):
     mysql_show_command = tuple(
         shlex.quote(part) for part in shlex.split(database.get('mysql_command') or 'mysql')
     )
+    extra_options, defaults_extra_filename = (
+        borgmatic.hooks.data_source.mariadb.parse_extra_options(database.get('list_options'))
+    )
     show_command = (
         mysql_show_command
-        + (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
+        + borgmatic.hooks.data_source.mariadb.make_defaults_file_options(
+            username, password, defaults_extra_filename
+        )
+        + extra_options
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
         + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
-        + (('--user', database['username']) if 'username' in database else ())
+        + (('--ssl',) if database.get('tls') is True else ())
+        + (('--skip-ssl',) if database.get('tls') is False else ())
         + ('--skip-column-names', '--batch')
         + ('--execute', 'show schemas')
     )
+
     logger.debug('Querying for "all" MySQL databases to dump')
-    show_output = execute_command_and_capture_output(
-        show_command, extra_environment=extra_environment
-    )
+
+    show_output = execute_command_and_capture_output(show_command, environment=environment)
 
     return tuple(
         show_name
@@ -62,7 +72,15 @@ def database_names_to_dump(database, extra_environment, dry_run):
 
 
 def execute_dump_command(
-    database, dump_path, database_names, extra_environment, dry_run, dry_run_label
+    database,
+    config,
+    username,
+    password,
+    dump_path,
+    database_names,
+    environment,
+    dry_run,
+    dry_run_label,
 ):
     '''
     Kick off a dump for the given MySQL/MariaDB database (provided as a configuration dict) to a
@@ -88,14 +106,21 @@ def execute_dump_command(
     mysql_dump_command = tuple(
         shlex.quote(part) for part in shlex.split(database.get('mysql_dump_command') or 'mysqldump')
     )
+    extra_options, defaults_extra_filename = (
+        borgmatic.hooks.data_source.mariadb.parse_extra_options(database.get('options'))
+    )
     dump_command = (
         mysql_dump_command
-        + (tuple(database['options'].split(' ')) if 'options' in database else ())
+        + borgmatic.hooks.data_source.mariadb.make_defaults_file_options(
+            username, password, defaults_extra_filename
+        )
+        + extra_options
         + (('--add-drop-database',) if database.get('add_drop_database', True) else ())
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
         + (('--protocol', 'tcp') if 'hostname' in database or 'port' in database else ())
-        + (('--user', database['username']) if 'username' in database else ())
+        + (('--ssl',) if database.get('tls') is True else ())
+        + (('--skip-ssl',) if database.get('tls') is False else ())
         + ('--databases',)
         + database_names
         + ('--result-file', dump_filename)
@@ -109,7 +134,7 @@ def execute_dump_command(
 
     return execute_command(
         dump_command,
-        extra_environment=extra_environment,
+        environment=environment,
         run_to_completion=False,
     )
 
@@ -151,8 +176,16 @@ def dump_data_sources(
 
     for database in databases:
         dump_path = make_dump_path(borgmatic_runtime_directory)
-        extra_environment = {'MYSQL_PWD': database['password']} if 'password' in database else None
-        dump_database_names = database_names_to_dump(database, extra_environment, dry_run)
+        username = borgmatic.hooks.credential.parse.resolve_credential(
+            database.get('username'), config
+        )
+        password = borgmatic.hooks.credential.parse.resolve_credential(
+            database.get('password'), config
+        )
+        environment = dict(os.environ)
+        dump_database_names = database_names_to_dump(
+            database, config, username, password, environment, dry_run
+        )
 
         if not dump_database_names:
             if dry_run:
@@ -167,9 +200,12 @@ def dump_data_sources(
                 processes.append(
                     execute_dump_command(
                         renamed_database,
+                        config,
+                        username,
+                        password,
                         dump_path,
                         (dump_name,),
-                        extra_environment,
+                        environment,
                         dry_run,
                         dry_run_label,
                     )
@@ -178,9 +214,12 @@ def dump_data_sources(
             processes.append(
                 execute_dump_command(
                     database,
+                    config,
+                    username,
+                    password,
                     dump_path,
                     dump_database_names,
-                    extra_environment,
+                    environment,
                     dry_run,
                     dry_run_label,
                 )
@@ -189,7 +228,8 @@ def dump_data_sources(
     if not dry_run:
         patterns.append(
             borgmatic.borg.pattern.Pattern(
-                os.path.join(borgmatic_runtime_directory, 'mysql_databases')
+                os.path.join(borgmatic_runtime_directory, 'mysql_databases'),
+                source=borgmatic.borg.pattern.Pattern_source.HOOK,
             )
         )
 
@@ -250,30 +290,42 @@ def restore_data_source_dump(
     port = str(
         connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
     )
-    username = connection_params['username'] or data_source.get(
-        'restore_username', data_source.get('username')
+    tls = data_source.get('restore_tls', data_source.get('tls'))
+    username = borgmatic.hooks.credential.parse.resolve_credential(
+        (
+            connection_params['username']
+            or data_source.get('restore_username', data_source.get('username'))
+        ),
+        config,
     )
-    password = connection_params['password'] or data_source.get(
-        'restore_password', data_source.get('password')
+    password = borgmatic.hooks.credential.parse.resolve_credential(
+        (
+            connection_params['password']
+            or data_source.get('restore_password', data_source.get('password'))
+        ),
+        config,
     )
 
     mysql_restore_command = tuple(
         shlex.quote(part) for part in shlex.split(data_source.get('mysql_command') or 'mysql')
     )
+    extra_options, defaults_extra_filename = (
+        borgmatic.hooks.data_source.mariadb.parse_extra_options(data_source.get('restore_options'))
+    )
     restore_command = (
         mysql_restore_command
-        + ('--batch',)
-        + (
-            tuple(data_source['restore_options'].split(' '))
-            if 'restore_options' in data_source
-            else ()
+        + borgmatic.hooks.data_source.mariadb.make_defaults_file_options(
+            username, password, defaults_extra_filename
         )
+        + extra_options
+        + ('--batch',)
         + (('--host', hostname) if hostname else ())
         + (('--port', str(port)) if port else ())
         + (('--protocol', 'tcp') if hostname or port else ())
-        + (('--user', username) if username else ())
+        + (('--ssl',) if tls is True else ())
+        + (('--skip-ssl',) if tls is False else ())
     )
-    extra_environment = {'MYSQL_PWD': password} if password else None
+    environment = dict(os.environ)
 
     logger.debug(f"Restoring MySQL database {data_source['name']}{dry_run_label}")
     if dry_run:
@@ -286,5 +338,5 @@ def restore_data_source_dump(
         [extract_process],
         output_log_level=logging.DEBUG,
         input_file=extract_process.stdout,
-        extra_environment=extra_environment,
+        environment=environment,
     )
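The MySQL hook now reuses `parse_extra_options()` and `make_defaults_file_options()` from the MariaDB hook, splitting any user-supplied `--defaults-extra-file=...` out of the option string so credentials can be merged into a generated defaults file. A hypothetical re-implementation of the splitting step, for illustration only (the real helper lives in `borgmatic.hooks.data_source.mariadb` and may differ in detail):

```python
import shlex


def parse_extra_options(option_string):
    '''
    Split a client option string with shlex, separating out the value of any
    --defaults-extra-file=... option. Return a tuple of (remaining options,
    defaults-extra filename or None). Hypothetical sketch of the helper
    referenced in the diff.
    '''
    options = tuple(shlex.split(option_string or ''))

    for index, option in enumerate(options):
        if option.startswith('--defaults-extra-file='):
            return (
                options[:index] + options[index + 1:],
                option.split('=', 1)[1],
            )

    return (options, None)
```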

+ 59 - 35
borgmatic/hooks/data_source/postgresql.py

@@ -7,6 +7,7 @@ import shlex
 
 import borgmatic.borg.pattern
 import borgmatic.config.paths
+import borgmatic.hooks.credential.parse
 from borgmatic.execute import (
     execute_command,
     execute_command_and_capture_output,
@@ -24,46 +25,52 @@ def make_dump_path(base_directory):  # pragma: no cover
     return dump.make_data_source_dump_path(base_directory, 'postgresql_databases')
 
 
-def make_extra_environment(database, restore_connection_params=None):
+def make_environment(database, config, restore_connection_params=None):
     '''
-    Make the extra_environment dict from the given database configuration. If restore connection
-    params are given, this is for a restore operation.
+    Make an environment dict from the current environment variables and the given database
+    configuration. If restore connection params are given, this is for a restore operation.
     '''
-    extra = dict()
+    environment = dict(os.environ)
 
     try:
         if restore_connection_params:
-            extra['PGPASSWORD'] = restore_connection_params.get('password') or database.get(
-                'restore_password', database['password']
+            environment['PGPASSWORD'] = borgmatic.hooks.credential.parse.resolve_credential(
+                (
+                    restore_connection_params.get('password')
+                    or database.get('restore_password', database['password'])
+                ),
+                config,
             )
         else:
-            extra['PGPASSWORD'] = database['password']
+            environment['PGPASSWORD'] = borgmatic.hooks.credential.parse.resolve_credential(
+                database['password'], config
+            )
     except (AttributeError, KeyError):
         pass
 
     if 'ssl_mode' in database:
-        extra['PGSSLMODE'] = database['ssl_mode']
+        environment['PGSSLMODE'] = database['ssl_mode']
     if 'ssl_cert' in database:
-        extra['PGSSLCERT'] = database['ssl_cert']
+        environment['PGSSLCERT'] = database['ssl_cert']
     if 'ssl_key' in database:
-        extra['PGSSLKEY'] = database['ssl_key']
+        environment['PGSSLKEY'] = database['ssl_key']
     if 'ssl_root_cert' in database:
-        extra['PGSSLROOTCERT'] = database['ssl_root_cert']
+        environment['PGSSLROOTCERT'] = database['ssl_root_cert']
     if 'ssl_crl' in database:
-        extra['PGSSLCRL'] = database['ssl_crl']
+        environment['PGSSLCRL'] = database['ssl_crl']
 
-    return extra
+    return environment
 
 
 EXCLUDED_DATABASE_NAMES = ('template0', 'template1')
 
 
-def database_names_to_dump(database, extra_environment, dry_run):
+def database_names_to_dump(database, config, environment, dry_run):
     '''
-    Given a requested database config, return the corresponding sequence of database names to dump.
-    In the case of "all" when a database format is given, query for the names of databases on the
-    configured host and return them. For "all" without a database format, just return a sequence
-    containing "all".
+    Given a requested database config and a configuration dict, return the corresponding sequence of
+    database names to dump. In the case of "all" when a database format is given, query for the
+    names of databases on the configured host and return them. For "all" without a database format,
+    just return a sequence containing "all".
     '''
     requested_name = database['name']
 
@@ -82,13 +89,18 @@ def database_names_to_dump(database, extra_environment, dry_run):
         + ('--list', '--no-password', '--no-psqlrc', '--csv', '--tuples-only')
         + (('--host', database['hostname']) if 'hostname' in database else ())
         + (('--port', str(database['port'])) if 'port' in database else ())
-        + (('--username', database['username']) if 'username' in database else ())
+        + (
+            (
+                '--username',
+                borgmatic.hooks.credential.parse.resolve_credential(database['username'], config),
+            )
+            if 'username' in database
+            else ()
+        )
         + (tuple(database['list_options'].split(' ')) if 'list_options' in database else ())
     )
     logger.debug('Querying for "all" PostgreSQL databases to dump')
-    list_output = execute_command_and_capture_output(
-        list_command, extra_environment=extra_environment
-    )
+    list_output = execute_command_and_capture_output(list_command, environment=environment)
 
     return tuple(
         row[0]
@@ -135,9 +147,9 @@ def dump_data_sources(
     logger.info(f'Dumping PostgreSQL databases{dry_run_label}')
 
     for database in databases:
-        extra_environment = make_extra_environment(database)
+        environment = make_environment(database, config)
         dump_path = make_dump_path(borgmatic_runtime_directory)
-        dump_database_names = database_names_to_dump(database, extra_environment, dry_run)
+        dump_database_names = database_names_to_dump(database, config, environment, dry_run)
 
         if not dump_database_names:
             if dry_run:
@@ -147,6 +159,7 @@ def dump_data_sources(
 
         for database_name in dump_database_names:
             dump_format = database.get('format', None if database_name == 'all' else 'custom')
+            compression = database.get('compression')
             default_dump_command = 'pg_dumpall' if database_name == 'all' else 'pg_dump'
             dump_command = tuple(
                 shlex.quote(part)
@@ -174,12 +187,20 @@ def dump_data_sources(
                 + (('--host', shlex.quote(database['hostname'])) if 'hostname' in database else ())
                 + (('--port', shlex.quote(str(database['port']))) if 'port' in database else ())
                 + (
-                    ('--username', shlex.quote(database['username']))
+                    (
+                        '--username',
+                        shlex.quote(
+                            borgmatic.hooks.credential.parse.resolve_credential(
+                                database['username'], config
+                            )
+                        ),
+                    )
                     if 'username' in database
                     else ()
                 )
                 + (('--no-owner',) if database.get('no_owner', False) else ())
                 + (('--format', shlex.quote(dump_format)) if dump_format else ())
+                + (('--compress', shlex.quote(str(compression))) if compression is not None else ())
                 + (('--file', shlex.quote(dump_filename)) if dump_format == 'directory' else ())
                 + (
                     tuple(shlex.quote(option) for option in database['options'].split(' '))
@@ -204,7 +225,7 @@ def dump_data_sources(
                 execute_command(
                     command,
                     shell=True,
-                    extra_environment=extra_environment,
+                    environment=environment,
                 )
             else:
                 dump.create_named_pipe_for_dump(dump_filename)
@@ -212,7 +233,7 @@ def dump_data_sources(
                     execute_command(
                         command,
                         shell=True,
-                        extra_environment=extra_environment,
+                        environment=environment,
                         run_to_completion=False,
                     )
                 )
@@ -220,7 +241,8 @@ def dump_data_sources(
     if not dry_run:
         patterns.append(
             borgmatic.borg.pattern.Pattern(
-                os.path.join(borgmatic_runtime_directory, 'postgresql_databases')
+                os.path.join(borgmatic_runtime_directory, 'postgresql_databases'),
+                source=borgmatic.borg.pattern.Pattern_source.HOOK,
             )
         )
 
@@ -290,8 +312,12 @@ def restore_data_source_dump(
     port = str(
         connection_params['port'] or data_source.get('restore_port', data_source.get('port', ''))
     )
-    username = connection_params['username'] or data_source.get(
-        'restore_username', data_source.get('username')
+    username = borgmatic.hooks.credential.parse.resolve_credential(
+        (
+            connection_params['username']
+            or data_source.get('restore_username', data_source.get('username'))
+        ),
+        config,
     )
 
     all_databases = bool(data_source['name'] == 'all')
@@ -344,9 +370,7 @@ def restore_data_source_dump(
         )
     )
 
-    extra_environment = make_extra_environment(
-        data_source, restore_connection_params=connection_params
-    )
+    environment = make_environment(data_source, config, restore_connection_params=connection_params)
 
     logger.debug(f"Restoring PostgreSQL database {data_source['name']}{dry_run_label}")
     if dry_run:
@@ -359,6 +383,6 @@ def restore_data_source_dump(
         [extract_process] if extract_process else [],
         output_log_level=logging.DEBUG,
         input_file=extract_process.stdout if extract_process else None,
-        extra_environment=extra_environment,
+        environment=environment,
     )
-    execute_command(analyze_command, extra_environment=extra_environment)
+    execute_command(analyze_command, environment=environment)
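The rename from `make_extra_environment()` to `make_environment()` reflects a behavioral change: instead of a small dict merged elsewhere, the hook now starts from a full copy of `os.environ` and layers the PG* variables on top, so the child process keeps `PATH` and friends while receiving its credentials and SSL settings. A simplified sketch of that shape (credential resolution omitted):

```python
import os


def make_pg_environment(database):
    '''
    Build a complete environment dict for a PostgreSQL client invocation:
    a copy of the current process environment with PG* variables applied
    from the database configuration. Illustrative sketch of the pattern in
    the diff, without borgmatic's credential-hook resolution.
    '''
    environment = dict(os.environ)

    if 'password' in database:
        environment['PGPASSWORD'] = database['password']

    for option, variable in (
        ('ssl_mode', 'PGSSLMODE'),
        ('ssl_cert', 'PGSSLCERT'),
        ('ssl_key', 'PGSSLKEY'),
        ('ssl_root_cert', 'PGSSLROOTCERT'),
        ('ssl_crl', 'PGSSLCRL'),
    ):
        if option in database:
            environment[variable] = database[option]

    return environment
```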

+ 16 - 6
borgmatic/hooks/data_source/snapshot.py

@@ -1,3 +1,4 @@
+import os
 import pathlib
 
 IS_A_HOOK = False
@@ -11,10 +12,14 @@ def get_contained_patterns(parent_directory, candidate_patterns):
     paths, but there's a parent directory (logical volume, dataset, subvolume, etc.) at /var, then
     /var is what we want to snapshot.
 
-    For this to work, a candidate pattern path can't have any globs or other non-literal characters
-    in the initial portion of the path that matches the parent directory. For instance, a parent
-    directory of /var would match a candidate pattern path of /var/log/*/data, but not a pattern
-    path like /v*/log/*/data.
+    If a parent directory and a candidate pattern are on different devices, skip the pattern. That's
+    because any snapshot of a parent directory won't actually include "contained" directories if
+    they reside on separate devices.
+
+    For this function to work, a candidate pattern path can't have any globs or other non-literal
+    characters in the initial portion of the path that matches the parent directory. For instance, a
+    parent directory of /var would match a candidate pattern path of /var/log/*/data, but not a
+    pattern path like /v*/log/*/data.
 
     The one exception is that if a regular expression pattern path starts with "^", that will get
     stripped off for purposes of matching against a parent directory.
@@ -27,12 +32,17 @@ def get_contained_patterns(parent_directory, candidate_patterns):
     if not candidate_patterns:
         return ()
 
+    parent_device = os.stat(parent_directory).st_dev if os.path.exists(parent_directory) else None
+
     contained_patterns = tuple(
         candidate
         for candidate in candidate_patterns
         for candidate_path in (pathlib.PurePath(candidate.path.lstrip('^')),)
-        if pathlib.PurePath(parent_directory) == candidate_path
-        or pathlib.PurePath(parent_directory) in candidate_path.parents
+        if (
+            pathlib.PurePath(parent_directory) == candidate_path
+            or pathlib.PurePath(parent_directory) in candidate_path.parents
+        )
+        if candidate.device == parent_device
     )
     candidate_patterns -= set(contained_patterns)
 

+ 11 - 7
borgmatic/hooks/data_source/sqlite.py

@@ -71,13 +71,16 @@ def dump_data_sources(
             )
             continue
 
-        command = (
-            'sqlite3',
+        sqlite_command = tuple(
+            shlex.quote(part) for part in shlex.split(database.get('sqlite_command') or 'sqlite3')
+        )
+        command = sqlite_command + (
             shlex.quote(database_path),
             '.dump',
             '>',
             shlex.quote(dump_filename),
         )
+
         logger.debug(
             f'Dumping SQLite database at {database_path} to {dump_filename}{dry_run_label}'
         )
@@ -90,7 +93,8 @@ def dump_data_sources(
     if not dry_run:
         patterns.append(
             borgmatic.borg.pattern.Pattern(
-                os.path.join(borgmatic_runtime_directory, 'sqlite_databases')
+                os.path.join(borgmatic_runtime_directory, 'sqlite_databases'),
+                source=borgmatic.borg.pattern.Pattern_source.HOOK,
             )
         )
 
@@ -159,11 +163,11 @@ def restore_data_source_dump(
     except FileNotFoundError:  # pragma: no cover
         pass
 
-    restore_command = (
-        'sqlite3',
-        database_path,
+    sqlite_restore_command = tuple(
+        shlex.quote(part)
+        for part in shlex.split(data_source.get('sqlite_restore_command') or 'sqlite3')
     )
-
+    restore_command = sqlite_restore_command + (shlex.quote(database_path),)
     # Don't give Borg local path so as to error on warnings, as "borg extract" only gives a warning
     # if the restore paths don't exist in the archive.
     execute_command_with_processes(

+ 60 - 13
borgmatic/hooks/data_source/zfs.py

@@ -1,5 +1,6 @@
 import collections
 import glob
+import hashlib
 import logging
 import os
 import shutil
@@ -38,6 +39,9 @@ def get_datasets_to_backup(zfs_command, patterns):
     pattern paths represent the requested datasets to snapshot. But also include any datasets tagged
     with a borgmatic-specific user property, whether or not they appear in the patterns.
 
+    Only include datasets that contain at least one root pattern sourced from borgmatic
+    configuration (as opposed to patterns generated elsewhere in borgmatic).
+
     Return the result as a sequence of Dataset instances, sorted by mount point.
     '''
     list_output = borgmatic.execute.execute_command_and_capture_output(
@@ -48,7 +52,7 @@ def get_datasets_to_backup(zfs_command, patterns):
             '-t',
             'filesystem',
             '-o',
-            f'name,mountpoint,{BORGMATIC_USER_PROPERTY}',
+            f'name,mountpoint,canmount,{BORGMATIC_USER_PROPERTY}',
         )
     )
 
@@ -60,7 +64,12 @@ def get_datasets_to_backup(zfs_command, patterns):
             (
                 Dataset(dataset_name, mount_point, (user_property_value == 'auto'), ())
                 for line in list_output.splitlines()
-                for (dataset_name, mount_point, user_property_value) in (line.rstrip().split('\t'),)
+                for (dataset_name, mount_point, can_mount, user_property_value) in (
+                    line.rstrip().split('\t'),
+                )
+                # Skip datasets that are marked "canmount=off", because mounting their snapshots will
+                # result in completely empty mount points—thereby preventing us from backing them up.
+                if can_mount == 'on'
             ),
             key=lambda dataset: dataset.mount_point,
             reverse=True,
@@ -83,7 +92,12 @@ def get_datasets_to_backup(zfs_command, patterns):
                 for contained_patterns in (
                     (
                         (
-                            (borgmatic.borg.pattern.Pattern(dataset.mount_point),)
+                            (
+                                borgmatic.borg.pattern.Pattern(
+                                    dataset.mount_point,
+                                    source=borgmatic.borg.pattern.Pattern_source.HOOK,
+                                ),
+                            )
                             if dataset.auto_backup
                             else ()
                         )
@@ -92,7 +106,12 @@ def get_datasets_to_backup(zfs_command, patterns):
                         )
                     ),
                 )
-                if contained_patterns
+                if dataset.auto_backup
+                or any(
+                    pattern.type == borgmatic.borg.pattern.Pattern_type.ROOT
+                    and pattern.source == borgmatic.borg.pattern.Pattern_source.CONFIG
+                    for pattern in contained_patterns
+                )
             ),
             key=lambda dataset: dataset.mount_point,
         )
@@ -115,7 +134,16 @@ def get_all_dataset_mount_points(zfs_command):
         )
     )
 
-    return tuple(sorted(line.rstrip() for line in list_output.splitlines()))
+    return tuple(
+        sorted(
+            {
+                mount_point
+                for line in list_output.splitlines()
+                for mount_point in (line.rstrip(),)
+                if mount_point != 'none'
+            }
+        )
+    )
 
 
 def snapshot_dataset(zfs_command, full_snapshot_name):  # pragma: no cover
@@ -155,10 +183,14 @@ def mount_snapshot(mount_command, full_snapshot_name, snapshot_mount_path):  # p
     )
 
 
-def make_borg_snapshot_pattern(pattern, normalized_runtime_directory):
+MOUNT_POINT_HASH_LENGTH = 10
+
+
+def make_borg_snapshot_pattern(pattern, dataset, normalized_runtime_directory):
     '''
-    Given a Borg pattern as a borgmatic.borg.pattern.Pattern instance, return a new Pattern with its
-    path rewritten to be in a snapshot directory based on the given runtime directory.
+    Given a Borg pattern as a borgmatic.borg.pattern.Pattern instance and the Dataset containing it,
+    return a new Pattern with its path rewritten to be in a snapshot directory based on both the
+    given runtime directory and the given Dataset's mount point.
 
     Move any initial caret in a regular expression pattern path to the beginning, so as not to break
     the regular expression.
@@ -173,6 +205,10 @@ def make_borg_snapshot_pattern(pattern, normalized_runtime_directory):
     rewritten_path = initial_caret + os.path.join(
         normalized_runtime_directory,
         'zfs_snapshots',
+        # Including this hash prevents conflicts between snapshot patterns for different datasets.
+        # For instance, without this, snapshotting a dataset at /var and another at /var/spool would
+        # result in overlapping snapshot patterns and therefore colliding mount attempts.
+        hashlib.shake_256(dataset.mount_point.encode('utf-8')).hexdigest(MOUNT_POINT_HASH_LENGTH),
         '.',  # Borg 1.4+ "slashdot" hack.
         # Included so that the source directory ends up in the Borg archive at its "original" path.
         pattern.path.lstrip('^').lstrip(os.path.sep),
@@ -183,6 +219,7 @@ def make_borg_snapshot_pattern(pattern, normalized_runtime_directory):
         pattern.type,
         pattern.style,
         pattern.device,
+        source=borgmatic.borg.pattern.Pattern_source.HOOK,
     )
 
 
@@ -209,7 +246,8 @@ def dump_data_sources(
     dry_run_label = ' (dry run; not actually snapshotting anything)' if dry_run else ''
     logger.info(f'Snapshotting ZFS datasets{dry_run_label}')
 
-    # List ZFS datasets to get their mount points.
+    # List ZFS datasets to get their mount points, but only consider those patterns that came from
+    # actual user configuration (as opposed to, say, other hooks).
     zfs_command = hook_config.get('zfs_command', 'zfs')
     requested_datasets = get_datasets_to_backup(zfs_command, patterns)
 
@@ -234,6 +272,9 @@ def dump_data_sources(
         snapshot_mount_path = os.path.join(
             normalized_runtime_directory,
             'zfs_snapshots',
+            hashlib.shake_256(dataset.mount_point.encode('utf-8')).hexdigest(
+                MOUNT_POINT_HASH_LENGTH
+            ),
             dataset.mount_point.lstrip(os.path.sep),
         )
 
@@ -249,7 +290,9 @@ def dump_data_sources(
         )
 
         for pattern in dataset.contained_patterns:
-            snapshot_pattern = make_borg_snapshot_pattern(pattern, normalized_runtime_directory)
+            snapshot_pattern = make_borg_snapshot_pattern(
+                pattern, dataset, normalized_runtime_directory
+            )
 
             # Attempt to update the pattern in place, since pattern order matters to Borg.
             try:
@@ -334,6 +377,7 @@ def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, d
             os.path.normpath(borgmatic_runtime_directory),
         ),
         'zfs_snapshots',
+        '*',
     )
     logger.debug(f'Looking for snapshots to remove in {snapshots_glob}{dry_run_label}')
     umount_command = hook_config.get('umount_command', 'umount')
@@ -346,7 +390,10 @@ def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, d
         # child datasets before the shorter mount point paths of parent datasets.
         for mount_point in reversed(dataset_mount_points):
             snapshot_mount_path = os.path.join(snapshots_directory, mount_point.lstrip(os.path.sep))
-            if not os.path.isdir(snapshot_mount_path):
+
+            # If the snapshot mount path is empty, this is probably just a "shadow" of a nested
+            # dataset and therefore there's nothing to unmount.
+            if not os.path.isdir(snapshot_mount_path) or not os.listdir(snapshot_mount_path):
                 continue
 
             # This might fail if the path is already mounted, but we swallow errors here since we'll
@@ -370,10 +417,10 @@ def remove_data_source_dumps(hook_config, config, borgmatic_runtime_directory, d
                     return
                 except subprocess.CalledProcessError as error:
                     logger.debug(error)
-                    return
+                    continue
 
         if not dry_run:
-            shutil.rmtree(snapshots_directory)
+            shutil.rmtree(snapshot_mount_path, ignore_errors=True)
 
     # Destroy snapshots.
     full_snapshot_names = get_all_snapshots(zfs_command)

+ 1 - 0
borgmatic/hooks/dispatch.py

@@ -3,6 +3,7 @@ import importlib
 import logging
 import pkgutil
 
+import borgmatic.hooks.command
 import borgmatic.hooks.credential
 import borgmatic.hooks.data_source
 import borgmatic.hooks.monitoring

+ 1 - 1
borgmatic/hooks/monitoring/cronhub.py

@@ -28,7 +28,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
     filename in any log entries. If this is a dry run, then don't actually ping anything.
     '''
     if state not in MONITOR_STATE_TO_CRONHUB:
-        logger.debug(f'Ignoring unsupported monitoring {state.name.lower()} in Cronhub hook')
+        logger.debug(f'Ignoring unsupported monitoring state {state.name.lower()} in Cronhub hook')
         return
 
     dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''

+ 1 - 1
borgmatic/hooks/monitoring/cronitor.py

@@ -28,7 +28,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
     filename in any log entries. If this is a dry run, then don't actually ping anything.
     '''
     if state not in MONITOR_STATE_TO_CRONITOR:
-        logger.debug(f'Ignoring unsupported monitoring {state.name.lower()} in Cronitor hook')
+        logger.debug(f'Ignoring unsupported monitoring state {state.name.lower()} in Cronitor hook')
         return
 
     dry_run_label = ' (dry run; not actually pinging)' if dry_run else ''

+ 1 - 1
borgmatic/hooks/monitoring/logs.py

@@ -64,7 +64,7 @@ def get_handler(identifier):
 def format_buffered_logs_for_payload(identifier):
     '''
     Get the handler previously added to the root logger, and slurp buffered logs out of it to
-    send to Healthchecks.
+    send to the monitoring service.
     '''
     try:
         buffering_handler = get_handler(identifier)

+ 16 - 3
borgmatic/hooks/monitoring/ntfy.py

@@ -2,6 +2,8 @@ import logging
 
 import requests
 
+import borgmatic.hooks.credential.parse
+
 logger = logging.getLogger(__name__)
 
 
@@ -47,9 +49,20 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
             'X-Tags': state_config.get('tags'),
         }
 
-        username = hook_config.get('username')
-        password = hook_config.get('password')
-        access_token = hook_config.get('access_token')
+        try:
+            username = borgmatic.hooks.credential.parse.resolve_credential(
+                hook_config.get('username'), config
+            )
+            password = borgmatic.hooks.credential.parse.resolve_credential(
+                hook_config.get('password'), config
+            )
+            access_token = borgmatic.hooks.credential.parse.resolve_credential(
+                hook_config.get('access_token'), config
+            )
+        except ValueError as error:
+            logger.warning(f'Ntfy credential error: {error}')
+            return
+
         auth = None
 
         if access_token is not None:

+ 41 - 11
borgmatic/hooks/monitoring/pagerduty.py

@@ -5,20 +5,37 @@ import platform
 
 import requests
 
+import borgmatic.hooks.credential.parse
+import borgmatic.hooks.monitoring.logs
 from borgmatic.hooks.monitoring import monitor
 
 logger = logging.getLogger(__name__)
 
 EVENTS_API_URL = 'https://events.pagerduty.com/v2/enqueue'
+DEFAULT_LOGS_PAYLOAD_LIMIT_BYTES = 10000
+HANDLER_IDENTIFIER = 'pagerduty'
 
 
-def initialize_monitor(
-    integration_key, config, config_filename, monitoring_log_level, dry_run
-):  # pragma: no cover
+def initialize_monitor(hook_config, config, config_filename, monitoring_log_level, dry_run):
     '''
-    No initialization is necessary for this monitor.
+    Add a handler to the root logger that stores in memory the most recent logs emitted. That way,
+    we can send them all to PagerDuty upon a failure state. But skip this if the "send_logs" option
+    is false.
     '''
-    pass
+    if hook_config.get('send_logs') is False:
+        return
+
+    ping_body_limit = max(
+        DEFAULT_LOGS_PAYLOAD_LIMIT_BYTES
+        - len(borgmatic.hooks.monitoring.logs.PAYLOAD_TRUNCATION_INDICATOR),
+        0,
+    )
+
+    borgmatic.hooks.monitoring.logs.add_handler(
+        borgmatic.hooks.monitoring.logs.Forgetful_buffering_handler(
+            HANDLER_IDENTIFIER, ping_body_limit, monitoring_log_level
+        )
+    )
 
 
 def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
@@ -29,21 +46,30 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
     '''
     if state != monitor.State.FAIL:
         logger.debug(
-            f'Ignoring unsupported monitoring {state.name.lower()} in PagerDuty hook',
+            f'Ignoring unsupported monitoring state {state.name.lower()} in PagerDuty hook',
         )
         return
 
     dry_run_label = ' (dry run; not actually sending)' if dry_run else ''
     logger.info(f'Sending failure event to PagerDuty {dry_run_label}')
 
-    if dry_run:
+    try:
+        integration_key = borgmatic.hooks.credential.parse.resolve_credential(
+            hook_config.get('integration_key'), config
+        )
+    except ValueError as error:
+        logger.warning(f'PagerDuty credential error: {error}')
         return
 
+    logs_payload = borgmatic.hooks.monitoring.logs.format_buffered_logs_for_payload(
+        HANDLER_IDENTIFIER
+    )
+
     hostname = platform.node()
     local_timestamp = datetime.datetime.now(datetime.timezone.utc).astimezone().isoformat()
     payload = json.dumps(
         {
-            'routing_key': hook_config['integration_key'],
+            'routing_key': integration_key,
             'event_action': 'trigger',
             'payload': {
                 'summary': f'backup failed on {hostname}',
@@ -57,11 +83,14 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
                     'hostname': hostname,
                     'configuration filename': config_filename,
                     'server time': local_timestamp,
+                    'logs': logs_payload,
                 },
             },
         }
     )
-    logger.debug(f'Using PagerDuty payload: {payload}')
+
+    if dry_run:
+        return
 
     logging.getLogger('urllib3').setLevel(logging.ERROR)
     try:
@@ -74,6 +103,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
 
 def destroy_monitor(ping_url_or_uuid, config, monitoring_log_level, dry_run):  # pragma: no cover
     '''
-    No destruction is necessary for this monitor.
+    Remove the monitor handler that was added to the root logger. This prevents the handler from
+    getting reused by other instances of this monitor.
     '''
-    pass
+    borgmatic.hooks.monitoring.logs.remove_handler(HANDLER_IDENTIFIER)

+ 10 - 2
borgmatic/hooks/monitoring/pushover.py

@@ -2,6 +2,8 @@ import logging
 
 import requests
 
+import borgmatic.hooks.credential.parse
+
 logger = logging.getLogger(__name__)
 
 
@@ -32,8 +34,14 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
 
     state_config = hook_config.get(state.name.lower(), {})
 
-    token = hook_config.get('token')
-    user = hook_config.get('user')
+    try:
+        token = borgmatic.hooks.credential.parse.resolve_credential(
+            hook_config.get('token'), config
+        )
+        user = borgmatic.hooks.credential.parse.resolve_credential(hook_config.get('user'), config)
+    except ValueError as error:
+        logger.warning(f'Pushover credential error: {error}')
+        return
 
     logger.info(f'Updating Pushover{dry_run_label}')
 

+ 1 - 1
borgmatic/hooks/monitoring/uptime_kuma.py

@@ -37,7 +37,7 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
     logging.getLogger('urllib3').setLevel(logging.ERROR)
 
     try:
-        response = requests.get(f'{push_url}?{query}')
+        response = requests.get(f'{push_url}?{query}', verify=hook_config.get('verify_tls', True))
         if not response.ok:
             response.raise_for_status()
     except requests.exceptions.RequestException as error:

+ 83 - 32
borgmatic/hooks/monitoring/zabbix.py

@@ -2,6 +2,8 @@ import logging
 
 import requests
 
+import borgmatic.hooks.credential.parse
+
 logger = logging.getLogger(__name__)
 
 
@@ -14,6 +16,42 @@ def initialize_monitor(
     pass
 
 
+def send_zabbix_request(server, headers, data):
+    '''
+    Given a Zabbix server URL, HTTP headers as a dict, and valid Zabbix JSON payload data as a dict,
+    send a request to the Zabbix server via API.
+
+    Return the response "result" value or None.
+    '''
+    logging.getLogger('urllib3').setLevel(logging.ERROR)
+
+    logger.debug(f'Sending a "{data["method"]}" request to the Zabbix server')
+
+    try:
+        response = requests.post(server, headers=headers, json=data)
+
+        if not response.ok:
+            response.raise_for_status()
+    except requests.exceptions.RequestException as error:
+        logger.warning(f'Zabbix error: {error}')
+
+        return None
+
+    try:
+        result = response.json().get('result')
+        error_message = result['data'][0]['error']
+    except requests.exceptions.JSONDecodeError:
+        logger.warning('Zabbix error: Cannot parse API response')
+
+        return None
+    except (TypeError, KeyError, IndexError):
+        return result
+    else:
+        logger.warning(f'Zabbix error: {error_message}')
+
+        return None
+
+
 def ping_monitor(hook_config, config, config_filename, state, monitoring_log_level, dry_run):
     '''
     Update the configured Zabbix item using either the itemid, or a host and key.
@@ -34,23 +72,31 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
         },
     )
 
+    try:
+        username = borgmatic.hooks.credential.parse.resolve_credential(
+            hook_config.get('username'), config
+        )
+        password = borgmatic.hooks.credential.parse.resolve_credential(
+            hook_config.get('password'), config
+        )
+        api_key = borgmatic.hooks.credential.parse.resolve_credential(
+            hook_config.get('api_key'), config
+        )
+    except ValueError as error:
+        logger.warning(f'Zabbix credential error: {error}')
+
+        return
+
     server = hook_config.get('server')
-    username = hook_config.get('username')
-    password = hook_config.get('password')
-    api_key = hook_config.get('api_key')
     itemid = hook_config.get('itemid')
     host = hook_config.get('host')
     key = hook_config.get('key')
     value = state_config.get('value')
     headers = {'Content-Type': 'application/json-rpc'}
 
-    logger.info(f'Updating Zabbix{dry_run_label}')
+    logger.info(f'Pinging Zabbix{dry_run_label}')
     logger.debug(f'Using Zabbix URL: {server}')
 
-    if server is None:
-        logger.warning('Server missing for Zabbix')
-        return
-
     # Determine the Zabbix method used to store the value: itemid or host/key
     if itemid is not None:
         logger.info(f'Updating {itemid} on Zabbix')
@@ -61,8 +107,8 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
             'id': 1,
         }
 
-    elif (host and key) is not None:
-        logger.info(f'Updating Host:{host} and Key:{key} on Zabbix')
+    elif host is not None and key is not None:
+        logger.info(f'Updating Host: "{host}" and Key: "{key}" on Zabbix')
         data = {
             'jsonrpc': '2.0',
             'method': 'history.push',
@@ -72,58 +118,63 @@ def ping_monitor(hook_config, config, config_filename, state, monitoring_log_lev
 
     elif host is not None:
         logger.warning('Key missing for Zabbix')
-        return
 
+        return
     elif key is not None:
         logger.warning('Host missing for Zabbix')
+
         return
     else:
         logger.warning('No Zabbix itemid or host/key provided')
+
         return
 
     # Determine the authentication method: API key or username/password
     if api_key is not None:
         logger.info('Using API key auth for Zabbix')
-        headers['Authorization'] = 'Bearer ' + api_key
-
-    elif (username and password) is not None:
-        logger.info('Using user/pass auth with user {username} for Zabbix')
-        auth_data = {
+        headers['Authorization'] = f'Bearer {api_key}'
+    elif username is not None and password is not None:
+        logger.info(f'Using user/pass auth with user {username} for Zabbix')
+        login_data = {
             'jsonrpc': '2.0',
             'method': 'user.login',
             'params': {'username': username, 'password': password},
             'id': 1,
         }
+
         if not dry_run:
-            logging.getLogger('urllib3').setLevel(logging.ERROR)
-            try:
-                response = requests.post(server, headers=headers, json=auth_data)
-                data['auth'] = response.json().get('result')
-                if not response.ok:
-                    response.raise_for_status()
-            except requests.exceptions.RequestException as error:
-                logger.warning(f'Zabbix error: {error}')
+            result = send_zabbix_request(server, headers, login_data)
+
+            if not result:
                 return
 
+            headers['Authorization'] = f'Bearer {result}'
     elif username is not None:
         logger.warning('Password missing for Zabbix authentication')
-        return
 
+        return
     elif password is not None:
         logger.warning('Username missing for Zabbix authentication')
+
         return
     else:
         logger.warning('Authentication data missing for Zabbix')
+
         return
 
     if not dry_run:
-        logging.getLogger('urllib3').setLevel(logging.ERROR)
-        try:
-            response = requests.post(server, headers=headers, json=data)
-            if not response.ok:
-                response.raise_for_status()
-        except requests.exceptions.RequestException as error:
-            logger.warning(f'Zabbix error: {error}')
+        send_zabbix_request(server, headers, data)
+
+    if username is not None and password is not None:
+        logout_data = {
+            'jsonrpc': '2.0',
+            'method': 'user.logout',
+            'params': [],
+            'id': 1,
+        }
+
+        if not dry_run:
+            send_zabbix_request(server, headers, logout_data)
 
 
 def destroy_monitor(ping_url_or_uuid, config, monitoring_log_level, dry_run):  # pragma: no cover

+ 16 - 9
borgmatic/logger.py

@@ -29,12 +29,13 @@ def interactive_console():
     return sys.stderr.isatty() and os.environ.get('TERM') != 'dumb'
 
 
-def should_do_markup(no_color, configs):
+def should_do_markup(configs, json_enabled):
     '''
-    Given the value of the command-line no-color argument, and a dict of configuration filename to
-    corresponding parsed configuration, determine if we should enable color marking up.
+    Given a dict of configuration filename to corresponding parsed configuration (which already have
+    any command-line overrides applied) and whether JSON output is enabled, determine if we
+    should enable color marking up.
     '''
-    if no_color:
+    if json_enabled:
         return False
 
     if any(config.get('color', True) is False for config in configs.values()):
@@ -256,7 +257,7 @@ class Log_prefix:
         self.original_prefix = get_log_prefix()
         set_log_prefix(self.prefix)
 
-    def __exit__(self, exception, value, traceback):
+    def __exit__(self, exception_type, exception, traceback):
         '''
         Restore any original prefix.
         '''
@@ -265,9 +266,14 @@ class Log_prefix:
 
 class Delayed_logging_handler(logging.handlers.BufferingHandler):
     '''
-    A logging handler that buffers logs and doesn't flush them until explicitly flushed (after target
-    handlers are actually set). It's useful for holding onto logged records from before logging is
-    configured to ensure those records eventually make their way to the relevant logging handlers.
+    A logging handler that buffers logs and doesn't flush them until explicitly flushed (after
+    target handlers are actually set). It's useful for holding onto messages logged before logging
+    is configured, ensuring those records eventually make their way to the relevant logging
+    handlers.
+
+    When flushing, don't forward log records to a target handler if the record's log level is below
+    that of the handler. This recreates the standard logging behavior of, say, logging.DEBUG records
+    getting suppressed if a handler's level is only set to logging.INFO.
     '''
 
     def __init__(self):
@@ -287,7 +293,8 @@ class Delayed_logging_handler(logging.handlers.BufferingHandler):
 
             for record in self.buffer:
                 for target in self.targets:
-                    target.handle(record)
+                    if record.levelno >= target.level:
+                        target.handle(record)
 
             self.buffer.clear()
         finally:

+ 3 - 0
borgmatic/signals.py

@@ -24,6 +24,9 @@ def handle_signal(signal_number, frame):
         logger.critical('Exiting due to TERM signal')
         sys.exit(EXIT_CODE_FROM_SIGNAL + signal.SIGTERM)
     elif signal_number == signal.SIGINT:
+        # Borg doesn't always exit on a SIGINT, so give it a little encouragement.
+        os.killpg(os.getpgrp(), signal.SIGTERM)
+
         raise KeyboardInterrupt()
 
 

+ 1 - 1
docs/Dockerfile

@@ -4,7 +4,7 @@ COPY . /app
 RUN apk add --no-cache py3-pip py3-ruamel.yaml py3-ruamel.yaml.clib
 RUN pip install --break-system-packages --no-cache /app && borgmatic config generate && chmod +r /etc/borgmatic/config.yaml
 RUN borgmatic --help > /command-line.txt \
-    && for action in repo-create transfer create prune compact check delete extract config "config bootstrap" "config generate" "config validate" export-tar mount umount repo-delete restore repo-list list repo-info info break-lock "key export" "key change-passphrase" borg; do \
+    && for action in repo-create transfer create prune compact check delete extract config "config bootstrap" "config generate" "config validate" export-tar mount umount repo-delete restore repo-list list repo-info info break-lock "key export" "key import" "key change-passphrase" recreate borg; do \
            echo -e "\n--------------------------------------------------------------------------------\n" >> /command-line.txt \
            && borgmatic $action --help >> /command-line.txt; done
 RUN /app/docs/fetch-contributors >> /contributors.html

+ 1 - 0
docs/_includes/index.css

@@ -165,6 +165,7 @@ ul {
 }
 li {
 	padding: .25em 0;
+	line-height: 1.5;
 }
 li ul {
         list-style-type: disc;

+ 1 - 1
docs/_includes/layouts/base.njk

@@ -4,7 +4,7 @@
 		<meta charset="utf-8">
 		<meta name="viewport" content="width=device-width, initial-scale=1.0">
                 <meta name="generator" content="{{ eleventy.generator }}">
-		<link rel="icon" href="docs/static/borgmatic.png" type="image/x-icon">
+		<link rel="icon" href="https://torsion.org/borgmatic/docs/static/borgmatic.png" type="image/x-icon">
 		<title>{{ subtitle + ' - ' if subtitle}}{{ title }}</title>
 {%- set css %}
 {% include 'index.css' %}

+ 2 - 3
docs/fetch-contributors

@@ -26,8 +26,7 @@ def list_merged_pulls(url):
 
 
 def list_contributing_issues(url):
-    # labels = bug, design finalized, etc.
-    response = requests.get(f'{url}?labels=19,20,22,23,32,52,53,54', headers={'Accept': 'application/json', 'Content-Type': 'application/json'})
+    response = requests.get(url, headers={'Accept': 'application/json', 'Content-Type': 'application/json'})
 
     if not response.ok:
         response.raise_for_status()
@@ -39,7 +38,7 @@ PULLS_API_ENDPOINT_URLS = (
     'https://projects.torsion.org/api/v1/repos/borgmatic-collective/borgmatic/pulls',
     'https://api.github.com/repos/borgmatic-collective/borgmatic/pulls',
 )
-ISSUES_API_ENDPOINT_URL = 'https://projects.torsion.org/api/v1/repos/borgmatic-collective/borgmatic/issues'
+ISSUES_API_ENDPOINT_URL = 'https://projects.torsion.org/api/v1/repos/borgmatic-collective/borgmatic/issues?state=all'
 RECENT_CONTRIBUTORS_CUTOFF_DAYS = 365
 
 

+ 207 - 51
docs/how-to/add-preparation-and-cleanup-steps-to-backups.md

@@ -7,18 +7,112 @@ eleventyNavigation:
 ---
 ## Preparation and cleanup hooks
 
-If you find yourself performing preparation tasks before your backup runs, or
-cleanup work afterwards, borgmatic hooks may be of interest. Hooks are shell
-commands that borgmatic executes for you at various points as it runs, and
-they're configured in the `hooks` section of your configuration file. But if
-you're looking to backup a database, it's probably easier to use the [database
-backup
+If you find yourself performing preparation tasks before your backup runs or
+doing cleanup work afterwards, borgmatic command hooks may be of interest. These
+are custom shell commands you can configure borgmatic to execute at various
+points as it runs.
+
+(But if you're looking to back up a database, it's probably easier to use the
+[database backup
 feature](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/)
-instead.
+instead.)
+
+<span class="minilink minilink-addedin">New in version 2.0.0 (**not yet
+released**)</span> Command hooks are now configured via a list of `commands:` in
+your borgmatic configuration file. For example:
+
+```yaml
+commands:
+    - before: action
+      when: [create]
+      run:
+          - echo "Before create!"
+    - after: action
+      when:
+          - create
+          - prune
+      run:
+          - echo "After create or prune!"
+    - after: error
+      run:
+          - echo "Something went wrong!"
+```
+
+If you're coming from an older version of borgmatic, there is tooling to help
+you [upgrade your
+configuration](https://torsion.org/borgmatic/docs/how-to/upgrade/#upgrading-your-configuration)
+to this new command hook format.
+
+Note that if a `run:` command contains a special YAML character such as a colon,
+you may need to quote the entire string (or use a [multiline
+string](https://yaml-multiline.info/)) to avoid an error:
+
+```yaml
+commands:
+    - before: action
+      when: [create]
+      run:
+          - "echo Backup: start"
+```
+
+Each command in the `commands:` list has the following options:
+
+ * `before` or `after`: The point in borgmatic's execution before or after which the commands should run, one of:
+    * `action` runs before each action for each repository. This replaces the deprecated `before_create`, `after_prune`, etc.
+    * `repository` runs before or after all actions for each repository. This replaces the deprecated `before_actions` and `after_actions`.
+    * `configuration` runs before or after all actions and repositories in the current configuration file.
+    * `everything` runs before or after all configuration files. Errors here do not trigger `error` hooks or the `fail` state in monitoring hooks. This replaces the deprecated `before_everything` and `after_everything`.
+    * `error` runs after an error occurs—and it's only available for `after`. This replaces the deprecated `on_error` hook.
+ * `when`: Only trigger the hook when borgmatic is run with particular actions (`create`, `prune`, etc.) listed here. Defaults to running for all actions.
+ * `run`: List of one or more shell commands or scripts to run when this command hook is triggered.
+
+An `after` command hook runs even if an error occurs in the corresponding
+`before` hook or between those two hooks. This allows you to perform cleanup
+steps that correspond to `before` preparation commands—even when something goes
+wrong. This is a departure from the way that the deprecated `after_*` hooks
+worked in borgmatic prior to version 2.0.0.
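+
+For instance, you might pair a preparation command with a matching cleanup
+command, relying on the cleanup still running if the backup fails (the mount
+point here is illustrative):
+
+```yaml
+commands:
+    - before: action
+      when: [create]
+      run:
+          - mount /mnt/removable
+    - after: action
+      when: [create]
+      run:
+          # Runs even if the create action errors.
+          - umount /mnt/removable
+```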
+
+Additionally, when command hooks run, they respect the `working_directory`
+option if it is configured, meaning that the hook commands are run in that
+directory.
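+
+As a sketch (the directory and script name are illustrative):
+
+```yaml
+working_directory: /mnt/data
+
+commands:
+    - before: action
+      when: [create]
+      run:
+          # Runs with /mnt/data as the current directory.
+          - ./prepare-backup.sh
+```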
+
+
+### Order of execution
+
+Here's a way of visualizing how all of these command hooks slot into borgmatic's
+execution.
+
+Let's say you've got a borgmatic configuration file with a configured
+repository. And suppose you configure several command hooks and then run
+borgmatic for the `create` and `prune` actions. Here's the order of execution:
+
+ * Run `before: everything` hooks (from all configuration files).
+    * Run `before: configuration` hooks (from the first configuration file).
+        * Run `before: repository` hooks (for the first repository).
+            * Run `before: action` hooks for `create`.
+            * Actually run the `create` action (e.g. `borg create`).
+            * Run `after: action` hooks for `create`.
+            * Run `before: action` hooks for `prune`.
+            * Actually run the `prune` action (e.g. `borg prune`).
+            * Run `after: action` hooks for `prune`.
+        * Run `after: repository` hooks (for the first repository).
+    * Run `after: configuration` hooks (from the first configuration file).
+ * Run `after: everything` hooks (from all configuration files).
+
+This same order of execution extends to multiple repositories and/or
+configuration files.
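+
+As a concrete sketch, a configuration exercising each of these levels might
+look like the following (the echoed messages are illustrative):
+
+```yaml
+commands:
+    - before: everything
+      run:
+          - echo "Before all configuration files"
+    - before: configuration
+      run:
+          - echo "Before this configuration file"
+    - before: repository
+      run:
+          - echo "Before each repository"
+    - before: action
+      when: [create, prune]
+      run:
+          - echo "Before each create or prune action"
+    - after: action
+      when: [create, prune]
+      run:
+          - echo "After each create or prune action"
+    - after: everything
+      run:
+          - echo "After all configuration files"
+```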
+
+
+### Deprecated command hooks
 
-You can specify `before_backup` hooks to perform preparation steps before
+<span class="minilink minilink-addedin">Prior to version 2.0.0</span> The
+command hooks worked a little differently. In these older versions of borgmatic,
+you can specify `before_backup` hooks to perform preparation steps before
 running backups and specify `after_backup` hooks to perform cleanup steps
-afterwards. Here's an example:
+afterwards. These deprecated command hooks still work in version 2.0.0+,
+although see below about a few semantic differences starting in that version.
+
+Here's an example of these deprecated hooks:
 
 ```yaml
 before_backup:
@@ -43,6 +137,15 @@ instance, `before_prune` runs before a `prune` action for a repository, while
 <span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
 these options in the `hooks:` section of your configuration.
 
+<span class="minilink minilink-addedin">New in version 2.0.0</span> An `after_*`
+command hook runs even if an error occurs in the corresponding `before_*` hook
+or between those two hooks. This allows you to perform cleanup steps that
+correspond to `before_*` preparation commands—even when something goes wrong.
+
+<span class="minilink minilink-addedin">New in version 2.0.0</span> When command
+hooks run, they respect the `working_directory` option if it is configured,
+meaning that the hook commands are run in that directory.
+
 <span class="minilink minilink-addedin">New in version 1.7.0</span> The
 `before_actions` and `after_actions` hooks run before/after all the actions
 (like `create`, `prune`, etc.) for each repository. These hooks are a good
@@ -57,49 +160,13 @@ but not if an error occurs in a previous hook or in the backups themselves.
 (Prior to borgmatic 1.6.0, these hooks instead ran once per configuration file
 rather than once per repository.)
 
-
-## Variable interpolation
-
-The before and after action hooks support interpolating particular runtime
-variables into the hook command. Here's an example that assumes you provide a
-separate shell script:
-
-```yaml
-after_prune:
-    - record-prune.sh "{configuration_filename}" "{repository}"
-```
-
-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-this option in the `hooks:` section of your configuration.
-
-In this example, when the hook is triggered, borgmatic interpolates runtime
-values into the hook command: the borgmatic configuration filename and the
-paths of the current Borg repository. Here's the full set of supported
-variables you can use here:
-
- * `configuration_filename`: borgmatic configuration filename in which the
-   hook was defined
- * `log_file`
-   <span class="minilink minilink-addedin">New in version 1.7.12</span>:
-   path of the borgmatic log file, only set when the `--log-file` flag is used
- * `repository`: path of the current repository as configured in the current
-   borgmatic configuration file
- * `repository_label` <span class="minilink minilink-addedin">New in version
-   1.8.12</span>: label of the current repository as configured in the current
-   borgmatic configuration file
-
-Note that you can also interpolate in [arbitrary environment
-variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
-
-
-## Global hooks
-
 You can also use `before_everything` and `after_everything` hooks to perform
 global setup or cleanup:
 
 ```yaml
 before_everything:
     - set-up-stuff-globally
+
 after_everything:
     - clean-up-stuff-globally
 ```
@@ -117,13 +184,102 @@ but only if there is a `create` action. It runs even if an error occurs during
 a backup or a backup hook, but not if an error occurs during a
 `before_everything` hook.
 
+`on_error` hooks run when an error occurs, but only if there is a `create`,
+`prune`, `compact`, or `check` action. For instance, borgmatic can run
+configurable shell commands to fire off custom error notifications or take other
+actions, so you can get alerted as soon as something goes wrong. Here's a
+not-so-useful example:
+
+```yaml
+on_error:
+    - echo "Error while creating a backup or running a backup hook."
+```
+
+<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
+this option in the `hooks:` section of your configuration.
+
+The `on_error` hook supports interpolating particular runtime variables into
+the hook command. Here's an example that assumes you provide a separate shell
+script to handle the alerting:
+
+```yaml
+on_error:
+    - send-text-message.sh
+```
+
+borgmatic does not run `on_error` hooks if an error occurs within a
+`before_everything` or `after_everything` hook.
+
+
+## Variable interpolation
+
+The command hooks support interpolating particular runtime variables into
+the commands that are run. Here are a couple of examples that assume you provide
+separate shell scripts:
 
-## Error hooks
+```yaml
+commands:
+    - after: action
+      when: [prune]
+      run:
+          - record-prune.sh {configuration_filename} {repository}
+    - after: error
+      when: [create]
+      run:
+          - send-text-message.sh {configuration_filename} {repository}
+```
+
+In this example, when the hook is triggered, borgmatic interpolates runtime
+values into each hook command: the borgmatic configuration filename and the
+paths of the current Borg repository.
+
+Here's the full set of supported variables you can use here:
+
+ * `configuration_filename`: borgmatic configuration filename in which the
+   hook was defined
+ * `log_file`
+   <span class="minilink minilink-addedin">New in version 1.7.12</span>:
+   path of the borgmatic log file, only set when the `--log-file` flag is used
+ * `repository`: path of the current repository as configured in the current
+   borgmatic configuration file, if applicable to the current hook
+ * `repository_label` <span class="minilink minilink-addedin">New in version
+   1.8.12</span>: label of the current repository as configured in the current
+   borgmatic configuration file, if applicable to the current hook
+ * `error`: the error message itself, only applies to `error` hooks
+ * `output`: output of the command that failed, only applies to `error` hooks
+   (may be blank if an error occurred without running a command)
+
+Not all command hooks support all variables. For instance, the `everything` and
+`configuration` hooks don't support repository variables because those hooks
+don't run in the context of a single repository. But the deprecated command
+hooks (`before_backup`, `on_error`, etc.) do generally support variable
+interpolation.
+
+borgmatic automatically escapes these interpolated values to prevent shell
+injection attacks. One implication is that you shouldn't wrap the interpolated
+values in your own quotes, as that will interfere with the quoting performed by
+borgmatic and result in your command receiving incorrect arguments. For
+instance, this won't work:
+
+```yaml
+commands:
+    - after: error
+      run:
+          # Don't do this! It won't work, as the {error} value is already quoted.
+          - send-text-message.sh "Uh oh: {error}"
+```
 
-borgmatic also runs `on_error` hooks if an error occurs, either when creating
-a backup or running a backup hook. See the [monitoring and alerting
-documentation](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/)
-for more information.
+Do this instead:
+
+```yaml
+commands:
+    - after: error
+      run:
+          - send-text-message.sh {error}
+```
+
+Note that you can also interpolate [arbitrary environment
+variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
 
 
 ## Hook output

+ 47 - 46
docs/how-to/backup-to-a-removable-drive-or-an-intermittent-server.md

@@ -29,17 +29,14 @@ concept of "soft failure" come in.
 
 This feature leverages [borgmatic command
 hooks](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/),
-so first familiarize yourself with them. The idea is that you write a simple
-test in the form of a borgmatic hook to see if backups should proceed or not.
+so familiarize yourself with them first. The idea is that you write a simple
+test in the form of a borgmatic command hook to see if backups should proceed or
+not.
 
 The way the test works is that if any of your hook commands return a special
 exit status of 75, that indicates to borgmatic that it's a temporary failure,
 and borgmatic should skip all subsequent actions for the current repository.
 
-<span class="minilink minilink-addedin">Prior to version 1.9.0</span> Soft
-failures skipped subsequent actions for *all* repositories in the
-configuration file, rather than just for the current repository.
-
 If you return any status besides 75, then it's a standard success or error.
 (Zero is success; anything else other than 75 is an error).
 
@@ -62,33 +59,37 @@ these options in the `location:` section of your configuration.
 <span class="minilink minilink-addedin">Prior to version 1.7.10</span> Omit
 the `path:` portion of the `repositories` list.
 
-Then, write a `before_backup` hook in that same configuration file that uses
-the external `findmnt` utility to see whether the drive is mounted before
-proceeding.
+Then, make a command hook in that same configuration file that uses the external
+`findmnt` utility to see whether the drive is mounted before proceeding.
+
+```yaml
+commands:
+    - before: repository
+      run:
+          - findmnt /mnt/removable > /dev/null || exit 75
+```
+
+<span class="minilink minilink-addedin">Prior to version 2.0.0</span> Use the
+deprecated `before_actions` hook instead:
 
 ```yaml
-before_backup:
+before_actions:
     - findmnt /mnt/removable > /dev/null || exit 75
 ```
 
 <span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put this
 option in the `hooks:` section of your configuration.
 
+<span class="minilink minilink-addedin">Prior to version 1.7.0</span> Use
+`before_create` or similar instead of `before_actions`, which was introduced in
+borgmatic 1.7.0.
+
 What this does is check if the `findmnt` command errors when probing for a
 particular mount point. If it does error, then it returns exit code 75 to
 borgmatic. borgmatic logs the soft failure, skips all further actions for the
 current repository, and proceeds onward to any other repositories and/or
 configuration files you may have.
 
-If you'd prefer not to use a separate configuration file, and you'd rather
-have multiple repositories in a single configuration file, you can make your
-`before_backup` soft failure test [vary by
-repository](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation).
-That might require calling out to a separate script though.
-
-Note that `before_backup` only runs on the `create` action. See below about
-optionally using `before_actions` instead.
-
 You can imagine a similar check for the sometimes-online server case:
 
 ```yaml
@@ -98,50 +99,50 @@ source_directories:
 repositories:
     - path: ssh://me@buddys-server.org/./backup.borg
 
-before_backup:
-    - ping -q -c 1 buddys-server.org > /dev/null || exit 75
+commands:
+    - before: repository
+      run:
+          - ping -q -c 1 buddys-server.org > /dev/null || exit 75
 ```
 
 Or to only run backups if the battery level is high enough:
 
 ```yaml
-before_backup:
-    - is_battery_percent_at_least.sh 25
+commands:
+    - before: repository
+      run:
+          - is_battery_percent_at_least.sh 25
 ```
 
-(Writing the battery script is left as an exercise to the reader.)
-
-<span class="minilink minilink-addedin">New in version 1.7.0</span> The
-`before_actions` and `after_actions` hooks run before/after all the actions
-(like `create`, `prune`, etc.) for each repository. So if you'd like your soft
-failure command hook to run regardless of action, consider using
-`before_actions` instead of `before_backup`.
+Writing the battery script is left as an exercise to the reader.
 
 
 ## Caveats and details
 
 There are some caveats you should be aware of with this feature.
 
- * You'll generally want to put a soft failure command in the `before_backup`
+ * You'll generally want to put a soft failure command in a `before` command
    hook, so as to gate whether the backup action occurs. While a soft failure is
-   also supported in the `after_backup` hook, returning a soft failure there
+   also supported in an `after` command hook, returning a soft failure there
    won't prevent any actions from occurring, because they've already occurred!
-   Similarly, you can return a soft failure from an `on_error` hook, but at
+   Similarly, you can return a soft failure from an `error` command hook, but at
    that point it's too late to prevent the error.
  * Returning a soft failure does prevent further commands in the same hook from
-   executing. So, like a standard error, it is an "early out". Unlike a standard
+   executing. So, like a standard error, it is an "early out." Unlike a standard
    error, borgmatic does not display it in angry red text or consider it a
    failure.
- * Any given soft failure only applies to the a single borgmatic repository
-   (as of borgmatic 1.9.0). So if you have other repositories you don't want
-   soft-failed, then make your soft fail test [vary by
-   repository](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation)—or
-   put anything that you don't want soft-failed (like always-online cloud
-   backups) in separate configuration files from your soft-failing
-   repositories.
+ * <span class="minilink minilink-addedin">New in version 1.9.0</span> Soft
+   failures in `action` or `before_*` command hooks only skip the current
+   repository rather than all repositories in a configuration file.
+ * If you're writing a soft failure script that you want to vary based on the
+   current repository, for instance so you can have multiple repositories in a
+   single configuration file, have a look at [command hook variable
+   interpolation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation).
+   And there's always still the option of putting anything that you don't want
+   soft-failed (like always-online cloud backups) in separate configuration
+   files from your soft-failing repositories.
  * The soft failure doesn't have to test anything related to a repository. You
-   can even perform a test to make sure that individual source directories are
-   mounted and available. Use your imagination!
- * The soft failure feature also works for before/after hooks for other
-   actions as well. But it is not implemented for `before_everything` or
-   `after_everything`.
+   can even test that individual source directories are mounted and
+   available. Use your imagination!
+ * Soft failures are not currently implemented for `everything`,
+   `before_everything`, or `after_everything` command hooks.

+ 35 - 17
docs/how-to/backup-your-databases.md

@@ -193,14 +193,14 @@ mysql_databases:
 
 ### Containers
 
-If your database is running within a container and borgmatic is too, no
+If your database server is running within a container and borgmatic is too, no
 problem—configure borgmatic to connect to the container's name on its exposed
 port. For instance:
 
 ```yaml
 postgresql_databases:
     - name: users
-      hostname: your-database-container-name
+      hostname: your-database-server-container-name
       port: 5433
       username: postgres
       password: trustsome1
@@ -210,21 +210,22 @@ postgresql_databases:
 these options in the `hooks:` section of your configuration.
 
 But what if borgmatic is running on the host? You can still connect to a
-database container if its ports are properly exposed to the host. For
+database server container if its ports are properly exposed to the host. For
 instance, when running the database container, you can specify `--publish
 127.0.0.1:5433:5432` so that it exposes the container's port 5432 to port 5433
-on the host (only reachable on localhost, in this case). Or the same thing
-with Docker Compose:
+on the host (only reachable on localhost, in this case). Or the same thing with
+Docker Compose:
 
 ```yaml
 services:
-   your-database-container-name:
+   your-database-server-container-name:
        image: postgres
        ports:
            - 127.0.0.1:5433:5432
 ```
 
-And then you can connect to the database from borgmatic running on the host:
+And then you can configure borgmatic running on the host to connect to the
+database:
 
 ```yaml
 hooks:
@@ -240,9 +241,9 @@ Alter the ports in these examples to suit your particular database system.
 
 Normally, borgmatic dumps a database by running a database dump command (e.g.
 `pg_dump`) on the host or wherever borgmatic is running, and this command
-connects to your containerized database via the given `hostname` and `port`.
-But if you don't have any database dump commands installed on your host and
-you'd rather use the commands inside your database container itself, borgmatic
+connects to your containerized database via the given `hostname` and `port`. But
+if you don't have any database dump commands installed on your host and you'd
+rather use the commands inside your running database container itself, borgmatic
 supports that too. For that, configure borgmatic to `exec` into your container
 to run the dump command.
 
@@ -259,9 +260,10 @@ hooks:
           pg_dump_command: docker exec my_pg_container pg_dump
 ```
 
-... where `my_pg_container` is the name of your database container. In this
-example, you'd also need to set the `pg_restore_command` and `psql_command`
-options.
+... where `my_pg_container` is the name of your running database container.
+Running `pg_dump` this way takes advantage of the localhost "trust"
+authentication within that container. In this example, you'd also need to set
+the `pg_restore_command` and `psql_command` options.
 
 If you choose to use the `pg_dump` command within the container, and you're
 using the `directory` format in particular, you'll also need to mount the
@@ -280,6 +282,24 @@ services:
       - /run/user/1000:/run/user/1000
 ```
 
+Another variation: If you're running borgmatic on the host but want to spin up a
+temporary `pg_dump` container whenever borgmatic dumps a database, for
+instance to make use of a `pg_dump` version not present on the host, try
+something like this:
+
+```yaml
+hooks:
+    postgresql_databases:
+        - name: users
+          hostname: your-database-hostname
+          username: postgres
+          password: trustsome1
+          pg_dump_command: docker run --rm --env PGPASSWORD postgres:17-alpine pg_dump
+```
+
+The `--env PGPASSWORD` is necessary here for borgmatic to provide your database
+password to the temporary `pg_dump` container.
+
 Similar command override options are available for (some of) the other
 supported database types as well. See the [configuration
 reference](https://torsion.org/borgmatic/docs/reference/configuration/) for
@@ -309,10 +329,8 @@ hooks:
 ### External passwords
 
 If you don't want to keep your database passwords in your borgmatic
-configuration file, you can instead pass them in via [environment
-variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/)
-or command-line [configuration
-overrides](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-overrides).
+configuration file, you can instead pass them in [from external credential
+sources](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
 
 
 ### Configuration backups

+ 77 - 8
docs/how-to/make-per-application-backups.md

@@ -482,16 +482,89 @@ applications, but then set the repository for each application at runtime. Or
 you might want to try a variant of an option for testing purposes without
 actually touching your configuration file.
 
+<span class="minilink minilink-addedin">New in version 2.0.0</span>
 Whatever the reason, you can override borgmatic configuration options at the
-command-line via the `--override` flag. Here's an example:
+command-line, as there's a command-line flag corresponding to every
+configuration option (with its underscores converted to dashes).
+
+For instance, to override the `compression` configuration option, use the
+corresponding `--compression` flag on the command-line:
+
+```bash
+borgmatic create --compression zstd
+```
+
+What this does is load your given configuration files and for each one, disregard
+the configured value for the `compression` option and use the value given on the
+command-line instead—but just for the duration of the borgmatic run.
+
+You can override nested configuration options too by separating such option
+names with a period. For instance:
+
+```bash
+borgmatic create --bootstrap.store-config-files false
+```
+
+You can even set complex option data structures by using inline YAML syntax. For
+example, set the `repositories` option with a YAML list of key/value pairs:
+
+```bash
+borgmatic create --repositories "[{path: /mnt/backup, label: local}]"
+```
+
+If your override value contains characters like colons or spaces, then you'll
+need to use quotes for it to parse correctly.
+
+You can also set individual nested options within existing list elements:
+
+```bash
+borgmatic create --repositories[0].path /mnt/backup
+```
+
+This updates the `path` option for the first repository in `repositories`.
+Change the `[0]` index as needed to address different list elements. And note
+that this only works for elements already set in configuration; you can't append
+new list elements from the command-line.
+
+See the [command-line reference
+documentation](https://torsion.org/borgmatic/docs/reference/command-line/) for
+the full set of available arguments, including examples of each for the complex
+values.
+
+There are a handful of configuration options that don't have corresponding
+command-line flags at the global scope, but instead have flags within individual
+borgmatic actions. For instance, the `list_details` option can be overridden by
+the `--list` flag that's only present on particular actions. Similarly with
+`progress` and `--progress`, `statistics` and `--stats`, and `match_archives`
+and `--match-archives`.
+
+Also note that if you want to pass a command-line flag itself as a value to one
+of these override flags, that may not work. For instance, specifying
+`--extra-borg-options.create --no-cache-sync` results in an error, because
+`--no-cache-sync` gets interpreted as a borgmatic option (which in this case
+doesn't exist) rather than a Borg option.
+
+An alternative to command-line overrides is passing in your values via
+[environment
+variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
+
+
+### Deprecated overrides
+
+<span class="minilink minilink-addedin">Prior to version 2.0.0</span>
+Configuration overrides were performed with an `--override` flag. You can still
+use `--override` with borgmatic 2.0.0+, but it's deprecated in favor of the new
+command-line flags described above.
+
+Here's an example of `--override`:
 
 ```bash
 borgmatic create --override remote_path=/usr/local/bin/borg1
 ```
 
-What this does is load your configuration files and for each one, disregard
-the configured value for the `remote_path` option and use the value of
-`/usr/local/bin/borg1` instead.
+What this does is load your given configuration files and for each one, disregard
+the configured value for the `remote_path` option and use the value given on the
+command-line instead—but just for the duration of the borgmatic run.
 
 You can even override nested values or multiple values at once. For instance:
 
@@ -540,10 +613,6 @@ reference](https://torsion.org/borgmatic/docs/reference/configuration/) for
 which options are list types. (YAML list values look like `- this` with an
 indentation and a leading dash.)
 
-An alternate to command-line overrides is passing in your values via
-[environment
-variables](https://torsion.org/borgmatic/docs/how-to/provide-your-passwords/).
-
 
 ## Constant interpolation
 

+ 86 - 139
docs/how-to/monitor-your-backups.md

@@ -14,140 +14,55 @@ and alerting comes in.
 
 There are several different ways you can monitor your backups and find out
 whether they're succeeding. Which of these you choose to do is up to you and
-your particular infrastructure.
-
-### Job runner alerts
-
-The easiest place to start is with failure alerts from the [scheduled job
-runner](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#autopilot)
-(cron, systemd, etc.) that's running borgmatic. But note that if the job
-doesn't even get scheduled (e.g. due to the job runner not running), you
-probably won't get an alert at all! Still, this is a decent first line of
-defense, especially when combined with some of the other approaches below.
-
-### Commands run on error
-
-The `on_error` hook allows you to run an arbitrary command or script when
-borgmatic itself encounters an error running your backups. So for instance,
-you can run a script to send yourself a text message alert. But note that if
-borgmatic doesn't actually run, this alert won't fire.  See [error
-hooks](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#error-hooks)
-below for how to configure this.
-
-### Third-party monitoring services
-
-borgmatic integrates with these monitoring services and libraries, pinging
-them as backups happen:
-
- * [Apprise](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook)
- * [Cronhub](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronhub-hook)
- * [Cronitor](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronitor-hook)
- * [Grafana Loki](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#loki-hook)
- * [Healthchecks](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#healthchecks-hook)
- * [ntfy](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook)
- * [PagerDuty](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook)
- * [Pushover](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pushover-hook)
- * [Sentry](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#sentry-hook)
- * [Uptime Kuma](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#uptime-kuma-hook)
- * [Zabbix](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#zabbix-hook)
-
-The idea is that you'll receive an alert when something goes wrong or when the
-service doesn't hear from borgmatic for a configured interval (if supported).
-See the documentation links above for configuration information.
-
-While these services and libraries offer different features, you probably only
-need to use one of them at most.
-
-
-### Third-party monitoring software
-
-You can use traditional monitoring software to consume borgmatic JSON output
-and track when the last successful backup occurred. See [scripting
-borgmatic](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#scripting-borgmatic)
-below for how to configure this.
-
-### Borg hosting providers
-
-Most [Borg hosting
-providers](https://torsion.org/borgmatic/#hosting-providers) include
-monitoring and alerting as part of their offering. This gives you a dashboard
-to check on all of your backups, and can alert you if the service doesn't hear
-from borgmatic for a configured interval.
-
-### Consistency checks
-
-While not strictly part of monitoring, if you want confidence that your
-backups are not only running but are restorable as well, you can configure
-particular [consistency
-checks](https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#consistency-check-configuration)
-or even script full [extract
-tests](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/).
-
-
-## Error hooks
-
-When an error occurs during a `create`, `prune`, `compact`, or `check` action,
-borgmatic can run configurable shell commands to fire off custom error
-notifications or take other actions, so you can get alerted as soon as
-something goes wrong. Here's a not-so-useful example:
-
-```yaml
-on_error:
-    - echo "Error while creating a backup or running a backup hook."
-```
-
-<span class="minilink minilink-addedin">Prior to version 1.8.0</span> Put
-this option in the `hooks:` section of your configuration.
-
-The `on_error` hook supports interpolating particular runtime variables into
-the hook command. Here's an example that assumes you provide a separate shell
-script to handle the alerting:
-
-```yaml
-on_error:
-    - send-text-message.sh {configuration_filename} {repository}
-```
-
-In this example, when the error occurs, borgmatic interpolates runtime values
-into the hook command: the borgmatic configuration filename and the path of
-the repository. Here's the full set of supported variables you can use here:
-
- * `configuration_filename`: borgmatic configuration filename in which the
-   error occurred
- * `repository`: path of the repository in which the error occurred (may be
-   blank if the error occurs in a hook)
- * `error`: the error message itself
- * `output`: output of the command that failed (may be blank if an error
-   occurred without running a command)
-
-Note that borgmatic runs the `on_error` hooks only for `create`, `prune`,
-`compact`, or `check` actions/hooks in which an error occurs and not other
-actions. borgmatic does not run `on_error` hooks if an error occurs within a
-`before_everything` or `after_everything` hook. For more about hooks, see the
-[borgmatic hooks
-documentation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/),
-especially the security information.
-
-<span class="minilink minilink-addedin">New in version 1.8.7</span> borgmatic
-automatically escapes these interpolated values to prevent shell injection
-attacks. One implication of this change is that you shouldn't wrap the
-interpolated values in your own quotes, as that will interfere with the
-quoting performed by borgmatic and result in your command receiving incorrect
-arguments. For instance, this won't work:
-
-
-```yaml
-on_error:
-    # Don't do this! It won't work, as the {error} value is already quoted.
-    - send-text-message.sh "Uh oh: {error}"
-```
-
-Do this instead:
-
-```yaml
-on_error:
-    - send-text-message.sh {error}
-```
+your particular infrastructure:
+
+ * **Job runner alerts**: The easiest place to start is with failure alerts from
+   the [scheduled job
+   runner](https://torsion.org/borgmatic/docs/how-to/set-up-backups/#autopilot)
+   (cron, systemd, etc.) that's running borgmatic. But note that if the job
+   doesn't even get scheduled (e.g. due to the job runner not running), you
+   probably won't get an alert at all! Still, this is a decent first line of
+   defense, especially when combined with some of the other approaches below.
+ * **Third-party monitoring services:** borgmatic integrates with these monitoring
+   services and libraries, pinging them as backups happen. The idea is that
+   you'll receive an alert when something goes wrong or when the service doesn't
+   hear from borgmatic for a configured interval (if supported). While these
+   services and libraries offer different features, you probably only need to
+   use one of them at most. See these documentation links for configuration
+   information:
+     * [Apprise](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#apprise-hook)
+     * [Cronhub](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronhub-hook)
+     * [Cronitor](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#cronitor-hook)
+     * [Grafana Loki](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#loki-hook)
+     * [Healthchecks](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#healthchecks-hook)
+     * [ntfy](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#ntfy-hook)
+     * [PagerDuty](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pagerduty-hook)
+     * [Pushover](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#pushover-hook)
+     * [Sentry](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#sentry-hook)
+     * [Uptime Kuma](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#uptime-kuma-hook)
+     * [Zabbix](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#zabbix-hook)
+ * **Third-party monitoring software:** You can use traditional monitoring
+   software to consume borgmatic JSON output and track when the last successful
+   backup occurred. See [scripting
+   borgmatic](https://torsion.org/borgmatic/docs/how-to/monitor-your-backups/#scripting-borgmatic)
+   below for how to configure this.
+ * **Borg hosting providers:** Some [Borg hosting
+   providers](https://torsion.org/borgmatic/#hosting-providers) include
+   monitoring and alerting as part of their offering. This gives you a dashboard
+   to check on all of your backups, and can alert you if the service doesn't
+   hear from borgmatic for a configured interval.
+ * **Consistency checks:** While not strictly part of monitoring, if you want
+   confidence that your backups are not only running but are restorable as well,
+   you can configure particular [consistency
+   checks](https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/#consistency-check-configuration)
+   or even script full [extract
+   tests](https://torsion.org/borgmatic/docs/how-to/extract-a-backup/).
+ * **Commands run on error:** borgmatic's command hooks support running
+   arbitrary commands or scripts when borgmatic itself encounters an error
+   running your backups. So for instance, you can run a script to send yourself
+   a text message alert. But note that if borgmatic doesn't actually run, this
+   alert won't fire. See the [documentation on command hooks](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/)
+   for details.
 
 
 ## Healthchecks hook
@@ -292,6 +207,27 @@ If you have any issues with the integration, [please contact
 us](https://torsion.org/borgmatic/#support-and-contributing).
 
 
+### Sending logs
+
+<span class="minilink minilink-addedin">New in version 1.9.14</span> borgmatic
+logs are included in the payload data sent to PagerDuty. This means that
+(truncated) borgmatic logs, including error messages, show up in the PagerDuty
+incident UI and corresponding notification emails.
+
+You can customize the verbosity of the logs that are sent with borgmatic's
+`--monitoring-verbosity` flag. The `--list` and `--stats` flags may also be of
+use. See `borgmatic create --help` for more information.
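For example, a scheduled job might combine these flags like so (the flag values here are illustrative; adjust them to taste):

```shell
# Create backups while sending info-level logs to configured monitoring
# hooks, including per-file listings and archive statistics.
borgmatic create --monitoring-verbosity 1 --list --stats
```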
+
+If you don't want any logs sent, you can disable this feature by setting
+`send_logs` to `false`:
+
+```yaml
+pagerduty:
+    integration_key: a177cad45bd374409f78906a810a3074
+    send_logs: false
+```
+
+
 ## Pushover hook
 
 <span class="minilink minilink-addedin">New in version 1.9.2</span>
@@ -710,7 +646,10 @@ zabbix:
         - fail
 ```
 
-This hook requires the Zabbix server be running version 7.0+
+This hook requires that the Zabbix server be running version 7.0.
+
+<span class="minilink minilink-addedin">New in version 1.9.3</span> Zabbix 7.2+
+is supported as well.
 
 
 ### Authentication methods
@@ -721,11 +660,19 @@ Authentication can be accomplished via `api_key` or both `username` and
 
 ### Items
 
-The item to be updated can be chosen by either declaring the `itemid` or both
-`host` and `key`. If all three are declared, only `itemid` is used.
+borgmatic writes its monitoring updates to a particular Zabbix item, which
+you'll need to create in advance. In the Zabbix web UI, [make a new item with a
+Type of "Zabbix
+trapper"](https://www.zabbix.com/documentation/current/en/manual/config/items/itemtypes/trapper)
+and a named Key. The "Type of information" for the item should be "Text", and
+"History" designates how much data you want to retain.
+
+When configuring borgmatic with this item to be updated, you can either declare
+the `itemid` or both `host` and `key`. If all three are declared, only `itemid`
+is used.
 
-Keep in mind that `host` is referring to the "Host name" on the Zabbix server
-and not the "Visual name".
+Keep in mind that `host` refers to the "Host name" on the Zabbix server and not
+the "Visual name".
 
 
 ## Scripting borgmatic

+ 230 - 35
docs/how-to/provide-your-passwords.md

@@ -19,6 +19,7 @@ encryption_passphrase: yourpassphrase
 But if you'd rather store them outside of borgmatic, whether for convenience
 or security reasons, read on.
 
+
 ### Delegating to another application
 
 borgmatic supports calling another application such as a password manager to 
@@ -31,71 +32,262 @@ to provide the passphrase:
 encryption_passcommand: pass path/to/borg-passphrase
 ```
 
-Another example for [KeePassXC](https://keepassxc.org/):
-
-```yaml
-encryption_passcommand: keepassxc-cli show --show-protected --attributes Password credentials.kdbx borg_passphrase
-```
-
-... where `borg_passphrase` is the title of the KeePassXC entry containing your
-Borg encryption passphrase in its `Password` field.
-
 <span class="minilink minilink-addedin">New in version 1.9.9</span> Instead of
 letting Borg run the passcommand—potentially multiple times since borgmatic runs
 Borg multiple times—borgmatic now runs the passcommand itself and passes the
-resulting passprhase securely to Borg via an anonymous pipe. This means you
+resulting passphrase securely to Borg via an anonymous pipe. This means you
 should only ever get prompted for your password manager's passphrase at most
 once per borgmatic run.
 
 
-### Using systemd service credentials
+### systemd service credentials
 
-Borgmatic supports using encrypted [credentials](https://systemd.io/CREDENTIALS/).
-
-Save your password as an encrypted credential to `/etc/credstore.encrypted/borgmatic.pw`, e.g.,
+borgmatic supports reading encrypted [systemd
+credentials](https://systemd.io/CREDENTIALS/). To use this feature, start by
+saving your password as an encrypted credential to
+`/etc/credstore.encrypted/borgmatic.pw`, e.g.,
 
+```bash
+systemd-ask-password -n | systemd-creds encrypt - /etc/credstore.encrypted/borgmatic.pw
 ```
-# systemd-ask-password -n | systemd-creds encrypt - /etc/credstore.encrypted/borgmatic.pw
+
+Then use the following in your configuration file:
+
+```yaml
+encryption_passphrase: "{credential systemd borgmatic.pw}"
 ```
 
-Then uncomment or use the following in your configuration file:
+<span class="minilink minilink-addedin">Prior to version 1.9.10</span> You can
+accomplish the same thing with this configuration:
 
 ```yaml
-encryption_passcommand: "cat ${CREDENTIALS_DIRECTORY}/borgmatic.pw"
+encryption_passcommand: cat ${CREDENTIALS_DIRECTORY}/borgmatic.pw
 ```
 
 Note that the name `borgmatic.pw` is hardcoded in the systemd service file.
 
-To use multiple different passwords, save them as encrypted credentials to `/etc/credstore.encrypted/borgmatic/`, e.g.,
+The `{credential ...}` syntax works for several different options in a borgmatic
+configuration file besides just `encryption_passphrase`. For instance, the
+username, password, and API token options within database and monitoring hooks
+support `{credential ...}`:
 
+```yaml
+postgresql_databases:
+    - name: invoices
+      username: postgres
+      password: "{credential systemd borgmatic_db1}"
 ```
-# mkdir /etc/credstore.encrypted/borgmatic
-# systemd-ask-password -n | systemd-creds encrypt --name=borgmatic_backupserver1 - /etc/credstore.encrypted/borgmatic/backupserver1
-# systemd-ask-password -n | systemd-creds encrypt --name=borgmatic_pw2 - /etc/credstore.encrypted/borgmatic/pw2
+
+For specifics about which options are supported, see the
+[configuration
+reference](https://torsion.org/borgmatic/docs/reference/configuration/).
+
+To use these credentials, you'll need to modify the borgmatic systemd service
+file to support loading multiple credentials (assuming you need to load more
+than one or anything not named `borgmatic.pw`).
+
+Start by saving each encrypted credential to
+`/etc/credstore.encrypted/borgmatic/`. E.g.,
+
+```bash
+mkdir /etc/credstore.encrypted/borgmatic
+systemd-ask-password -n | systemd-creds encrypt --name=borgmatic_backupserver1 - /etc/credstore.encrypted/borgmatic/backupserver1
+systemd-ask-password -n | systemd-creds encrypt --name=borgmatic_pw2 - /etc/credstore.encrypted/borgmatic/pw2
 ...
 ```
 
-Ensure that the file names, (e.g. `backupserver1`) match the corresponding part of
-the `--name` option *after* the underscore (_), and that the part *before* 
+Ensure that the file names (e.g. `backupserver1`) match the corresponding part
+of the `--name` option *after* the underscore (_), and that the part *before*
 the underscore matches the directory name (e.g. `borgmatic`).
 
 Then, uncomment the appropriate line in the systemd service file:
 
 ```
-# systemctl edit borgmatic.service
+systemctl edit borgmatic.service
 ...
 # Load multiple encrypted credentials.
 LoadCredentialEncrypted=borgmatic:/etc/credstore.encrypted/borgmatic/
 ```
 
-Finally, use the following in your configuration file:
+Finally, use something like the following in your borgmatic configuration file
+for each option value you'd like to load from systemd:
+
+```yaml
+encryption_passphrase: "{credential systemd borgmatic_backupserver1}"
+```
+
+<span class="minilink minilink-addedin">Prior to version 1.9.10</span> Use the
+following instead, but only for the `encryption_passcommand` option and
+not other options:
+
+```yaml
+encryption_passcommand: cat ${CREDENTIALS_DIRECTORY}/borgmatic_backupserver1
+```
+
+Adjust `borgmatic_backupserver1` according to the name of the credential and the
+directory set in the service file.
+
+Be aware that when using this systemd `{credential ...}` feature, you may no
+longer be able to run certain borgmatic actions outside of the systemd service,
+as the credentials are only available from within the context of that service.
+So for instance, `borgmatic list` necessarily relies on the
+`encryption_passphrase` in order to access the Borg repository, but `list`
+shouldn't need to load any credentials for your database or monitoring hooks.
+
+The one exception is `borgmatic config validate`, which doesn't actually load
+any credentials and should continue working anywhere.
+
+
+### Container secrets
+
+<span class="minilink minilink-addedin">New in version 1.9.11</span> When
+running inside a container, borgmatic can read [Docker
+secrets](https://docs.docker.com/compose/how-tos/use-secrets/) and [Podman
+secrets](https://www.redhat.com/en/blog/new-podman-secrets-command). Creating
+those secrets and passing them into your borgmatic container is outside the
+scope of this documentation, but here's a simple example of that with [Docker
+Compose](https://docs.docker.com/compose/):
+
+```yaml
+services:
+  borgmatic:
+    # Use the actual image name of your borgmatic container here.
+    image: borgmatic:latest
+    secrets:
+      - borgmatic_passphrase
+secrets:
+  borgmatic_passphrase:
+    file: /etc/borgmatic/passphrase.txt
+```
+
+This assumes there's a file on the host at `/etc/borgmatic/passphrase.txt`
+containing your passphrase. Docker or Podman mounts the contents of that file
+into a secret named `borgmatic_passphrase` in the borgmatic container at
+`/run/secrets/`.
+
+Once your container secret is in place, you can consume it within your borgmatic
+configuration file:
+
+```yaml
+encryption_passphrase: "{credential container borgmatic_passphrase}"
+```
+
+This reads the secret securely from a file mounted at
+`/run/secrets/borgmatic_passphrase` within the borgmatic container.
+
+The `{credential ...}` syntax works for several different options in a borgmatic
+configuration file besides just `encryption_passphrase`. For instance, the
+username, password, and API token options within database and monitoring hooks
+support `{credential ...}`:
+
+```yaml
+postgresql_databases:
+    - name: invoices
+      username: postgres
+      password: "{credential container borgmatic_db1}"
+```
+
+For specifics about which options are supported, see the
+[configuration
+reference](https://torsion.org/borgmatic/docs/reference/configuration/).
+
+You can also optionally override the `/run/secrets` directory from which
+borgmatic reads secrets inside a container:
+
+```yaml
+container:
+    secrets_directory: /path/to/secrets
+```
+
+But you should only need to do this for development or testing purposes.
+
+
+### KeePassXC passwords
+
+<span class="minilink minilink-addedin">New in version 1.9.11</span> borgmatic
+supports reading passwords from the [KeePassXC](https://keepassxc.org/) password
+manager. To use this feature, start by creating an entry in your KeePassXC
+database, putting your password into the "Password" field of that entry and
+making sure it's saved.
+
+Then, you can consume that password in your borgmatic configuration file. For
+instance, if the entry's title is "borgmatic" and your KeePassXC database is
+located at `/etc/keys.kdbx`, do this:
+
+```yaml
+encryption_passphrase: "{credential keepassxc /etc/keys.kdbx borgmatic}"
+```
+
+But if the entry's title is multiple words like `borg pw`, you'll
+need to quote it:
 
+```yaml
+encryption_passphrase: "{credential keepassxc /etc/keys.kdbx 'borg pw'}"
 ```
-encryption_passcommand: "cat ${CREDENTIALS_DIRECTORY}/borgmatic_backupserver1"
+
+With this in place, borgmatic runs the `keepassxc-cli` command to retrieve the
+passphrase on demand. But note that `keepassxc-cli` will prompt for its own
+passphrase in order to unlock its database, so be prepared to enter it when
+running borgmatic.
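For reference, retrieving the passphrase this way is roughly equivalent to running `keepassxc-cli` yourself; the database path and entry title below are the same example values used above:

```shell
# Roughly what borgmatic runs on your behalf: show the "Password"
# attribute of the "borgmatic" entry in the given KeePassXC database.
keepassxc-cli show --show-protected --attributes Password /etc/keys.kdbx borgmatic
```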
+
+The `{credential ...}` syntax works for several different options in a borgmatic
+configuration file besides just `encryption_passphrase`. For instance, the
+username, password, and API token options within database and monitoring hooks
+support `{credential ...}`:
+
+```yaml
+postgresql_databases:
+    - name: invoices
+      username: postgres
+      password: "{credential keepassxc /etc/keys.kdbx database}"
 ```
 
-Adjust `borgmatic_backupserver1` according to the name given to the credential 
-and the directory set in the service file.
+For specifics about which options are supported, see the
+[configuration
+reference](https://torsion.org/borgmatic/docs/reference/configuration/).
+
+You can also optionally override the `keepassxc-cli` command that borgmatic calls to load
+passwords:
+
+```yaml
+keepassxc:
+    keepassxc_cli_command: /usr/local/bin/keepassxc-cli
+```
+
+
+### File-based credentials
+
+<span class="minilink minilink-addedin">New in version 1.9.11</span> borgmatic
+supports reading credentials from arbitrary file paths. To use this feature,
+start by writing your credential into a file that borgmatic has permission to
+read. Take care not to include anything in the file other than your credential.
+(borgmatic is smart enough to strip off a trailing newline though.)
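As a sketch of preparing such a file (the path here is just an example), note that a trailing newline from `printf` or your editor is harmless, since borgmatic strips it when reading the credential:

```shell
# Write a credential file readable only by its owner.
printf 'yourpassphrase\n' > /tmp/borgmatic-credential.txt
chmod 600 /tmp/borgmatic-credential.txt

# borgmatic strips the trailing newline when reading the file; command
# substitution below does the same, so the secret round-trips cleanly.
secret="$(cat /tmp/borgmatic-credential.txt)"
echo "$secret"
```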
+
+You can consume that credential file in your borgmatic configuration. For
+instance, if your credential file is at `/credentials/borgmatic.txt`, do this:
+
+```yaml
+encryption_passphrase: "{credential file /credentials/borgmatic.txt}"
+```
+
+With this in place, borgmatic reads the credential from the file path.
+
+The `{credential ...}` syntax works for several different options in a borgmatic
+configuration file besides just `encryption_passphrase`. For instance, the
+username, password, and API token options within database and monitoring hooks
+support `{credential ...}`:
+
+```yaml
+postgresql_databases:
+    - name: invoices
+      username: postgres
+      password: "{credential file /credentials/database.txt}"
+```
+
+For specifics about which options are supported, see the
+[configuration
+reference](https://torsion.org/borgmatic/docs/reference/configuration/).
+
 
 ### Environment variable interpolation
 
@@ -103,7 +295,15 @@ and the directory set in the service file.
 supports interpolating arbitrary environment variables directly into option
 values in your configuration file. That means you can instruct borgmatic to
 pull your repository passphrase, your database passwords, or any other option
-values from environment variables. For instance:
+values from environment variables.
+
+Be aware though that environment variables may be less secure than some of the
+other approaches above for getting credentials into borgmatic. That's because
+environment variables may be visible from within child processes and/or OS-level
+process metadata.
+
+Here's an example of using an environment variable from borgmatic's
+configuration file:
 
 ```yaml
 encryption_passphrase: ${YOUR_PASSPHRASE}
@@ -165,6 +365,7 @@ can escape it with a backslash. For instance, if your password is literally
 encryption_passphrase: \${A}@!
 ```
 
+
 ## Related features
 
 Another way to override particular options within a borgmatic configuration
@@ -177,9 +378,3 @@ Additionally, borgmatic action hooks support their own [variable
 interpolation](https://torsion.org/borgmatic/docs/how-to/add-preparation-and-cleanup-steps-to-backups/#variable-interpolation),
 although in that case it's for particular borgmatic runtime values rather than
 (only) environment variables.
-
-Lastly, if you do want to specify your passhprase directly within borgmatic
-configuration, but you'd like to keep it in a separate file from your main
-configuration, you can [use a configuration include or a merge
-include](https://torsion.org/borgmatic/docs/how-to/make-per-application-backups/#configuration-includes)
-to pull in an external password.

+ 14 - 1
docs/how-to/set-up-backups.md

@@ -296,6 +296,20 @@ skip_actions:
     - compact
 ```
 
+### Disabling default actions
+
+By default, running `borgmatic` without any arguments will perform the default
+backup actions (create, prune, compact, and check). If you want to disable this
+behavior and require explicit actions to be specified, add the following to
+your configuration:
+
+```yaml
+default_actions: false
+```
+
+With this setting, running `borgmatic` without arguments will show the help
+message instead of performing any actions.
+
 
 ## Autopilot
 
@@ -311,7 +325,6 @@ Then, from the directory where you downloaded it:
 
 ```bash
 sudo mv borgmatic /etc/cron.d/borgmatic
-sudo chmod +x /etc/cron.d/borgmatic
 ```
 
 If borgmatic is installed at a different location than

+ 36 - 20
docs/how-to/snapshot-your-filesystems.md

@@ -54,8 +54,8 @@ You have a couple of options for borgmatic to find and backup your ZFS datasets:
  * For any dataset you'd like backed up, add its mount point to borgmatic's
    `source_directories` option.
  * <span class="minilink minilink-addedin">New in version 1.9.6</span> Or
-   include the mount point with borgmatic's `patterns` or `patterns_from`
-   options.
+   include the mount point as a root pattern with borgmatic's `patterns` or
+   `patterns_from` options.
  * Or set the borgmatic-specific user property
    `org.torsion.borgmatic:backup=auto` onto your dataset, e.g. by running `zfs
    set org.torsion.borgmatic:backup=auto datasetname`. Then borgmatic can find
@@ -65,6 +65,11 @@ If you have multiple borgmatic configuration files with ZFS enabled, and you'd
 like particular datasets to be backed up only for particular configuration
 files, use the `source_directories` option instead of the user property.
 
+<span class="minilink minilink-addedin">New in version 1.9.11</span> borgmatic
+won't snapshot datasets with the `canmount=off` property, which is often set on
+datasets that only serve as a container for other datasets. Use `zfs get
+canmount datasetname` to see the `canmount` value for a dataset.
+
 During a backup, borgmatic automatically snapshots these discovered datasets
 (non-recursively), temporarily mounts the snapshots within its [runtime
 directory](https://torsion.org/borgmatic/docs/how-to/backup-your-databases/#runtime-directory),
@@ -143,38 +148,40 @@ feedback](https://torsion.org/borgmatic/#issues) you have on this feature.
 
 #### Subvolume discovery
 
-For any subvolume you'd like backed up, add its path to borgmatic's
-`source_directories` option.
+For any read-write subvolume you'd like backed up, add its mount point path to
+borgmatic's `source_directories` option. Btrfs does not support snapshotting
+read-only subvolumes.
 
 <span class="minilink minilink-addedin">New in version 1.9.6</span> Or include
-the mount point with borgmatic's `patterns` or `patterns_from` options.
+the mount point as a root pattern with borgmatic's `patterns` or `patterns_from`
+options.
 
 During a backup, borgmatic snapshots these subvolumes (non-recursively) and
 includes the snapshotted files in the paths sent to Borg. borgmatic is also
 responsible for cleaning up (deleting) these snapshots after a backup completes.
 
 borgmatic is smart enough to look at the parent (and grandparent, etc.)
-directories of each of your `source_directories` to discover any subvolumes.
-For instance, let's say you add `/var/log` and `/var/lib` to your source
-directories, but `/var` is a subvolume. borgmatic will discover that and
-snapshot `/var` accordingly. This also works even with nested subvolumes;
+directories of each of your `source_directories` to discover any subvolumes. For
+instance, let's say you add `/var/log` and `/var/lib` to your source
+directories, but `/var` is a subvolume mount point. borgmatic will discover that
+and snapshot `/var` accordingly. This also works even with nested subvolumes;
 borgmatic selects the subvolume that's the "closest" parent to your source
 directories.
 
 <span class="minilink minilink-addedin">New in version 1.9.6</span> When using
 [patterns](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-help-patterns),
 the initial portion of a pattern's path that you intend borgmatic to match
-against a subvolume can't have globs or other non-literal characters in it—or it
-won't actually match. For instance, a subvolume of `/var` would match a pattern
-of `+ fm:/var/*/data`, but borgmatic isn't currently smart enough to match
-`/var` to a pattern like `+ fm:/v*/lib/data`.
-
-Additionally, borgmatic rewrites the snapshot file paths so that they appear
-at their original subvolume locations in a Borg archive. For instance, if your
-subvolume exists at `/var/subvolume`, then the snapshotted files will appear
+against a subvolume mount point can't have globs or other non-literal characters
+in it—or it won't actually match. For instance, a subvolume mount point of
+`/var` would match a pattern of `+ fm:/var/*/data`, but borgmatic isn't
+currently smart enough to match `/var` to a pattern like `+ fm:/v*/lib/data`.
+
+Additionally, borgmatic rewrites the snapshot file paths so that they appear at
+their original subvolume locations in a Borg archive. For instance, if your
+subvolume is mounted at `/var/subvolume`, then the snapshotted files will appear
 in an archive at `/var/subvolume` as well—even if borgmatic has to mount the
-snapshot somewhere in `/var/subvolume/.borgmatic-snapshot-1234/` to perform
-the backup.
+snapshot somewhere in `/var/subvolume/.borgmatic-snapshot-1234/` to perform the
+backup.
 
 <span class="minilink minilink-addedin">With Borg version 1.2 and
 earlier</span>Snapshotted files are instead stored at a path dependent on the
@@ -199,6 +206,14 @@ Volume Manager) and sending those snapshots to Borg for backup. LVM isn't
 itself a filesystem, but it can take snapshots at the layer right below your
 filesystem.
 
+Note that, because Borg performs file-level backups, this feature is really only
+suitable for filesystems, not whole-disk or raw images containing multiple
+filesystems (for example, if you're using an LVM volume to run a Windows
+KVM guest whose disk contains an MBR, partitions, etc.).
+
+In those cases, you can omit the `lvm:` option and use Borg's own support for
+[image backup](https://borgbackup.readthedocs.io/en/stable/deployment/image-backup.html).
+
 To use this feature, first you need one or more mounted LVM logical volumes.
 Then, enable LVM within borgmatic by adding the following line to your
 configuration file:
@@ -252,7 +267,8 @@ For any logical volume you'd like backed up, add its mount point to
 borgmatic's `source_directories` option.
 
 <span class="minilink minilink-addedin">New in version 1.9.6</span> Or include
-the mount point with borgmatic's `patterns` or `patterns_from` options.
+the mount point as a root pattern with borgmatic's `patterns` or `patterns_from`
+options.
 
 During a backup, borgmatic automatically snapshots these discovered logical volumes
 (non-recursively), temporarily mounts the snapshots within its [runtime

BIN
docs/static/docker.png


BIN
docs/static/keepassxc.png


BIN
docs/static/podman.png


BIN
docs/static/pushover.png


BIN
docs/static/systemd.png


Some files were not shown because too many files have changed