
Merge pull request #3375 from ThomasWaldmann/docs-fixes

misc. docs fixes / updates
TW, 7 years ago
Commit c848141f06
7 changed files with 40 additions and 12 deletions
  1. docs/faq.rst (+2 -2)
  2. docs/usage/create.rst (+1 -1)
  3. docs/usage/mount.rst (+6 -0)
  4. docs/usage/notes.rst (+10 -0)
  5. docs/usage/tar.rst (+3 -3)
  6. docs/usage_general.rst.inc (+18 -0)
  7. src/borg/archiver.py (+0 -6)

+ 2 - 2
docs/faq.rst

@@ -94,7 +94,7 @@ retransfer the data since the last checkpoint.
 
 If a backup was interrupted, you normally do not need to do anything special,
 just invoke ``borg create`` as you always do. If the repository is still locked,
-you may need to run ``borg break-lock`` before the next backup. You may use the 
+you may need to run ``borg break-lock`` before the next backup. You may use the
 same archive name as in previous attempt or a different one (e.g. if you always
 include the current datetime), it does not matter.
 
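As a quick, hypothetical illustration of the recovery flow described in this FAQ entry (the repository path and archive name below are placeholders, not taken from the patch)::

    # only needed if the repository is still locked by the interrupted run
    $ borg break-lock /path/to/repo
    # then simply run the backup again as usual
    $ borg create /path/to/repo::root-{now:%Y-%m-%d} /
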
@@ -265,7 +265,7 @@ Say you want to prune ``/var/log`` faster than the rest of
 archive *names* and then implement different prune policies for
 different prefixes. For example, you could have a script that does::
 
-    borg create $REPOSITORY:main-$(date +%Y-%m-%d) --exclude /var/log /
+    borg create --exclude /var/log $REPOSITORY:main-$(date +%Y-%m-%d) /
     borg create $REPOSITORY:logs-$(date +%Y-%m-%d) /var/log
 
 Then you would have two different prune calls with different policies::
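
The diff context stops before the prune calls themselves. For illustration only, the two policy-specific calls referred to above could look roughly like this; the retention values are made up, the point is matching the archive name prefixes with ``--prefix``::

    borg prune --prefix 'main-' --keep-daily 7 --keep-weekly 4 $REPOSITORY
    borg prune --prefix 'logs-' --keep-daily 2 $REPOSITORY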

+ 1 - 1
docs/usage/create.rst

@@ -23,7 +23,7 @@ Examples
 
     # Backup the root filesystem into an archive named "root-YYYY-MM-DD"
     # use zlib compression (good, but slow) - default is lz4 (fast, low compression ratio)
-    $ borg create -C zlib,6 /path/to/repo::root-{now:%Y-%m-%d} / --one-file-system
+    $ borg create -C zlib,6 --one-file-system /path/to/repo::root-{now:%Y-%m-%d} /
 
     # Backup a remote host locally ("pull" style) using sshfs
     $ mkdir sshfs-mount
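
The sshfs example is cut off by the diff context here. A pull-style run typically continues roughly as sketched below; the host name and paths are illustrative assumptions, not part of the patch::

    $ sshfs root@remotehost:/ sshfs-mount
    $ cd sshfs-mount
    $ borg create /path/to/repo::remotehost-root-{now:%Y-%m-%d} .
    $ cd ..
    $ fusermount -u sshfs-mount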

+ 6 - 0
docs/usage/mount.rst

@@ -36,6 +36,12 @@ Examples
     # which does not support lazy processing of archives.
     $ borg mount -o versions --glob-archives '*-my-home' --last 10 /path/to/repo /tmp/mymountpoint
 
+    # Exclusion options are supported.
+    # These can speed up mounting and lower memory needs significantly.
+    $ borg mount /path/to/repo /tmp/mymountpoint only/that/path
+    $ borg mount --exclude '...' /path/to/repo /tmp/mymountpoint
+
+
 borgfs
 ++++++
 
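
A small usage note, not part of the patch: a filesystem mounted with ``borg mount`` is detached again with ``borg umount``::

    $ borg umount /tmp/mymountpoint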

+ 10 - 0
docs/usage/notes.rst

@@ -61,6 +61,16 @@ affect metadata stream deduplication: if only this timestamp changes between
 backups and is stored into the metadata stream, the metadata stream chunks
 won't deduplicate just because of that.
 
+``--nobsdflags``
+~~~~~~~~~~~~~~~~
+
+You can use this to not query and store (or not extract and set) bsdflags -
+in case you don't need them or if they are broken somehow for your fs.
+
+On Linux, dealing with the bsdflags needs some additional syscalls.
+Especially when dealing with lots of small files, this causes a noticeable
+overhead, so you can also use this option to speed up operations.
+
 ``--umask``
 ~~~~~~~~~~~
 
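
To make the new ``--nobsdflags`` section concrete, typical invocations would look like the sketch below; the repository path and archive names are placeholders, not taken from the patch::

    # do not query and store bsdflags when creating an archive
    $ borg create --nobsdflags /path/to/repo::root-{now:%Y-%m-%d} /

    # do not extract and set bsdflags when restoring
    $ borg extract --nobsdflags /path/to/repo::root-2017-11-27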

+ 3 - 3
docs/usage/tar.rst

@@ -11,11 +11,11 @@ Examples
     $ borg export-tar /path/to/repo::Monday Monday.tar.gz --exclude '*.so'
 
     # use higher compression level with gzip
-    $ borg export-tar testrepo::linux --tar-filter="gzip -9" Monday.tar.gz
+    $ borg export-tar --tar-filter="gzip -9" testrepo::linux Monday.tar.gz
 
-    # export a gzipped tar, but instead of storing it on disk,
+    # export a tar, but instead of storing it on disk,
     # upload it to a remote site using curl.
-    $ borg export-tar ... --tar-filter="gzip" - | curl --data-binary @- https://somewhere/to/POST
+    $ borg export-tar /path/to/repo::Monday - | curl --data-binary @- https://somewhere/to/POST
 
     # remote extraction via "tarpipe"
     $ borg export-tar /path/to/repo::Monday - | ssh somewhere "cd extracted; tar x"
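
Not from the patch, but a natural follow-up to the first example above: since ``export-tar`` chooses a compression filter from the output file name (gzip for ``.tar.gz``), ``Monday.tar.gz`` can be inspected or unpacked with standard tools::

    $ tar tzf Monday.tar.gz                 # list the contents
    $ mkdir restore && tar xzf Monday.tar.gz -C restore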

+ 18 - 0
docs/usage_general.rst.inc

@@ -1,3 +1,20 @@
+Positional Arguments and Options: Order matters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Borg only supports taking options (``-s`` and ``--progress`` in the example)
+to the left or right of all positional arguments (``repo::archive`` and ``path``
+in the example), but not in between them:
+
+::
+
+    borg create -s --progress repo::archive path  # good and preferred
+    borg create repo::archive path -s --progress  # also works
+    borg create -s repo::archive path --progress  # works, but ugly
+    borg create repo::archive -s --progress path  # BAD
+
+This is due to a problem in the argparse module: http://bugs.python.org/issue15112
+
+
 Repository URLs
 ~~~~~~~~~~~~~~~
 
@@ -373,6 +390,7 @@ Besides regular file and directory structures, Borg can preserve
     By default the metadata to create them with mknod(2), mkfifo(2) etc. is stored.
 * hardlinked regular files, devices, FIFOs (considering all items in the same archive)
 * timestamps in nanosecond precision: mtime, atime, ctime
+* other timestamps: birthtime (on platforms supporting it)
 * permissions:
 
   * IDs of owning user and owning group

+ 0 - 6
src/borg/archiver.py

@@ -2761,12 +2761,6 @@ class Archiver:
         it had before a content change happened. This can be used maliciously as well as
         well-meant, but in both cases mtime based cache modes can be problematic.
 
-        By default, borg tries to archive all metadata that it supports archiving.
-        If that is not what you want or need, there are some tuning options:
-
-        - --nobsdflags (getting bsdflags has a speed penalty under Linux)
-        - --noatime (if atime changes frequently, the metadata stream will dedup badly)
-
         The mount points of filesystems or filesystem snapshots should be the same for every
         creation of a new archive to ensure fast operation. This is because the file cache that
         is used to determine changed files quickly uses absolute filenames.