.. include:: ../global.rst.inc
.. highlight:: none

=======================
Backing up in pull mode
=======================

Typically the borg client connects to a backup server using SSH as a transport
when initiating a backup. This is referred to as push mode.
If, however, you need the backup server to initiate the connection, or prefer
it to initiate the backup run, one of the following workarounds is required to
allow such a pull mode setup.

A common use case for pull mode is to back up a remote server to a local
personal computer.

SSHFS
=====

In this mode the backup server pulls the data from the client via SSHFS: the
backup client's file system is mounted remotely on the backup server. Pull
mode is even possible if the SSH connection must be established by the client
via a remote tunnel. Other network file systems like NFS or SMB could be used
as well, but SSHFS is very simple to set up and probably the most secure
option.

There are some restrictions caused by SSHFS. For example, unless you define UID
and GID mappings when mounting via ``sshfs``, owners and groups of the mounted
file system will probably change, and you may not have access to those files if
BorgBackup is not run with root privileges.
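
If needed, such a mapping can be requested at mount time. A minimal sketch,
assuming your sshfs version supports the ``idmap`` option (see the sshfs man
page for the mapping options available in your version)::

    # map the remote user's UID/GID to the local mounting user
    sshfs -o idmap=user root@host:/ /tmp/sshfs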

SSHFS is a FUSE file system and uses the SFTP protocol, so there may also be
other unsupported features, depending on what the actual implementations of
sshfs, libfuse and sftp on the backup server support, like file name
encodings, ACLs, xattrs or flags. So there is no guarantee that you are able
to restore a system completely in every aspect from such a backup.

.. warning::

    To mount the client's root file system you will need root access to the
    client. This contradicts the usual threat model of BorgBackup, where
    clients don't need to trust the backup server (data is encrypted). In pull
    mode the server (when logged in as root) could cause unlimited damage to
    the client. Therefore, pull mode should only be used from servers you
    fully trust!

Creating a backup
-----------------

Generally, in a pull backup situation there is no direct way for Borg to know
the client's original UID:GID name mapping of files, because it would use
``/etc/passwd`` and ``/etc/group`` of the backup server to map the names. To
derive the right names, Borg needs access to the client's passwd and group
files and must use them in the backup process.

The solution to this problem is chrooting into an sshfs mounted directory. In
this example the whole client root file system is mounted. We use the
stand-alone BorgBackup executable and copy it into the mounted file system to
make Borg available after entering chroot; this can be skipped if Borg is
already installed on the client.

::

    # Mount client root file system.
    mkdir /tmp/sshfs
    sshfs root@host:/ /tmp/sshfs
    # Mount BorgBackup repository inside it.
    mkdir /tmp/sshfs/borgrepo
    mount --bind /path/to/repo /tmp/sshfs/borgrepo
    # Make borg executable available.
    cp /usr/local/bin/borg /tmp/sshfs/usr/local/bin/borg
    # Mount important system directories and enter chroot.
    cd /tmp/sshfs
    for i in dev proc sys; do mount --bind /$i $i; done
    chroot /tmp/sshfs

Now we are on the backup system but inside a chroot with the client's root
file system. We have a copy of the Borg binary in ``/usr/local/bin`` and the
repository in ``/borgrepo``. Borg will back up the client's user/group names,
and we can create the backup, retaining the original paths, excluding the
repository:

::

    borg create --exclude /borgrepo --files-cache ctime,size /borgrepo::archive /

For the sake of simplicity only ``/borgrepo`` is excluded here. You may want to
set up an exclude file with additional files and folders to be excluded; a
sketch follows below. Also note that we have to modify Borg's file change
detection behaviour: SSHFS cannot guarantee stable inode numbers, so we have
to supply the ``--files-cache`` option.
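
A minimal sketch of such an exclude file, created and used inside the chroot
(the file location and the listed paths are only examples, adjust them to
your system)::

    cat > /root/borg-excludes <<EOF
    /borgrepo
    /tmp
    /var/cache
    EOF
    borg create --exclude-from /root/borg-excludes --files-cache ctime,size /borgrepo::archive /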

Finally, we need to exit chroot, unmount all the stuff and clean up:

::

    exit # exit chroot
    rm /tmp/sshfs/usr/local/bin/borg
    cd /tmp/sshfs
    for i in dev proc sys borgrepo; do umount ./$i; done
    rmdir borgrepo
    cd ~
    umount /tmp/sshfs
    rmdir /tmp/sshfs
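
As a quick sanity check, you can now list the repository directly on the
backup server::

    borg list /path/to/repo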

Thanks to secuser on IRC for this how-to!

Restore methods
---------------

The counterpart of a pull backup is a push restore. Depending on the type of
restore – full restore or partial restore – there are different methods to
make sure the correct IDs are restored.

Partial restore
~~~~~~~~~~~~~~~

In case of a partial restore, using the archived UIDs/GIDs might lead to wrong
results if the name-to-ID mapping on the target system has changed compared to
backup time (as might be the case, e.g., after a fresh OS install).

The workaround again is chrooting into an sshfs mounted directory, so Borg is
able to map the user/group names of the backup files to the actual IDs on the
client. This example is similar to the backup above – only the Borg command is
different:

::

    # Mount client root file system.
    mkdir /tmp/sshfs
    sshfs root@host:/ /tmp/sshfs
    # Mount BorgBackup repository inside it.
    mkdir /tmp/sshfs/borgrepo
    mount --bind /path/to/repo /tmp/sshfs/borgrepo
    # Make borg executable available.
    cp /usr/local/bin/borg /tmp/sshfs/usr/local/bin/borg
    # Mount important system directories and enter chroot.
    cd /tmp/sshfs
    for i in dev proc sys; do mount --bind /$i $i; done
    chroot /tmp/sshfs

Now we can run

::

    borg extract /borgrepo::archive PATH

to partially restore whatever we like.
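
If you are unsure of the exact path, you can inspect the archive contents
first; a small sketch (the ``etc/nginx`` path is only an example)::

    borg list /borgrepo::archive etc/
    borg extract /borgrepo::archive etc/nginx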

Finally, do the clean-up:

::

    exit # exit chroot
    rm /tmp/sshfs/usr/local/bin/borg
    cd /tmp/sshfs
    for i in dev proc sys borgrepo; do umount ./$i; done
    rmdir borgrepo
    cd ~
    umount /tmp/sshfs
    rmdir /tmp/sshfs

Full restore
~~~~~~~~~~~~

When doing a full restore, we restore all files (including the ones containing
the ID-to-name mapping, ``/etc/passwd`` and ``/etc/group``). Everything will be
consistent automatically if we restore the numeric IDs stored in the archive.
So there is no need for a chroot environment; we just mount the client file
system and extract a backup, utilizing the ``--numeric-owner`` option:

::

    sshfs root@host:/ /mnt/sshfs
    cd /mnt/sshfs
    borg extract --numeric-owner /path/to/repo::archive
    cd ~
    umount /mnt/sshfs

Simple (lossy) full restore
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Using ``borg export-tar`` it is possible to stream a backup to the client and
extract it directly, without the need for an SSHFS mount:

::

    borg export-tar /path/to/repo::archive - | ssh root@host 'tar -C / -x'

Note that in this scenario the tar format is the limiting factor – it cannot
restore all the advanced features that BorgBackup supports. See
:ref:`borg_export-tar` for limitations.
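
If bandwidth is a concern, the stream can also be compressed in transit; a
sketch using the ``--tar-filter`` option of ``borg export-tar``, assuming
gzip is available on both ends::

    borg export-tar --tar-filter="gzip -1" /path/to/repo::archive - | ssh root@host 'tar -C / -xz'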

socat
=====

In this setup an SSH connection is established from the backup server to the
client. SSH reverse port forwarding transparently tunnels data between UNIX
domain sockets on the client and server, and the socat tool connects these
sockets with the borg client and server processes, respectively.
The program socat has to be available on the backup server and on the client
to be backed up.

When **pushing** a backup the borg client (holding the data to be backed up)
connects to the backup server via ssh, starts ``borg serve`` on the backup
server and communicates via standard input and output (transported via SSH)
with the process on the backup server.
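
For comparison, a typical push-mode invocation, run on the client (host and
paths are only examples)::

    borg create ssh://borgs@borg-server/path/to/repo::archive /path_to_backup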

With the help of socat this process can be reversed. The backup server will
create a connection to the client (holding the data to be backed up) and will
**pull** the data.

In the following example *borg-server* connects to *borg-client* to pull a
backup.

To provide a secure setup, sockets should be stored in ``/run/borg``,
accessible only to the users that run the backup process. So on both systems,
*borg-server* and *borg-client*, the folder ``/run/borg`` has to be created::

    sudo mkdir -m 0700 /run/borg

On *borg-server* the socket file is opened by the user running the ``borg
serve`` process writing to the repository, so this user needs read and write
permissions on ``/run/borg``::

    borg-server:~$ sudo chown borgs /run/borg

On *borg-client* the socket file is created by ssh, so the user used to connect
to *borg-client* needs read and write permissions on ``/run/borg``::

    borg-client:~$ sudo chown borgc /run/borg

On *borg-server*, we have to start the command ``borg serve`` and make its
standard input and output available to a unix socket::

    borg-server:~$ socat UNIX-LISTEN:/run/borg/reponame.sock,fork EXEC:"borg serve --append-only --restrict-to-path /path/to/repo"

Socat will wait until a connection is opened, then execute the given command,
redirecting standard input and output to the unix socket. The optional
arguments for ``borg serve`` are not necessary, but provide a sane default.

.. note::

    When used in production you may also use systemd socket-based activation
    instead of socat on the server side. You would wrap the ``borg serve``
    command in a `service unit`_ and configure a matching `socket unit`_
    to start the service whenever a client connects to the socket, as
    sketched below.

.. _service unit: https://www.freedesktop.org/software/systemd/man/systemd.service.html
.. _socket unit: https://www.freedesktop.org/software/systemd/man/systemd.socket.html
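
A minimal sketch of such a unit pair; the unit names, the user and all paths
are hypothetical and need to be adapted::

    # /etc/systemd/system/borg-serve@.service
    [Unit]
    Description=borg serve for a pull backup connection

    [Service]
    User=borgs
    ExecStart=/usr/local/bin/borg serve --append-only --restrict-to-path /path/to/repo
    StandardInput=socket
    StandardOutput=socket

    # /etc/systemd/system/borg-serve.socket
    [Socket]
    ListenStream=/run/borg/reponame.sock
    Accept=yes

    [Install]
    WantedBy=sockets.target

With ``Accept=yes``, systemd spawns one instance of the template service per
connection and hands it the socket on standard input and output, similar to
what socat does above.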

Now we need a way to access the unix socket on *borg-client* (holding the data
to be backed up), since we created the unix socket on *borg-server*. Opening
an SSH connection from *borg-server* to *borg-client* with reverse port
forwarding can do this for us::

    borg-server:~$ ssh -R /run/borg/reponame.sock:/run/borg/reponame.sock borgc@borg-client

.. note::

    As the default value of OpenSSH for ``StreamLocalBindUnlink`` is ``no``, the
    socket file created by sshd is not removed. Trying to connect a second time
    will print a short warning, and the forwarding does **not** take place::

        Warning: remote port forwarding failed for listen path /run/borg/reponame.sock

    When you are done, you have to remove the socket file manually, otherwise
    you may see an error like this when trying to execute borg commands::

        Remote: YYYY/MM/DD HH:MM:SS socat[XXX] E connect(5, AF=1 "/run/borg/reponame.sock", 13): Connection refused
        Connection closed by remote host. Is borg working on the server?
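
Alternatively, ``StreamLocalBindUnlink`` can be enabled in the sshd
configuration on *borg-client*, so stale socket files are replaced
automatically on the next connection (requires restarting sshd)::

    # in /etc/ssh/sshd_config on borg-client
    StreamLocalBindUnlink yes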

When a process opens the socket on *borg-client*, SSH will forward all
data to the socket on *borg-server*.

The next step is to tell borg on *borg-client* to communicate with the ``borg
serve`` command on *borg-server* via the unix socket instead of SSH::

    borg-client:~$ export BORG_RSH="sh -c 'exec socat STDIO UNIX-CONNECT:/run/borg/reponame.sock'"

The default value for ``BORG_RSH`` is ``ssh``, i.e. by default Borg uses SSH
to create the connection to the backup server. For that, Borg parses the repo
URL and adds the server name (and other arguments) to the SSH command. Those
arguments cannot be handled by socat, so we wrap the command with ``sh -c`` to
ignore all arguments intended for the SSH command.
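
To check that the tunnel works before running a backup, a quick test might
be::

    borg-client:~$ borg info ssh://borg-server/path/to/repo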

All Borg commands can now be executed on *borg-client*. For example, to create
a backup, execute the ``borg create`` command::

    borg-client:~$ borg create ssh://borg-server/path/to/repo::archive /path_to_backup

When automating backup creation, the interactive ssh session may seem
inappropriate. An alternative way of creating a backup may be the following
command::

    borg-server:~$ ssh \
        -R /run/borg/reponame.sock:/run/borg/reponame.sock \
        borgc@borg-client \
        borg create \
        --rsh "sh -c 'exec socat STDIO UNIX-CONNECT:/run/borg/reponame.sock'" \
        ssh://borg-server/path/to/repo::archive /path_to_backup \
        ';' rm /run/borg/reponame.sock

This command also automatically removes the socket file after the ``borg
create`` command is done.
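
For fully unattended runs, the server-side steps can be combined in a small
wrapper script; a sketch, where the script name and all paths are
hypothetical::

    #!/bin/sh
    # pull-backup.sh: serve the repo on a local socket, pull a backup, clean up
    set -e
    socat UNIX-LISTEN:/run/borg/reponame.sock,fork \
          EXEC:"borg serve --append-only --restrict-to-path /path/to/repo" &
    SOCAT_PID=$!
    ssh -R /run/borg/reponame.sock:/run/borg/reponame.sock borgc@borg-client \
        borg create \
        --rsh "sh -c 'exec socat STDIO UNIX-CONNECT:/run/borg/reponame.sock'" \
        ssh://borg-server/path/to/repo::archive /path_to_backup \
        ';' rm /run/borg/reponame.sock
    kill "$SOCAT_PID"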