
  1. .. include:: ../global.rst.inc
  2. .. highlight:: none
  3. .. _data-structures:
  4. Data structures and file formats
  5. ================================
  6. This page documents the internal data structures and storage
  7. mechanisms of Borg. It is partly based on `mailing list
  8. discussion about internals`_ and also on static code analysis.
  9. .. todo:: Clarify terms, perhaps create a glossary.
  10. ID (client?) vs. key (repository?),
  11. chunks (blob of data in repo?) vs. object (blob of data in repo, referred to from another object?),
  12. .. _repository:
  13. Repository
  14. ----------
  15. .. Some parts of this description were taken from the Repository docstring
  16. Borg stores its data in a `Repository`, which is a file system based
  17. transactional key-value store. Thus the repository does not know about
  18. the concept of archives or items.
  19. Each repository has the following file structure:
README
    simple text file telling that this is a Borg repository

config
    repository configuration

data/
    directory where the actual data is stored

hints.%d
    hints for repository compaction

index.%d
    repository index

lock.roster and lock.exclusive/*
    used by the locking system to manage shared and exclusive locks
  32. Transactionality is achieved by using a log (aka journal) to record changes. The log is a series of numbered files
  33. called segments_. Each segment is a series of log entries. The segment number together with the offset of each
  34. entry relative to its segment start establishes an ordering of the log entries. This is the "definition" of
  35. time for the purposes of the log.
  36. .. _config-file:
  37. Config file
  38. ~~~~~~~~~~~
Each repository has a ``config`` file which is an ``INI``-style file
and looks like this::

    [repository]
    version = 2
    segments_per_dir = 1000
    max_segment_size = 524288000
    id = 57d6c1d52ce76a836b532b0e42e677dec6af9fca3673db511279358828a21ed6
  46. This is where the ``repository.id`` is stored. It is a unique
  47. identifier for repositories. It will not change if you move the
  48. repository around so you can make a local transfer then decide to move
  49. the repository to another (even remote) location at a later time.
  50. Keys
  51. ~~~~
Repository keys are byte-strings of fixed length (32 bytes); they do not
have a particular meaning (except for the Manifest_).

Normally the keys are computed like this::

    key = id = id_hash(plaintext_data)  # plain = not encrypted, not compressed, not obfuscated
  56. The id_hash function depends on the :ref:`encryption mode <borg_rcreate>`.
  57. As the id / key is used for deduplication, id_hash must be a cryptographically
  58. strong hash or MAC.
  59. Segments
  60. ~~~~~~~~
  61. Objects referenced by a key are stored inline in files (`segments`) of approx.
  62. 500 MB size in numbered subdirectories of ``repo/data``. The number of segments
  63. per directory is controlled by the value of ``segments_per_dir``. If you change
  64. this value in a non-empty repository, you may also need to relocate the segment
  65. files manually.
  66. A segment starts with a magic number (``BORG_SEG`` as an eight byte ASCII string),
  67. followed by a number of log entries. Each log entry consists of (in this order):
* crc32 checksum (uint32):

  - for PUT2: CRC32(size + tag + key + digest)
  - for PUT: CRC32(size + tag + key + payload)
  - for DELETE: CRC32(size + tag + key)
  - for COMMIT: CRC32(size + tag)

* size (uint32) of the entry (including the whole header)
* tag (uint8): PUT(0), DELETE(1), COMMIT(2) or PUT2(3)
* key (256 bit) - only for PUT/PUT2/DELETE
* payload (size - 41 bytes) - only for PUT
* xxh64 digest (64 bit) = XXH64(size + tag + key + payload) - only for PUT2
* payload (size - 41 - 8 bytes) - only for PUT2
PUT2 was introduced with repository version 2 and is used for all new log entries.
PUT is still supported for reading version 1 repositories, but is not generated any more.
When we talk about ``PUT`` in general, it usually means PUT2 for repository
version 2+.
Segment files are strictly append-only and, once written, are never modified.
  84. When an object is written to the repository a ``PUT`` entry is written
  85. to the file containing the object id and payload. If an object is deleted
  86. a ``DELETE`` entry is appended with the object id.
  87. A ``COMMIT`` tag is written when a repository transaction is
  88. committed. The segment number of the segment containing
  89. a commit is the **transaction ID**.
  90. When a repository is opened any ``PUT`` or ``DELETE`` operations not
  91. followed by a ``COMMIT`` tag are discarded since they are part of a
  92. partial/uncommitted transaction.
  93. The size of individual segments is limited to 4 GiB, since the offset of entries
  94. within segments is stored in a 32-bit unsigned integer in the repository index.
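To make the entry layout concrete, here is a minimal sketch of a segment reader based
only on the layout described above; it is not Borg's actual parser, the byte order is
assumed to be little-endian, and CRC / XXH64 verification is omitted.

.. code-block:: python

    # Hypothetical segment reader based on the entry layout above (not Borg's code).
    # Header: crc32 (4) + size (4) + tag (1) = 9 bytes; key = 32 bytes.
    # Byte order assumed little-endian; checksum/digest verification omitted.
    import struct

    TAG_PUT, TAG_DELETE, TAG_COMMIT, TAG_PUT2 = 0, 1, 2, 3

    def iter_log_entries(path):
        with open(path, 'rb') as f:
            assert f.read(8) == b'BORG_SEG'        # segment magic
            while True:
                header = f.read(9)
                if len(header) < 9:
                    break                          # end of segment file
                crc, size, tag = struct.unpack('<IIB', header)   # crc not verified here
                rest = f.read(size - 9)            # size covers the whole entry
                if tag == TAG_COMMIT:
                    yield tag, None, None
                elif tag == TAG_DELETE:
                    yield tag, rest[:32], None                 # key only
                elif tag == TAG_PUT:
                    yield tag, rest[:32], rest[32:]            # key + payload
                elif tag == TAG_PUT2:
                    yield tag, rest[:32], rest[40:]            # key + xxh64 digest + payload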
  95. Objects / Payload structure
  96. ~~~~~~~~~~~~~~~~~~~~~~~~~~~
  97. All data (the manifest, archives, archive item stream chunks and file data
  98. chunks) is compressed, optionally obfuscated and encrypted. This produces some
  99. additional metadata (size and compression information), which is separately
  100. serialized and also encrypted.
  101. See :ref:`data-encryption` for a graphic outlining the anatomy of the encryption in Borg.
  102. What you see at the bottom there is done twice: once for the data and once for the metadata.
  103. An object (the payload part of a segment file log entry) must be like:
- length of encrypted metadata (16bit unsigned int)
- encrypted metadata (incl. encryption header), when decrypted:

  - msgpacked dict with:

    - ctype (compression type 0..255)
    - clevel (compression level 0..255)
    - csize (overall compressed (and maybe obfuscated) data size)
    - psize (only when obfuscated: payload size without the obfuscation trailer)
    - size (uncompressed size of the data)

- encrypted data (incl. encryption header), when decrypted:

  - compressed data (with an optional all-zero-bytes obfuscation trailer)
  114. This new, more complex repo v2 object format was implemented to be able to query the
  115. metadata efficiently without having to read, transfer and decrypt the (usually much bigger)
  116. data part.
  117. The metadata is encrypted not to disclose potentially sensitive information that could be
  118. used for e.g. fingerprinting attacks.
The compression `ctype` and `clevel` are explained in :ref:`data-compression`.
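As an illustration of the layout above, the following sketch splits a repo v2 object
into its two encrypted parts; ``decrypt`` is a placeholder for Borg's actual crypto
layer, and the 16-bit length is assumed to be little-endian here.

.. code-block:: python

    # Illustrative only - 'decrypt' stands in for Borg's crypto layer.
    import struct
    import msgpack

    def parse_object(obj: bytes, decrypt):
        meta_len, = struct.unpack('<H', obj[:2])          # length of encrypted metadata
        enc_meta = obj[2:2 + meta_len]                    # encrypted metadata (incl. header)
        enc_data = obj[2 + meta_len:]                     # encrypted data (incl. header)
        meta = msgpack.unpackb(decrypt(enc_meta))         # dict: ctype, clevel, csize, [psize,] size
        data = decrypt(enc_data)                          # compressed (maybe obfuscated) data
        return meta, data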
  120. Index, hints and integrity
  121. ~~~~~~~~~~~~~~~~~~~~~~~~~~
  122. The **repository index** is stored in ``index.<TRANSACTION_ID>`` and is used to
  123. determine an object's location in the repository. It is a HashIndex_,
  124. a hash table using open addressing.
  125. It maps object keys_ to:
* segment number (uint32)
  127. * offset of the object's entry within the segment (uint32)
  128. * size of the payload, not including the entry header (uint32)
  129. * flags (uint32)
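The value part of an index entry can thus be pictured as four 32-bit integers
(little-endian per the HashIndex_ section below); together with the 32-byte key this
matches the 48 bytes per chunk used in the memory estimates later on. A hypothetical sketch:

.. code-block:: python

    # Hypothetical view of one repository index entry (not Borg's actual code).
    import struct

    INDEX_VALUE = struct.Struct('<IIII')   # segment, offset, size, flags (4 x uint32 = 16 bytes)
    KEY_SIZE = 32                          # + 32 byte key -> 48 bytes per bucket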
  130. The **hints file** is a msgpacked file named ``hints.<TRANSACTION_ID>``.
  131. It contains:
  132. * version
  133. * list of segments
  134. * compact
  135. * shadow_index
  136. * storage_quota_use
  137. The **integrity file** is a msgpacked file named ``integrity.<TRANSACTION_ID>``.
  138. It contains checksums of the index and hints files and is described in the
  139. :ref:`Checksumming data structures <integrity_repo>` section below.
  140. If the index or hints are corrupted, they are re-generated automatically.
  141. If they are outdated, segments are replayed from the index state to the currently
  142. committed transaction.
  143. Compaction
  144. ~~~~~~~~~~
  145. For a given key only the last entry regarding the key, which is called current (all other entries are called
  146. superseded), is relevant: If there is no entry or the last entry is a DELETE then the key does not exist.
  147. Otherwise the last PUT defines the value of the key.
  148. By superseding a PUT (with either another PUT or a DELETE) the log entry becomes obsolete. A segment containing
  149. such obsolete entries is called sparse, while a segment containing no such entries is called compact.
  150. Since writing a ``DELETE`` tag does not actually delete any data and
  151. thus does not free disk space any log-based data store will need a
  152. compaction strategy (somewhat analogous to a garbage collector).
  153. Borg uses a simple forward compacting algorithm, which avoids modifying existing segments.
Compaction runs when a commit is issued with the ``compact=True`` parameter, e.g.
by the ``borg compact`` command (unless the :ref:`append_only_mode` is active).
  156. The compaction algorithm requires two inputs in addition to the segments themselves:
  157. (i) Which segments are sparse, to avoid scanning all segments (impractical).
  158. Further, Borg uses a conditional compaction strategy: Only those
  159. segments that exceed a threshold sparsity are compacted.
  160. To implement the threshold condition efficiently, the sparsity has
  161. to be stored as well. Therefore, Borg stores a mapping ``(segment
  162. id,) -> (number of sparse bytes,)``.
  163. (ii) Each segment's reference count, which indicates how many live objects are in a segment.
  164. This is not strictly required to perform the algorithm. Rather, it is used to validate
  165. that a segment is unused before deleting it. If the algorithm is incorrect, or the reference
  166. count was not accounted correctly, then an assertion failure occurs.
  167. These two pieces of information are stored in the hints file (`hints.N`)
  168. next to the index (`index.N`).
Compaction may take some time if a repository has been kept in append-only mode
or ``borg compact`` has not been used for a long time, both of which cause
the number of sparse segments to grow.
Compaction processes sparse segments from oldest to newest; sparse segments
which don't contain enough deleted data to justify compaction are skipped. This
avoids, for example, rewriting 500 MB of current data to a new segment when only
a couple of kB were deleted in that segment.
Segments that are compacted are read in their entirety. Current entries are written to
a new segment, while superseded entries are omitted. After each segment, an intermediary
commit is written to the new segment. Then, the old segment is deleted
(asserting that the reference count has dropped to zero), freeing disk space.
  180. A simplified example (excluding conditional compaction and with simpler
  181. commit logic) showing the principal operation of compaction:
.. figure:: compaction.png
   :figwidth: 100%
   :width: 100%
  185. (The actual algorithm is more complex to avoid various consistency issues, refer to
  186. the ``borg.repository`` module for more comments and documentation on these issues.)
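A hypothetical, heavily simplified sketch of this loop (omitting the intermediary
commits, the shadow index / DELETE handling and quota accounting; all helper functions
are illustrative, not Borg's actual API):

.. code-block:: python

    # Illustrative pseudo-implementation; read_entries(), write_to_new_segment(),
    # sparse_bytes() and delete_segment() are hypothetical helpers.
    def compact(sparse_segments, threshold, index):
        for seg in sorted(sparse_segments):                    # oldest to newest
            if sparse_bytes(seg) < threshold:
                continue                                       # not enough deleted data, skip
            for key, offset, data in read_entries(seg):
                if index.get(key) == (seg, offset):            # entry is still current
                    index[key] = write_to_new_segment(key, data)
            delete_segment(seg)                                # all live data moved, free disk space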
  187. .. _internals_storage_quota:
  188. Storage quotas
  189. ~~~~~~~~~~~~~~
  190. Quotas are implemented at the Repository level. The active quota of a repository
  191. is determined by the ``storage_quota`` `config` entry or a run-time override (via :ref:`borg_serve`).
  192. The currently used quota is stored in the hints file. Operations (PUT and DELETE) during
  193. a transaction modify the currently used quota:
- A PUT adds the size of the *log entry* to the quota,
  i.e. the length of the data plus the 41-byte header.
- A DELETE subtracts the size of the deleted log entry from the quota,
  which includes the header.

Thus, PUT and DELETE are symmetric and cancel each other out precisely.

The quota does not track on-disk size overheads (due to conditional compaction
or append-only mode). In normal operation the inclusion of the log entry headers
in the quota acts as a faithful proxy for index and hints overheads.
  202. By tracking effective content size, the client can *always* recover from a full quota
  203. by deleting archives. This would not be possible if the quota tracked on-disk size,
  204. since journaling DELETEs requires extra disk space before space is freed.
  205. Tracking effective size on the other hand accounts DELETEs immediately as freeing quota.
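The symmetry can be summarized in a few lines (41 being the PUT header size mentioned
above; a sketch, not Borg's actual accounting code):

.. code-block:: python

    # Illustration of the symmetric quota accounting described above.
    PUT_HEADER_SIZE = 41

    def on_put(quota_used, data):
        return quota_used + PUT_HEADER_SIZE + len(data)     # add the full log entry size

    def on_delete(quota_used, put_entry_size):
        return quota_used - put_entry_size                   # subtract the same entry size again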
  206. .. rubric:: Enforcing the quota
The storage quota is meant as a robust mechanism for service providers; therefore,
:ref:`borg_serve` has to enforce it without loopholes (e.g. modified clients).
  209. The following sections refer to using quotas on remotely accessed repositories.
  210. For local access, consider *client* and *serve* the same.
  211. Accordingly, quotas cannot be enforced with local access,
  212. since the quota can be changed in the repository config.
The quota is enforceable only if *all* :ref:`borg_serve` versions
  214. accessible to clients support quotas (see next section). Further, quota is
  215. per repository. Therefore, ensure clients can only access a defined set of repositories
  216. with their quotas set, using ``--restrict-to-repository``.
  217. If the client exceeds the storage quota the ``StorageQuotaExceeded`` exception is
  218. raised. Normally a client could ignore such an exception and just send a ``commit()``
  219. command anyway, circumventing the quota. However, when ``StorageQuotaExceeded`` is raised,
  220. it is stored in the ``transaction_doomed`` attribute of the repository.
  221. If the transaction is doomed, then commit will re-raise this exception, aborting the commit.
  222. The transaction_doomed indicator is reset on a rollback (which erases the quota-exceeding
  223. state).
  224. .. rubric:: Compatibility with older servers and enabling quota after-the-fact
  225. If no quota data is stored in the hints file, Borg assumes zero quota is used.
  226. Thus, if a repository with an enabled quota is written to with an older ``borg serve``
  227. version that does not understand quotas, then the quota usage will be erased.
  228. The client version is irrelevant to the storage quota and has no part in it.
  229. The form of error messages due to exceeding quota varies with client versions.
  230. A similar situation arises when upgrading from a Borg release that did not have quotas.
  231. Borg will start tracking quota use from the time of the upgrade, starting at zero.
If the quota shall be enforced accurately in these cases, either

- delete the ``index.N`` and ``hints.N`` files, forcing Borg to rebuild both,
  re-acquiring quota data in the process, or
- edit the msgpacked ``hints.N`` file (not recommended and thus not
  documented further).
  237. The object graph
  238. ----------------
  239. On top of the simple key-value store offered by the Repository_,
  240. Borg builds a much more sophisticated data structure that is essentially
  241. a completely encrypted object graph. Objects, such as archives_, are referenced
  242. by their chunk ID, which is cryptographically derived from their contents.
  243. More on how this helps security in :ref:`security_structural_auth`.
.. figure:: object-graph.png
   :figwidth: 100%
   :width: 100%
  247. .. _manifest:
  248. The manifest
  249. ~~~~~~~~~~~~
  250. The manifest is the root of the object hierarchy. It references
  251. all archives in a repository, and thus all data in it.
  252. Since no object references it, it cannot be stored under its ID key.
  253. Instead, the manifest has a fixed all-zero key.
  254. The manifest is rewritten each time an archive is created, deleted,
  255. or modified. It looks like this:
.. code-block:: python

    {
        'version': 1,
        'timestamp': '2017-05-05T12:42:23.042864',
        'item_keys': ['acl_access', 'acl_default', ...],
        'config': {},
        'archives': {
            '2017-05-05-system-backup': {
                'id': b'<32 byte binary object ID>',
                'time': '2017-05-05T12:42:22.942864',
            },
        },
        'tam': ...,
    }
  270. The *version* field can be either 1 or 2. The versions differ in the
  271. way feature flags are handled, described below.
  272. The *timestamp* field is used to avoid logical replay attacks where
  273. the server just resets the repository to a previous state.
  274. *item_keys* is a list containing all Item_ keys that may be encountered in
  275. the repository. It is used by *borg check*, which verifies that all keys
  276. in all items are a subset of these keys. Thus, an older version of *borg check*
  277. supporting this mechanism can correctly detect keys introduced in later versions.
  278. The *tam* key is part of the :ref:`tertiary authentication mechanism <tam_description>`
  279. (formerly known as "tertiary authentication for metadata") and authenticates
  280. the manifest, since an ID check is not possible.
  281. *config* is a general-purpose location for additional metadata. All versions
  282. of Borg preserve its contents.
  283. Feature flags
  284. +++++++++++++
  285. Feature flags are used to add features to data structures without causing
  286. corruption if older versions are used to access or modify them. The main issues
  287. to consider for a feature flag oriented design are flag granularity,
  288. flag storage, and cache_ invalidation.
Feature flags fall into roughly three categories, detailed below.
  290. Due to the nature of ID-based deduplication, write (i.e. creating archives) and
  291. read access are not symmetric; it is possible to create archives referencing
  292. chunks that are not readable with the current feature set. The third
  293. category are operations that require accurate reference counts, for example
  294. archive deletion and check.
  295. As the manifest is always updated and always read, it is the ideal place to store
  296. feature flags, comparable to the super-block of a file system. The only problem
  297. is to recover from a lost manifest, i.e. how is it possible to detect which feature
  298. flags are enabled, if there is no manifest to tell. This issue is left open at this time,
  299. but is not expected to be a major hurdle; it doesn't have to be handled efficiently, it just
  300. needs to be handled.
  301. Lastly, cache_ invalidation is handled by noting which feature
  302. flags were and which were not understood while manipulating a cache.
  303. This allows borg to detect whether the cache needs to be invalidated,
  304. i.e. rebuilt from scratch. See `Cache feature flags`_ below.
  305. The *config* key stores the feature flags enabled on a repository:
.. code-block:: python

    config = {
        'feature_flags': {
            'read': {
                'mandatory': ['some_feature'],
            },
            'check': {
                'mandatory': ['other_feature'],
            },
            'write': ...,
            'delete': ...,
        },
    }
  319. The top-level distinction for feature flags is the operation the client intends
  320. to perform,
  321. | the *read* operation includes extraction and listing of archives,
  322. | the *write* operation includes creating new archives,
  323. | the *delete* (archives) operation,
  324. | the *check* operation requires full understanding of everything in the repository.
  325. |
  326. These are weakly set-ordered; *check* will include everything required for *delete*,
  327. *delete* will likely include *write* and *read*. However, *read* may require more
  328. features than *write* (due to ID-based deduplication, *write* does not necessarily
  329. require reading/understanding repository contents).
  330. Each operation can contain several sets of feature flags. Only one set,
  331. the *mandatory* set is currently defined.
  332. Upon reading the manifest, the Borg client has already determined which operation
  333. should be performed. If feature flags are found in the manifest, the set
  334. of feature flags supported by the client is compared to the mandatory set
  335. found in the manifest. If any unsupported flags are found (i.e. the mandatory set is
  336. not a subset of the features supported by the Borg client used), the operation
is aborted with a *MandatoryFeatureUnsupported* error::

    Unsupported repository feature(s) {'some_feature'}. A newer version of borg is required to access this repository.
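A minimal sketch of this check (names are illustrative, not Borg's actual API):

.. code-block:: python

    # Illustrative sketch of the mandatory feature check, not Borg's actual code.
    class MandatoryFeatureUnsupported(Exception):
        pass

    def check_feature_flags(config, operation, client_features):
        flags = config.get('feature_flags') or {}
        missing = set(flags.get(operation, {}).get('mandatory', [])) - client_features
        if missing:
            raise MandatoryFeatureUnsupported(
                f"Unsupported repository feature(s) {missing}. "
                f"A newer version of borg is required to access this repository.")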
  339. Older Borg releases do not have this concept and do not perform feature flags checks.
  340. These can be locked out with manifest version 2. Thus, the only difference between
  341. manifest versions 1 and 2 is that the latter is only accepted by Borg releases
  342. implementing feature flags.
  343. Therefore, as soon as any mandatory feature flag is enabled in a repository,
  344. the manifest version must be switched to version 2 in order to lock out all
  345. Borg releases unaware of feature flags.
  346. .. _Cache feature flags:
  347. .. rubric:: Cache feature flags
  348. `The cache`_ does not have its separate set of feature flags. Instead, Borg stores
  349. which flags were used to create or modify a cache.
  350. All mandatory manifest features from all operations are gathered in one set.
Then, two sets of features are computed:

- those features that are supported by the client and mandated by the manifest
  are added to the *mandatory_features* set,
- the *ignored_features* set comprises those features mandated by the manifest,
  but not supported by the client.
  356. Because the client previously checked compliance with the mandatory set of features
  357. required for the particular operation it is executing, the *mandatory_features* set
  358. will contain all necessary features required for using the cache safely.
  359. Conversely, the *ignored_features* set contains only those features which were not
  360. relevant to operating the cache. Otherwise, the client would not pass the feature
  361. set test against the manifest.
  362. When opening a cache and the *mandatory_features* set is not a subset of the features
  363. supported by the client, the cache is wiped out and rebuilt,
  364. since a client not supporting a mandatory feature that the cache was built with
  365. would be unable to update it correctly.
  366. The assumption behind this behaviour is that any of the unsupported features could have
  367. been reflected in the cache and there is no way for the client to discern whether
  368. that is the case.
  369. Meanwhile, it may not be practical for every feature to have clients using it track
  370. whether the feature had an impact on the cache.
  371. Therefore, the cache is wiped.
  372. When opening a cache and the intersection of *ignored_features* and the features
  373. supported by the client contains any elements, i.e. the client possesses features
  374. that the previous client did not have and those new features are enabled in the repository,
  375. the cache is wiped out and rebuilt.
  376. While the former condition likely requires no tweaks, the latter condition is formulated
  377. in an especially conservative way to play it safe. It seems likely that specific features
  378. might be exempted from the latter condition.
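Expressed as set operations (a sketch under the assumptions above, not Borg's actual
code), the two sets and the two wipe conditions look like this:

.. code-block:: python

    # manifest_mandatory: all mandatory features from all operations in the manifest
    # client_features:    features supported by the client that wrote the cache
    def cache_feature_sets(manifest_mandatory, client_features):
        mandatory_features = manifest_mandatory & client_features
        ignored_features = manifest_mandatory - client_features
        return mandatory_features, ignored_features

    def cache_must_be_wiped(mandatory_features, ignored_features, current_client_features):
        if not mandatory_features <= current_client_features:
            return True     # cache was built with a feature this client does not support
        if ignored_features & current_client_features:
            return True     # this client supports a feature the cache-writing client ignored
        return False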
  379. .. rubric:: Defined feature flags
Currently, no feature flags are defined.

Some examples from currently planned features follow; these may or may not be
implemented and serve purely as examples.
  383. - A mandatory *read* feature could be using a different encryption scheme (e.g. session keys).
  384. This may not be mandatory for the *write* operation - reading data is not strictly required for
  385. creating an archive.
  386. - Any additions to the way chunks are referenced (e.g. to support larger archives) would
  387. become a mandatory *delete* and *check* feature; *delete* implies knowing correct
  388. reference counts, so all object references need to be understood. *check* must
  389. discover the entire object graph as well, otherwise the "orphan chunks check"
  390. could delete data still in use.
  391. .. _archive:
  392. Archives
  393. ~~~~~~~~
  394. Each archive is an object referenced by the manifest. The archive object
  395. itself does not store any of the data contained in the archive it describes.
  396. Instead, it contains a list of chunks which form a msgpacked stream of items_.
  397. The archive object itself further contains some metadata:
  398. * *version*
  399. * *name*, which might differ from the name set in the manifest.
  400. When :ref:`borg_check` rebuilds the manifest (e.g. if it was corrupted) and finds
  401. more than one archive object with the same name, it adds a counter to the name
  402. in the manifest, but leaves the *name* field of the archives as it was.
  403. * *item_ptrs*, a list of "pointer chunk" IDs.
  404. Each "pointer chunk" contains a list of chunk IDs of item metadata.
  405. * *command_line*, the command line which was used to create the archive
  406. * *hostname*
  407. * *username*
  408. * *time* and *time_end* are the start and end timestamps, respectively
  409. * *comment*, a user-specified archive comment
  410. * *chunker_params* are the :ref:`chunker-params <chunker-params>` used for creating the archive.
  411. This is used by :ref:`borg_recreate` to determine whether a given archive needs rechunking.
  412. * Some other pieces of information related to recreate.
  413. .. _item:
  414. Items
  415. ~~~~~
  416. Each item represents a file, directory or other file system item and is stored as a
  417. dictionary created by the ``Item`` class that contains:
  418. * path
  419. * list of data chunks (size: count * ~40B)
  420. * user
  421. * group
  422. * uid
  423. * gid
  424. * mode (item type + permissions)
  425. * source (for symlinks)
  426. * hlid (for hardlinks)
  427. * rdev (for device files)
  428. * mtime, atime, ctime, birthtime in nanoseconds
  429. * xattrs
  430. * acl (various OS-dependent fields)
  431. * flags
  432. All items are serialized using msgpack and the resulting byte stream
  433. is fed into the same chunker algorithm as used for regular file data
  434. and turned into deduplicated chunks. The reference to these chunks is then added
  435. to the archive metadata. To achieve a finer granularity on this metadata
  436. stream, we use different chunker params for this chunker, which result in
  437. smaller chunks.
  438. A chunk is stored as an object as well, of course.
  439. .. _chunks:
  440. .. _chunker_details:
  441. Chunks
  442. ~~~~~~
  443. Borg has these chunkers:
  444. - "fixed": a simple, low cpu overhead, fixed blocksize chunker, optionally
  445. supporting a header block of different size.
  446. - "buzhash": variable, content-defined blocksize, uses a rolling hash
  447. computed by the Buzhash_ algorithm.
  448. For some more general usage hints see also ``--chunker-params``.
  449. "fixed" chunker
  450. +++++++++++++++
The fixed chunker triggers (chunks) at evenly spaced offsets, e.g. every 4MiB,
producing chunks of the same block size (the last chunk is not required to be
full-size).
  454. Optionally, it supports processing a differently sized "header" first, before
  455. it starts to cut chunks of the desired block size.
  456. The default is not to have a differently sized header.
  457. ``borg create --chunker-params fixed,BLOCK_SIZE[,HEADER_SIZE]``
  458. - BLOCK_SIZE: no default value, multiple of the system page size (usually 4096
  459. bytes) recommended. E.g.: 4194304 would cut 4MiB sized chunks.
  460. - HEADER_SIZE: optional, defaults to 0 (no header).
  461. The fixed chunker also supports processing sparse files (reading only the ranges
  462. with data and seeking over the empty hole ranges).
  463. ``borg create --sparse --chunker-params fixed,BLOCK_SIZE[,HEADER_SIZE]``
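The cutting behaviour can be sketched as follows (illustrative only; the real chunker
works on streams and handles sparse ranges):

.. code-block:: python

    # Illustrative sketch of fixed-size chunk boundaries with an optional header block.
    def fixed_chunk_ranges(file_size, block_size, header_size=0):
        if header_size:
            yield 0, header_size                            # differently sized header chunk first
        pos = header_size
        while pos < file_size:
            yield pos, min(block_size, file_size - pos)     # last chunk may be smaller
            pos += block_size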
  464. "buzhash" chunker
  465. +++++++++++++++++
  466. The buzhash chunker triggers (chunks) when the last HASH_MASK_BITS bits of the
  467. hash are zero, producing chunks with a target size of 2^HASH_MASK_BITS bytes.
Buzhash is **only** used for cutting the chunks at places defined by the
content; the buzhash value is **not** used as the deduplication criterion (we
use a cryptographically strong hash/MAC over the chunk contents for this, the
id_hash).
The idea of content-defined chunking is to assign a hash to every byte where a
cut *could* be placed. The hash is based on some number of bytes
(the window size) before the byte in question. Chunks are cut
where the hash satisfies some condition
(usually "n trailing/leading zero bits"). This causes chunks to be cut
in the same location relative to the file's contents, even if bytes are inserted
or removed before/after a cut, as long as the bytes within the window stay the same.
This results in a high chance that a single cluster of changes to a file will only
result in 1-2 new chunks, aiding deduplication.
Using normal hash functions this would be extremely slow,
requiring hashing approximately ``window size * file size`` bytes.
A rolling hash is used instead, which allows adding a new input byte and
computing a new hash as well as *removing* a previously added input byte
from the computed hash. This makes the cost of computing a hash for each
input byte largely independent of the window size.
Borg defines minimum and maximum chunk sizes (CHUNK_MIN_EXP and CHUNK_MAX_EXP, respectively),
which narrow down where cuts may be made, greatly reducing the amount of data
that is actually hashed for content-defined chunking.
  490. ``borg create --chunker-params buzhash,CHUNK_MIN_EXP,CHUNK_MAX_EXP,HASH_MASK_BITS,HASH_WINDOW_SIZE``
  491. can be used to tune the chunker parameters, the default is:
  492. - CHUNK_MIN_EXP = 19 (minimum chunk size = 2^19 B = 512 kiB)
  493. - CHUNK_MAX_EXP = 23 (maximum chunk size = 2^23 B = 8 MiB)
  494. - HASH_MASK_BITS = 21 (target chunk size ~= 2^21 B = 2 MiB)
  495. - HASH_WINDOW_SIZE = 4095 [B] (`0xFFF`)
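Conceptually, the cut decision for each input position looks like this (a sketch, not
the actual C implementation; the rolling hash update itself is omitted):

.. code-block:: python

    # Conceptual cut condition: cut where the rolling hash has HASH_MASK_BITS
    # trailing zero bits, bounded by the minimum and maximum chunk sizes.
    def should_cut(rolling_hash, chunk_length,
                   chunk_min_exp=19, chunk_max_exp=23, hash_mask_bits=21):
        if chunk_length < 2 ** chunk_min_exp:
            return False                               # below minimum chunk size
        if chunk_length >= 2 ** chunk_max_exp:
            return True                                # force a cut at maximum chunk size
        return rolling_hash & ((1 << hash_mask_bits) - 1) == 0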
  496. The buzhash table is altered by XORing it with a seed randomly generated once
  497. for the repository, and stored encrypted in the keyfile. This is to prevent
  498. chunk size based fingerprinting attacks on your encrypted repo contents (to
  499. guess what files you have based on a specific set of chunk sizes).
  500. .. _cache:
  501. The cache
  502. ---------
  503. The **files cache** is stored in ``cache/files`` and is used at backup time to
  504. quickly determine whether a given file is unchanged and we have all its chunks.
  505. In memory, the files cache is a key -> value mapping (a Python *dict*) and contains:
* key: id_hash of the encoded, absolute file path
* value:

  - file inode number
  - file size
  - file ctime_ns (or mtime_ns)
  - age (0 [newest], 1, 2, 3, ..., BORG_FILES_CACHE_TTL - 1)
  - list of chunk ids representing the file's contents
To determine whether a file has not changed, cached values are looked up via
the key in the mapping and compared to the current file attribute values.

If the file's size, timestamp and inode number are still the same, it is
considered not to have changed. In that case, we check that all file content
chunks are (still) present in the repository (we check that via the chunks
cache).

If everything matches and all chunks are present, the file is not read /
chunked / hashed again (but a file metadata item is still written to the
archive, made from fresh file metadata read from the filesystem). This is
what makes borg so fast when processing unchanged files.

If there is a mismatch or a chunk is missing, the file is read / chunked /
hashed. Chunks already present in the repository won't be transferred to it again.
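A simplified sketch of this lookup (illustrative; the real code also handles the age
counter, the ctime/mtime choice and the ``--files-cache`` options):

.. code-block:: python

    # Illustrative sketch of the files cache check, not Borg's actual code.
    from collections import namedtuple

    FileCacheEntry = namedtuple('FileCacheEntry', 'inode size ctime_ns age chunk_ids')

    def file_is_unchanged(files_cache, chunks_cache, path_hash, st):
        entry = files_cache.get(path_hash)
        if entry is None:
            return False                               # unknown file -> read / chunk / hash
        if (entry.size, entry.ctime_ns, entry.inode) != (st.st_size, st.st_ctime_ns, st.st_ino):
            return False                               # metadata mismatch -> read / chunk / hash
        # all content chunks must still be present in the repository (checked via the chunks cache)
        return all(chunk_id in chunks_cache for chunk_id in entry.chunk_ids)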
  525. The inode number is stored and compared to make sure we distinguish between
  526. different files, as a single path may not be unique across different
  527. archives in different setups.
Not all filesystems have stable inode numbers. If that is the case, borg can
be told to ignore the inode number in the check via ``--files-cache``.
  530. The age value is used for cache management. If a file is "seen" in a backup
  531. run, its age is reset to 0, otherwise its age is incremented by one.
  532. If a file was not seen in BORG_FILES_CACHE_TTL backups, its cache entry is
  533. removed. See also: :ref:`always_chunking` and :ref:`a_status_oddity`
The files cache is a Python dictionary, storing Python objects, which
generates a lot of overhead.
  536. Borg can also work without using the files cache (saves memory if you have a
  537. lot of files or not much RAM free), then all files are assumed to have changed.
  538. This is usually much slower than with files cache.
  539. The on-disk format of the files cache is a stream of msgpacked tuples (key, value).
  540. Loading the files cache involves reading the file, one msgpack object at a time,
  541. unpacking it, and msgpacking the value (in an effort to save memory).
  542. The **chunks cache** is stored in ``cache/chunks`` and is used to determine
  543. whether we already have a specific chunk, to count references to it and also
  544. for statistics.
  545. The chunks cache is a key -> value mapping and contains:
* key:

  - chunk id_hash

* value:

  - reference count
  - size
  551. The chunks cache is a HashIndex_. Due to some restrictions of HashIndex,
  552. the reference count of each given chunk is limited to a constant, MAX_VALUE
  553. (introduced below in HashIndex_), approximately 2**32.
  554. If a reference count hits MAX_VALUE, decrementing it yields MAX_VALUE again,
  555. i.e. the reference count is pinned to MAX_VALUE.
  556. .. _cache-memory-usage:
  557. Indexes / Caches memory usage
  558. -----------------------------
Here is the estimated memory usage of Borg - it's complicated::

    chunk_size ~= 2 ^ HASH_MASK_BITS  (for buzhash chunker, BLOCK_SIZE for fixed chunker)
    chunk_count ~= total_file_size / chunk_size

    repo_index_usage = chunk_count * 48

    chunks_cache_usage = chunk_count * 40

    files_cache_usage = total_file_count * 240 + chunk_count * 80

    mem_usage ~= repo_index_usage + chunks_cache_usage + files_cache_usage
               = chunk_count * 168 + total_file_count * 240
Due to the hashtables, the best/usual/worst cases for memory allocation can
be estimated like that::

    mem_allocation = mem_usage / load_factor  # l_f = 0.25 .. 0.75
    mem_allocation_peak = mem_allocation * (1 + growth_factor)  # g_f = 1.1 .. 2
  571. All units are Bytes.
It is assuming every chunk is referenced exactly once (if you have a lot of
duplicate chunks, you will have fewer chunks than estimated above).

It is also assuming that the typical chunk size is 2^HASH_MASK_BITS (if you have
a lot of files smaller than this typical chunk size, you will have
more chunks than estimated above, because one file is at least one chunk).
  577. If a remote repository is used the repo index will be allocated on the remote side.
  578. The chunks cache, files cache and the repo index are all implemented as hash
tables. A hash table must have a significant amount of unused entries to be
fast - the so-called load factor is the ratio of used to total buckets.
When a hash table gets full (load factor getting too high), it needs to be
grown (allocate a new, bigger hash table, copy all elements over to it, free the old
hash table) - this leads to short-lived peaks in memory usage each time it
happens. This usually does not happen for all hash tables at the same time, though.
For small hash tables, we start with a growth factor of 2, which comes down to
~1.1x for big hash tables.
E.g. backing up a total count of 1 Mi (IEC binary prefix, i.e. 2^20) files with a total size of 1 TiB:

a) with ``create --chunker-params buzhash,10,23,16,4095`` (custom)::

    mem_usage = 2.8GiB

b) with ``create --chunker-params buzhash,19,23,21,4095`` (default)::

    mem_usage = 0.31GiB
.. note:: There is also the ``--files-cache=disabled`` option to disable the files cache.
   You'll save some memory, but borg will need to read / chunk all files then, as
   it cannot skip unmodified files.
  595. HashIndex
  596. ---------
  597. The chunks cache and the repository index are stored as hash tables, with
  598. only one slot per bucket, spreading hash collisions to the following
  599. buckets. As a consequence the hash is just a start position for a linear
  600. search. If a key is looked up that is not in the table, then the hash table
  601. is searched from the start position (the hash) until the first empty
  602. bucket is reached.
  603. This particular mode of operation is open addressing with linear probing.
When the hash table is filled to 75%, its size is grown. When it is
emptied to 25%, its size is shrunk. Operations on it have a variable
  606. complexity between constant and linear with low factor, and memory overhead
  607. varies between 33% and 300%.
  608. If an element is deleted, and the slot behind the deleted element is not empty,
  609. then the element will leave a tombstone, a bucket marked as deleted. Tombstones
  610. are only removed by insertions using the tombstone's bucket, or by resizing
  611. the table. They present the same load to the hash table as a real entry,
  612. but do not count towards the regular load factor.
  613. Thus, if the number of empty slots becomes too low (recall that linear probing
  614. for an element not in the index stops at the first empty slot), the hash table
  615. is rebuilt. The maximum *effective* load factor, i.e. including tombstones, is 93%.
  616. Data in a HashIndex is always stored in little-endian format, which increases
  617. efficiency for almost everyone, since basically no one uses big-endian processors
  618. any more.
HashIndex does not use a hashing function, because all keys (except the manifest key) are
outputs of a cryptographic hash or MAC and thus already have excellent distribution.
Thus, HashIndex simply uses the first 32 bits of the key as its "hash".
  622. The format is easy to read and write, because the buckets array has the same layout
  623. in memory and on disk. Only the header formats differ. The on-disk header is
  624. ``struct HashHeader``:
  625. - First, the HashIndex magic, the eight byte ASCII string "BORG_IDX".
  626. - Second, the signed 32-bit number of entries (i.e. buckets which are not deleted and not empty).
  627. - Third, the signed 32-bit number of buckets, i.e. the length of the buckets array
  628. contained in the file, and the modulus for index calculation.
  629. - Fourth, the signed 8-bit length of keys.
  630. - Fifth, the signed 8-bit length of values. This has to be at least four bytes.
  631. All fields are packed.
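Assuming the field sizes listed above and little-endian byte order (per the endianness
note above), the on-disk header could be read like this (a sketch, not Borg's actual code):

.. code-block:: python

    # Sketch of parsing the packed on-disk HashHeader ('i' = signed 32-bit,
    # 'b' = signed 8-bit, little-endian). Not Borg's actual code.
    import struct

    HASH_HEADER = struct.Struct('<8siibb')   # magic, num entries, num buckets, key length, value length

    def read_hash_header(f):
        magic, entries, buckets, key_len, value_len = HASH_HEADER.unpack(f.read(HASH_HEADER.size))
        assert magic == b'BORG_IDX'
        return entries, buckets, key_len, value_len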
  632. The HashIndex is *not* a general purpose data structure.
  633. The value size must be at least 4 bytes, and these first bytes are used for in-band
  634. signalling in the data structure itself.
  635. The constant MAX_VALUE (defined as 2**32-1025 = 4294966271) defines the valid range for
  636. these 4 bytes when interpreted as an uint32_t from 0 to MAX_VALUE (inclusive).
  637. The following reserved values beyond MAX_VALUE are currently in use (byte order is LE):
  638. - 0xffffffff marks empty buckets in the hash table
  639. - 0xfffffffe marks deleted buckets in the hash table
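Put together, a lookup in such a table can be sketched like this (buckets modelled as
(key, value) pairs; in the real HashIndex the reserved values live in the first four
bytes of the value):

.. code-block:: python

    # Illustrative sketch of open addressing with linear probing, not Borg's actual code.
    EMPTY, DELETED = 0xFFFFFFFF, 0xFFFFFFFE

    def lookup(buckets, key):
        idx = int.from_bytes(key[:4], 'little') % len(buckets)   # first 32 bits of the key are the "hash"
        while True:
            bucket_key, value = buckets[idx]
            if value == EMPTY:
                return None                          # first empty bucket ends the search
            if value != DELETED and bucket_key == key:
                return value
            idx = (idx + 1) % len(buckets)           # linear probing: try the next bucket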
  640. HashIndex is implemented in C and wrapped with Cython in a class-based interface.
  641. The Cython wrapper checks every passed value against these reserved values and
  642. raises an AssertionError if they are used.
  643. .. _data-encryption:
  644. Encryption
  645. ----------
  646. .. seealso:: The :ref:`borgcrypto` section for an in-depth review.
  647. AEAD modes
  648. ~~~~~~~~~~
For new repositories, borg only uses modern AEAD ciphers: AES-OCB or CHACHA20-POLY1305.
For each borg invocation, a new session key is derived from the borg key material
and the 48-bit IV starts from 0 again (both ciphers internally add a 32-bit counter
to our IV, so we just count up by 1 per chunk).
  653. The encryption layout is best seen at the bottom of this diagram:
.. figure:: encryption-aead.png
   :figwidth: 100%
   :width: 100%
  657. No special IV/counter management is needed here due to the use of session keys.
A 48-bit IV is way more than needed: if you only backed up 4 kiB chunks (2^12 B),
the IV would "limit" the data encrypted in one session to 2^(12+48) B == about 1.2 exabytes,
meaning you would run into other limitations (RAM, storage, time) way before that.
  661. In practice, chunks are usually bigger, for big files even much bigger, giving an
  662. even higher limit.
  663. Legacy modes
  664. ~~~~~~~~~~~~
  665. Old repositories (which used AES-CTR mode) are supported read-only to be able to
  666. ``borg transfer`` their archives to new repositories (which use AEAD modes).
  667. AES-CTR mode is not supported for new repositories and the related code will be
  668. removed in a future release.
  669. Both modes
  670. ~~~~~~~~~~
  671. Encryption keys (and other secrets) are kept either in a key file on the client
  672. ('keyfile' mode) or in the repository config on the server ('repokey' mode).
In both cases, the secrets are randomly generated and then encrypted by a
key derived from your passphrase (this happens on the client before the key
is stored into the keyfile or as repokey).
  676. The passphrase is passed through the ``BORG_PASSPHRASE`` environment variable
  677. or prompted for interactive usage.
  678. .. _key_files:
  679. Key files
  680. ---------
  681. .. seealso:: The :ref:`key_encryption` section for an in-depth review of the key encryption.
  682. When initializing a repository with one of the "keyfile" encryption modes,
  683. Borg creates an associated key file in ``$HOME/.config/borg/keys``.
  684. The same key is also used in the "repokey" modes, which store it in the repository
  685. in the configuration file.
  686. The internal data structure is as follows:
version
    currently always an integer, 2

repository_id
    the ``id`` field in the ``config`` ``INI`` file of the repository.

crypt_key
    the initial key material used for the AEAD crypto (512 bits)

id_key
    the key used to MAC the plaintext chunk data to compute the chunk's id

chunk_seed
    the seed for the buzhash chunking table (signed 32 bit integer)
  697. These fields are packed using msgpack_. The utf-8 encoded passphrase
  698. is processed with argon2_ to derive a 256 bit key encryption key (KEK).
  699. Then the KEK is used to encrypt and authenticate the packed data using
  700. the chacha20-poly1305 AEAD cipher.
The result is stored in another msgpack_ structure, formatted as follows:

version
    currently always an integer, 1

salt
    a random 256 bit salt used to process the passphrase

argon2_*
    some parameters for the argon2 kdf

algorithm
    the algorithms used to process the passphrase
    (currently the string ``argon2 chacha20-poly1305``)

data
    the encrypted, packed fields.
  713. The resulting msgpack_ is then encoded using base64 and written to the
  714. key file, wrapped using the standard ``textwrap`` module with a header.
  715. The header is a single line with a MAGIC string, a space and a hexadecimal
  716. representation of the repository id.
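The outer layout can be illustrated with a short sketch (not Borg's actual loader;
decryption of the inner fields is left out and error handling is omitted):

.. code-block:: python

    # Illustrative sketch of reading the outer key file layout described above.
    import base64
    import msgpack

    def load_key_file(path):
        with open(path) as f:
            header, body = f.read().split('\n', 1)
        magic, repo_id = header.split(' ', 1)             # MAGIC string + hex repository id
        outer = msgpack.unpackb(base64.b64decode(body))   # version, salt, argon2_*, algorithm, data
        return magic, repo_id, outer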
  717. .. _data-compression:
  718. Compression
  719. -----------
  720. Borg supports the following compression methods, each identified by a ctype value
  721. in the range between 0 and 255 (and augmented by a clevel 0..255 value for the
  722. compression level):
  723. - none (no compression, pass through data 1:1), identified by 0x00
  724. - lz4 (low compression, but super fast), identified by 0x01
  725. - zstd (level 1-22 offering a wide range: level 1 is lower compression and high
  726. speed, level 22 is higher compression and lower speed) - identified by 0x03
  727. - zlib (level 0-9, level 0 is no compression [but still adding zlib overhead],
  728. level 1 is low, level 9 is high compression), identified by 0x05
  729. - lzma (level 0-9, level 0 is low, level 9 is high compression), identified
  730. by 0x02.
  731. The type byte is followed by a byte indicating the compression level.
  732. Speed: none > lz4 > zlib > lzma, lz4 > zstd
  733. Compression: lzma > zlib > lz4 > none, zstd > lz4
  734. Be careful, higher compression levels might use a lot of resources (CPU/memory).
  735. The overall speed of course also depends on the speed of your target storage.
  736. If that is slow, using a higher compression level might yield better overall
  737. performance. You need to experiment a bit. Maybe just watch your CPU load, if
  738. that is relatively low, increase compression until 1 core is 70-100% loaded.
Even if your target storage is rather fast, you might see interesting effects:
while doing no compression at all (none) is an operation that takes almost no time, it
  741. likely will need to store more data to the storage compared to using lz4.
  742. The time needed to transfer and store the additional data might be much more
  743. than if you had used lz4 (which is super fast, but still might compress your
  744. data about 2:1). This is assuming your data is compressible (if you back up
  745. already compressed data, trying to compress them at backup time is usually
  746. pointless).
  747. Compression is applied after deduplication, thus using different compression
  748. methods in one repo does not influence deduplication.
  749. See ``borg create --help`` about how to specify the compression level and its default.
  750. Lock files
  751. ----------
  752. Borg uses locks to get (exclusive or shared) access to the cache and
  753. the repository.
  754. The locking system is based on renaming a temporary directory
  755. to `lock.exclusive` (for
  756. exclusive locks). Inside this directory, there is a file indicating
  757. hostname, process id and thread id of the lock holder.
There is also a JSON file, `lock.roster`, that keeps a registry of all shared
and exclusive lock holders.
  760. If the process is able to rename a temporary directory (with the
  761. host/process/thread identifier prepared inside it) in the resource directory
  762. to `lock.exclusive`, it has the lock for it. If renaming fails
  763. (because this directory already exists and its host/process/thread identifier
  764. denotes a thread on the host which is still alive), lock acquisition fails.
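A conceptual sketch of the exclusive-lock acquisition (heavily simplified; Borg's actual
implementation also maintains `lock.roster` for shared locks, checks whether the lock
holder is still alive, and retries):

.. code-block:: python

    # Conceptual sketch only - not Borg's actual locking code.
    import os, socket, threading

    def try_acquire_exclusive(resource_dir):
        ident = f'{socket.gethostname()}.{os.getpid()}-{threading.get_ident()}'
        tmp = os.path.join(resource_dir, f'lock.tmp-{os.getpid()}')
        os.mkdir(tmp)
        open(os.path.join(tmp, ident), 'w').close()       # host/process/thread marker file
        try:
            os.rename(tmp, os.path.join(resource_dir, 'lock.exclusive'))
            return True                                   # rename succeeded -> we hold the lock
        except OSError:
            os.remove(os.path.join(tmp, ident))
            os.rmdir(tmp)
            return False                                  # someone else holds the lock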
  765. The cache lock is usually in `~/.cache/borg/REPOID/lock.*`.
  766. The repository lock is in `repository/lock.*`.
In case you run into trouble with the locks, you can use the ``borg break-lock``
command after you have first made sure that no Borg process is
  769. running on any machine that accesses this resource. Be very careful, the cache
  770. or repository might get damaged if multiple processes use it at the same time.
  771. Checksumming data structures
  772. ----------------------------
  773. As detailed in the previous sections, Borg generates and stores various files
containing important metadata, such as the repository index, repository hints,
  775. chunks caches and files cache.
  776. Data corruption in these files can damage the archive data in a repository,
  777. e.g. due to wrong reference counts in the chunks cache. Only some parts of Borg
  778. were designed to handle corrupted data structures, so a corrupted files cache
  779. may cause crashes or write incorrect archives.
  780. Therefore, Borg calculates checksums when writing these files and tests checksums
  781. when reading them. Checksums are generally 64-bit XXH64 hashes.
  782. The canonical xxHash representation is used, i.e. big-endian.
  783. Checksums are stored as hexadecimal ASCII strings.
  784. For compatibility, checksums are not required and absent checksums do not trigger errors.
  785. The mechanisms have been designed to avoid false-positives when various Borg
  786. versions are used alternately on the same repositories.
  787. Checksums are a data safety mechanism. They are not a security mechanism.
  788. .. rubric:: Choice of algorithm
  789. XXH64 has been chosen for its high speed on all platforms, which avoids performance
  790. degradation in CPU-limited parts (e.g. cache synchronization).
  791. Unlike CRC32, it neither requires hardware support (crc32c or CLMUL)
  792. nor vectorized code nor large, cache-unfriendly lookup tables to achieve good performance.
  793. This simplifies deployment of it considerably (cf. src/borg/algorithms/crc32...).
  794. Further, XXH64 is a non-linear hash function and thus has a "more or less" good
  795. chance to detect larger burst errors, unlike linear CRCs where the probability
  796. of detection decreases with error size.
  797. The 64-bit checksum length is considered sufficient for the file sizes typically
  798. checksummed (individual files up to a few GB, usually less).
  799. xxHash was expressly designed for data blocks of these sizes.
  800. Lower layer — file_integrity
  801. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  802. To accommodate the different transaction models used for the cache and repository,
  803. there is a lower layer (borg.crypto.file_integrity.IntegrityCheckedFile)
  804. wrapping a file-like object, performing streaming calculation and comparison of checksums.
  805. Checksum errors are signalled by raising an exception (borg.crypto.file_integrity.FileIntegrityError)
  806. at the earliest possible moment.
  807. .. rubric:: Calculating checksums
  808. Before feeding the checksum algorithm any data, the file name (i.e. without any path)
  809. is mixed into the checksum, since the name encodes the context of the data for Borg.
  810. The various indices used by Borg have separate header and main data parts.
  811. IntegrityCheckedFile allows borg to checksum them independently, which avoids
  812. even reading the data when the header is corrupted. When a part is signalled,
  813. the length of the part name is mixed into the checksum state first (encoded
  814. as an ASCII string via `%10d` printf format), then the name of the part
is mixed in as a UTF-8 string. Lastly, the current position (length)
  816. in the file is mixed in as well.
  817. The checksum state is not reset at part boundaries.
  818. A final checksum is always calculated in the same way as the parts described above,
  819. after seeking to the end of the file. The final checksum cannot prevent code
  820. from processing corrupted data during reading, however, it prevents use of the
  821. corrupted data.
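As a sketch (using the third-party ``xxhash`` module; the exact encoding of the position
is an assumption here, the description above only specifies `%10d` for the length of the
part name):

.. code-block:: python

    # Hypothetical sketch of mixing a part name into the running checksum state.
    import xxhash

    def begin_part(hasher, part_name: str, position: int):
        hasher.update(b'%10d' % len(part_name))    # length of the part name, %10d-formatted
        hasher.update(part_name.encode('utf-8'))   # the part name itself, as UTF-8
        hasher.update(b'%10d' % position)          # current position in the file (encoding assumed)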
  822. .. rubric:: Serializing checksums
  823. All checksums are compiled into a simple JSON structure called *integrity data*:
.. code-block:: json

    {
        "algorithm": "XXH64",
        "digests": {
            "HashHeader": "eab6802590ba39e3",
            "final": "e2a7f132fc2e8b24"
        }
    }
  832. The *algorithm* key notes the used algorithm. When reading, integrity data containing
  833. an unknown algorithm is not inspected further.
  834. The *digests* key contains a mapping of part names to their digests.
  835. Integrity data is generally stored by the upper layers, introduced below. An exception
  836. is the DetachedIntegrityCheckedFile, which automatically writes and reads it from
  837. a ".integrity" file next to the data file.
  838. It is used for archive chunks indexes in chunks.archive.d.
  839. Upper layer
  840. ~~~~~~~~~~~
  841. Storage of integrity data depends on the component using it, since they have
  842. different transaction mechanisms, and integrity data needs to be
  843. transacted with the data it is supposed to protect.
  844. .. rubric:: Main cache files: chunks and files cache
  845. The integrity data of the ``chunks`` and ``files`` caches is stored in the
  846. cache ``config``, since all three are transacted together.
  847. The ``[integrity]`` section is used:
.. code-block:: ini

    [cache]
    version = 1
    repository = 3c4...e59
    manifest = 10e...21c
    timestamp = 2017-06-01T21:31:39.699514
    key_type = 2
    previous_location = /path/to/repo

    [integrity]
    manifest = 10e...21c
    chunks = {"algorithm": "XXH64", "digests": {"HashHeader": "eab...39e3", "final": "e2a...b24"}}
The manifest ID is duplicated in the integrity section due to the way all Borg
versions handle the config file. Instead of creating a "new" config file from
an internal representation containing only the data understood by Borg,
the config file is read in its entirety (using the Python ConfigParser) and modified.
This preserves all sections and values not understood by the Borg version
modifying it.

Thus, if an older version uses a cache with integrity data, it would preserve
the integrity section and its contents. If an integrity-aware Borg version
then reads this cache, it would incorrectly report checksum errors, since
the older version did not update the checksums.
  869. However, by duplicating the manifest ID in the integrity section, it is
  870. easy to tell whether the checksums concern the current state of the cache.
  871. Integrity errors are fatal in these files, terminating the program,
  872. and are not automatically corrected at this time.
  873. .. rubric:: chunks.archive.d
  874. Indices in chunks.archive.d are not transacted and use DetachedIntegrityCheckedFile,
  875. which writes the integrity data to a separate ".integrity" file.
  876. Integrity errors result in deleting the affected index and rebuilding it.
  877. This logs a warning and increases the exit code to WARNING (1).
  878. .. _integrity_repo:
  879. .. rubric:: Repository index and hints
  880. The repository associates index and hints files with a transaction by including the
  881. transaction ID in the file names. Integrity data is stored in a third file
  882. ("integrity.<TRANSACTION_ID>"). Like the hints file, it is msgpacked:
.. code-block:: python

    {
        'version': 2,
        'hints': '{"algorithm": "XXH64", "digests": {"final": "411208db2aa13f1a"}}',
        'index': '{"algorithm": "XXH64", "digests": {"HashHeader": "846b7315f91b8e48", "final": "cb3e26cadc173e40"}}'
    }
  889. The *version* key started at 2, the same version used for the hints. Since Borg has
  890. many versioned file formats, this keeps the number of different versions in use
  891. a bit lower.
The other keys map an auxiliary file, like *index* or *hints*, to its integrity data.
  893. Note that the JSON is stored as-is, and not as part of the msgpack structure.
  894. Integrity errors result in deleting the affected file(s) (index/hints) and rebuilding the index,
  895. which is the same action taken when corruption is noticed in other ways (e.g. HashIndex can
  896. detect most corrupted headers, but not data corruption). A warning is logged as well.
  897. The exit code is not influenced, since remote repositories cannot perform that action.
  898. Raising the exit code would be possible for local repositories, but is not implemented.
  899. Unlike the cache design this mechanism can have false positives whenever an older version
  900. *rewrites* the auxiliary files for a transaction created by a newer version,
  901. since that might result in a different index (due to hash-table resizing) or hints file
  902. (hash ordering, or the older version 1 format), while not invalidating the integrity file.
  903. For example, using 1.1 on a repository, noticing corruption or similar issues and then running
  904. ``borg-1.0 check --repair``, which rewrites the index and hints, results in this situation.
  905. Borg 1.1 would erroneously report checksum errors in the hints and/or index files and trigger
  906. an automatic rebuild of these files.
  907. HardLinkManager and the hlid concept
  908. ------------------------------------
  909. Dealing with hard links needs some extra care, implemented in borg within the HardLinkManager
  910. class:
  911. - At archive creation time, fs items with st_nlink > 1 indicate that they are a member of
  912. a group of hardlinks all pointing to the same inode. For such fs items, the archived item
  913. includes a hlid attribute (hardlink id), which is computed like H(st_dev, st_ino). Thus,
  914. if archived items have the same hlid value, they pointed to the same inode and form a
  915. group of hardlinks. Besides that, nothing special is done for any member of the group
  916. of hardlinks, meaning that e.g. for regular files, each archived item will have a
  917. chunks list.
  918. - At extraction time, the presence of a hlid attribute indicates that there might be more
  919. hardlinks coming, pointing to the same content (inode), thus borg will remember the "hlid
  920. to extracted path" mapping, so it will know the correct path for extracting (hardlinking)
  921. the next hardlink of that group / with the same hlid.
  922. - This symmetric approach (each item has all the information, e.g. the chunks list)
  923. simplifies dealing with such items a lot, especially for partial extraction, for the
  924. FUSE filesystem, etc.
- This is different from the asymmetric approach of old borg versions (< 2.0) and also from
  tar, which both have the concept of a main item (the first hardlink, which has the content) and
  content-less secondary items with by-name back references for each subsequent hardlink,
  causing lots of complications when dealing with them.
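The hlid mentioned above could be pictured like this (a sketch; the concrete hash
function H and the exact input encoding are not specified here, BLAKE2b is merely a
placeholder):

.. code-block:: python

    # Illustrative sketch of deriving a hardlink id from (st_dev, st_ino).
    import hashlib
    import os

    def hlid(st: os.stat_result) -> bytes:
        h = hashlib.blake2b(digest_size=32)          # placeholder for H
        h.update(st.st_dev.to_bytes(8, 'little'))
        h.update(st.st_ino.to_bytes(8, 'little'))
        return h.digest()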