Fix test_size_on_disk_accurate for large st_blksize, fixes #7250

python's io.BufferedWriter sizes its buffer based on st_blksize.
If the write fits in this buffer, then it's possible the data from
idx.write() has not been flushed through to the underlying filesystem,
and getsize(fileno()) sees a too-short (or even empty) file.

Also, getsize is only documented as accepting path-like objects;
passing a fileno seems to work only because the implementation
blindly forwards everything through to os.stat without checking.
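
To illustrate (a minimal sketch, not part of this commit; the 1024-byte
write is an arbitrary size, chosen to be well below typical st_blksize
values):

    import os
    import tempfile

    with tempfile.NamedTemporaryFile() as f:
        # NamedTemporaryFile wraps the fd in a buffered file object whose
        # buffer size is derived from st_blksize; a small write will usually
        # sit in that buffer rather than reach the filesystem.
        f.write(b"x" * 1024)
        # getsize() is only documented for path-like arguments; an fd happens
        # to work because it is forwarded to os.stat(). Often prints 0 here,
        # because nothing has been flushed yet.
        print(os.path.getsize(f.fileno()))
        f.flush()
        print(os.path.getsize(f.fileno()))  # 1024 once the buffer is flushed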

Passing unopened_tempfile avoids all three problems (see the sketch after this list):
- on windows, it doesn't rely on re-opening NamedTemporaryFile
  (the issue which led to cc0ad321dc32b78ce2f2f625ae91040fddf3fd8c)
- we're following the documented API of getsize(path-like)
- the file is closed (thus flushed) inside idx.write, before getsize()
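
(Sketch referenced above, not part of the commit: unopened_tempfile is a
helper in borg's testsuite; its actual implementation may differ, but the
idea is roughly the following, with "testfile" as a made-up filename.)

    import os
    import tempfile
    from contextlib import contextmanager

    @contextmanager
    def unopened_tempfile():
        # Yield a path that does not exist yet, inside a throwaway directory,
        # so the caller (here idx.write) opens, writes and closes the file
        # itself; everything is flushed before os.path.getsize(path) runs.
        with tempfile.TemporaryDirectory() as tempdir:
            yield os.path.join(tempdir, "testfile")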

 src/borg/testsuite/hashindex.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

@@ -258,9 +258,9 @@ class HashIndexSizeTestCase(BaseTestCase):
         idx = ChunkIndex()
         for i in range(1234):
             idx[H(i)] = i, i**2
-        with tempfile.NamedTemporaryFile() as file:
-            idx.write(file)
-            size = os.path.getsize(file.fileno())
+        with unopened_tempfile() as filepath:
+            idx.write(filepath)
+            size = os.path.getsize(filepath)
         assert idx.size() == size