
fix logging, add some comments

Shutting down logging is problematic, as it is global
and we do multi-threaded execution, e.g. in tests.

Thus, rather than shutting it down, just flush the
important loggers and keep them alive.
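
A minimal sketch of the approach described above (illustration only, not borg's code;
the logger names are taken from the diff below): logging.shutdown() flushes and closes
every registered handler in the process, which is unsafe while other threads may still
log, whereas flushing selected loggers gets pending output out and keeps the handlers
alive.

    import logging

    def flush_only(logger_names=("borg.output.progress", "")):
        # flush, but never close: "" is the root logger, and the handlers
        # stay usable for whatever logs next (e.g. another test thread)
        for name in logger_names:
            for handler in logging.getLogger(name).handlers:
                handler.flush()

    # destructive, process-wide:   logging.shutdown()
    # repeatable, thread-friendly: flush_only()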
Thomas Waldmann 2 years ago
parent
commit
ae97584218
3 changed files with 30 additions and 8 deletions
  1. src/borg/logger.py (+26 -4)
  2. src/borg/remote.py (+1 -1)
  3. src/borg/testsuite/archiver/__init__.py (+3 -3)

src/borg/logger.py (+26 -4)

@@ -28,6 +28,24 @@ The way to use this is as follows:
 
 * what is output on INFO level is additionally controlled by commandline
   flags
+
+Logging setup is a bit complicated in borg, as it needs to work under miscellaneous conditions:
+- purely local, not client/server (easy)
+- client/server: RemoteRepository (the "borg serve" process) writes log records into a global
+  queue, whose contents are then sent to the client side by the main serve loop via the RPC
+  protocol (either over ssh stdout, more directly via process stdout without ssh [used in the
+  tests], or via a socket). On the client side, the log records are fed into the client-side
+  logging system. When remote_repo.close() is called, the server side must send all queued
+  log records via the RPC channel before returning the close() call's return value (as the
+  client will then shut down the connection).
+- progress output is always given to the logger as json (including the plain text inside
+  the json), but is then formatted by the logging system's formatter as either plain text or
+  json, depending on the cli args given (--log-json).
+- tests: potentially running in parallel via pytest-xdist, capturing borg output into a
+  given stream.
+- logging might be short-lived (e.g. when invoking a single borg command via the cli)
+  or long-lived (e.g. borg serve --socket, or when running the tests).
+- logging is global and exists only once per process.
 """
 
 import inspect
@@ -115,10 +133,14 @@ def remove_handlers(logger):
         logger.removeHandler(handler)
 
 
-def teardown_logging():
-    global configured
-    logging.shutdown()
-    configured = False
+def flush_logging():
+    # make sure all log output is flushed,
+    # this is especially important for the "borg serve" RemoteRepository logging:
+    # all log output needs to be sent via the ssh / socket connection before closing it.
+    for logger_name in "borg.output.progress", "":
+        logger = logging.getLogger(logger_name)
+        for handler in logger.handlers:
+            handler.flush()
 
 
 def setup_logging(
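
The client/server bullet in the new docstring describes a queue-forwarding pattern. A
self-contained sketch of that idea follows (an illustration under assumed names, not
borg's actual implementation; the real records travel over ssh stdout or a socket via
the RPC protocol rather than a direct function call):

    import logging
    import logging.handlers
    import queue
    import sys

    log_queue = queue.SimpleQueue()

    # "server" side: route log output into a queue instead of a stream
    server_logger = logging.getLogger("borg.repository")
    server_logger.addHandler(logging.handlers.QueueHandler(log_queue))
    server_logger.setLevel(logging.INFO)
    server_logger.propagate = False
    server_logger.info("repository opened")  # queued, not delivered yet

    # "client" side: an ordinary stream-based logging setup
    client_logger = logging.getLogger("client")
    client_logger.addHandler(logging.StreamHandler(sys.stderr))
    client_logger.setLevel(logging.INFO)

    def send_queued_log(send):
        # drain the queue completely; in borg this has to happen before the
        # ssh/socket connection is closed, or queued records would be lost
        while True:
            try:
                record = log_queue.get_nowait()
            except queue.Empty:
                break
            send({"name": record.name, "level": record.levelno, "message": record.getMessage()})

    def client_handle(msg):
        # stand-in for the receiving end of the RPC channel
        client_logger.log(msg["level"], "%s: %s", msg["name"], msg["message"])

    send_queued_log(client_handle)  # in real life this crosses the ssh/socket boundary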

src/borg/remote.py (+1 -1)

@@ -366,7 +366,7 @@ class RepositoryServer:  # pragma: no cover
         if self.repository is not None:
             self.repository.__exit__(None, None, None)
             self.repository = None
-        borg.logger.teardown_logging()
+        borg.logger.flush_logging()
         self.send_queued_log()
 
     def inject_exception(self, kind):

src/borg/testsuite/archiver/__init__.py (+3 -3)

@@ -21,7 +21,7 @@ from ...constants import *  # NOQA
 from ...helpers import Location
 from ...helpers import EXIT_SUCCESS
 from ...helpers import bin_to_hex
-from ...logger import teardown_logging
+from ...logger import flush_logging
 from ...manifest import Manifest
 from ...remote import RemoteRepository
 from ...repository import Repository
@@ -83,9 +83,9 @@ def exec_cmd(*args, archiver=None, fork=False, exe=None, input=b"", binary_outpu
                 output_text.flush()
                 return e.code, output.getvalue() if binary_output else output.getvalue().decode()
             try:
-                ret = archiver.run(args)
+                ret = archiver.run(args)  # calls setup_logging internally
             finally:
-                teardown_logging()  # usually done via atexit, but we do not exit here
+                flush_logging()  # usually done via atexit, but we do not exit here
             output_text.flush()
             return ret, output.getvalue() if binary_output else output.getvalue().decode()
         finally:
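
The testsuite change relies on the comments above: archiver.run() sets up logging, but
the atexit hook does not run between in-process commands, so pending output has to be
flushed explicitly before the captured stream is read back. A tiny, hypothetical
in-process harness showing the same capture-then-flush pattern (not the real exec_cmd):

    import contextlib
    import io
    import logging

    def run_in_process(func, *args):
        buf = io.StringIO()
        handler = logging.StreamHandler(buf)
        logging.getLogger().addHandler(handler)
        try:
            with contextlib.redirect_stdout(buf):
                ret = func(*args)
        finally:
            handler.flush()  # what flush_logging() does for borg's loggers
            logging.getLogger().removeHandler(handler)
        return ret, buf.getvalue()

    rc, out = run_in_process(lambda: logging.getLogger("demo").warning("captured") or 0)
    print(rc, out)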