1. 18 Jan, 2019 1 commit
  2. 17 Jan, 2019 1 commit
  3. 15 Jan, 2019 1 commit
  4. 10 Jan, 2019 2 commits
  5. 09 Jan, 2019 1 commit
  6. 08 Jan, 2019 2 commits
  7. 05 Jan, 2019 1 commit
  8. 21 Dec, 2018 1 commit
  9. 20 Dec, 2018 1 commit
  10. 18 Dec, 2018 1 commit
  11. 17 Dec, 2018 1 commit
  12. 13 Dec, 2018 2 commits
  13. 12 Dec, 2018 2 commits
  14. 10 Dec, 2018 1 commit
    • Implemented promotion of repack requests from Pending to ToExpand · bc52524e
      Eric Cano authored
      This promotion is controlled so that only a limited number of requests
      are in the ToExpand or Starting state at any point in time. This
      ensures a steady feed of repack file requests to the system while
      preventing an explosion of file-level requests.
      Created a one-round popping from the container (algorithms) with status:
        - Used for repack requests switching from Pending to ToExpand.
      Added ElementStatus to algorithms.
      Implemented the promotion interface in Scheduler and OstoreDb. The
      actual decision is taken at the Scheduler level. The function itself is
      called by the
      Promotion is tested in a unit test.
      Various code maintenance:
      Switched to "using"-based constructor inheritance.
      Fixed privacy of function in cta::range.
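The capped promotion described in this commit can be sketched as follows. The enum values, the `RepackRequest` struct, and the `promotePending` helper are hypothetical stand-ins for illustration, not the actual CTA types:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical status enum and request struct; the real CTA types differ.
enum class Status { Pending, ToExpand, Starting, Running };

struct RepackRequest { Status status; };

// Promote Pending requests to ToExpand, keeping the number of requests
// in the ToExpand or Starting state below maxInFlight at any time.
std::size_t promotePending(std::vector<RepackRequest>& requests,
                           std::size_t maxInFlight) {
  // Count requests already in flight (ToExpand or Starting).
  std::size_t inFlight = 0;
  for (const auto& r : requests)
    if (r.status == Status::ToExpand || r.status == Status::Starting)
      ++inFlight;
  // One-round pass: promote Pending requests until the cap is reached.
  std::size_t promoted = 0;
  for (auto& r : requests) {
    if (inFlight >= maxInFlight) break;
    if (r.status == Status::Pending) {
      r.status = Status::ToExpand;
      ++inFlight;
      ++promoted;
    }
  }
  return promoted;
}
```

The cap bounds the number of requests being expanded concurrently, which is the property the commit message describes.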
  15. 08 Nov, 2018 1 commit
  16. 06 Nov, 2018 1 commit
  17. 19 Oct, 2018 1 commit
  18. 08 Oct, 2018 1 commit
  19. 20 Sep, 2018 1 commit
  20. 10 Sep, 2018 1 commit
  21. 30 Aug, 2018 3 commits
    • Fixed use of wait() instead of get() on promise for reporter. · 5b6b9ff5
      Eric Cano authored
      This prevented exceptions from being passed through as output.
      Also integrated the promise into the reporter class.
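The distinction matters because std::future::wait() only blocks until the shared state is ready, while get() additionally rethrows an exception stored via set_exception(). A minimal illustration (the consumeReport helper is invented for this sketch, not CTA code):

```cpp
#include <exception>
#include <future>
#include <stdexcept>
#include <string>

// Returns the error message observed by the consumer, or "ok".
// wait() alone would let a stored exception go unnoticed;
// get() rethrows it on the consumer side.
std::string consumeReport(bool fail) {
  std::promise<void> p;
  std::future<void> f = p.get_future();
  if (fail)
    p.set_exception(
        std::make_exception_ptr(std::runtime_error("report failed")));
  else
    p.set_value();
  f.wait();            // blocks only; does NOT rethrow
  try {
    f.get();           // rethrows the stored exception, if any
    return "ok";
  } catch (const std::runtime_error& ex) {
    return ex.what();
  }
}
```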
    • Reworked ArchiveRequest job lifecycles. · 391ca9a8
      Eric Cano authored
      Changed the lifecycle of the ArchiveRequest to handle the various
      combinations of several jobs and their respective successes/failures.
      Most notably, the request now holds a reportdecided boolean, which
      is set when deciding to report. This happens when failing to archive
      one copy (first failure), or when all copies are transferred (success
      for all copies).
      Added support for in-mount retries. On failure, the job will be
      requeued (with a chance to pick it up again) in the same session
      if the same-session retries are not exceeded. Otherwise, the job is
      left owned by the session, to be picked up by the garbage collector
      at tape unmount.
      Made the disk reporter generic, dealing with both success and failure.
      Improved mount policy support for queueing.
      Expanded the information available in popped elements from archive queues.
      Added optional parameters to ArchiveRequest::asyncUpdateJobOwner() to
      cover various cases.
      Updated the archive job statuses.
      Clarified the naming of functions (transfer/report failure instead of bare
      failure).
      Updated the garbage collector for the new archive job statuses.
      Added support for report retries and batch reporting in the scheduler.
      Updated obsolete wording in MigrationReportPacker log messages and error
      messages.
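The reportdecided logic described above (report on the first failed copy, or once every copy has been transferred) can be sketched as follows. JobOutcome and reportDecided are hypothetical stand-ins for the real CTA job statuses:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-copy outcome; the real CTA statuses are richer.
enum class JobOutcome { Pending, Success, Failed };

// Reporting is decided on the first failed copy, or once all copies
// succeeded; while any copy is still pending (and none failed), it is not.
bool reportDecided(const std::vector<JobOutcome>& jobs) {
  const bool anyFailed = std::any_of(jobs.begin(), jobs.end(),
      [](JobOutcome j) { return j == JobOutcome::Failed; });
  const bool allSucceeded = std::all_of(jobs.begin(), jobs.end(),
      [](JobOutcome j) { return j == JobOutcome::Success; });
  return anyFailed || allSucceeded;
}
```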
    • Created DiskReportRunner class. · 0654c5cd
      Eric Cano authored
      This class will handle reporting in the same way the GarbageCollector
      class handles garbage collection.
      Adapted the interfaces of the Scheduler and ArchiveRequests to allow
      delegating reporting to the disk report runner.
      This commit is not functional. We still need to:
      - Implement the ToReport/Failed queues interface.
      - Adapt the queueing in the scheduler/ArchiveMount.
      - Implement the popping of jobs to report.
      - Implement the async reporter for the files.
      - Develop the user interface for failed requests.
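A minimal sketch of the planned runner loop, under the assumption that it pops batches of jobs to report much like the GarbageCollector loop does; every name here is invented for illustration, since the commit itself is not yet functional:

```cpp
#include <cstddef>
#include <deque>
#include <string>
#include <vector>

// Hypothetical sketch of the planned DiskReportRunner; the real CTA
// interfaces (Scheduler, ArchiveJob, async reporter) are more involved.
class DiskReportRunner {
public:
  explicit DiskReportRunner(std::deque<std::string>& toReport)
      : m_toReport(toReport) {}

  // One pass: pop up to batchSize jobs to report and report them.
  // Returns how many jobs were reported in this pass.
  std::size_t runOnePass(std::size_t batchSize) {
    std::vector<std::string> batch;
    while (batch.size() < batchSize && !m_toReport.empty()) {
      batch.push_back(m_toReport.front());
      m_toReport.pop_front();
    }
    for (const auto& job : batch)
      m_reported.push_back(job);  // stand-in for the async disk reporter
    return batch.size();
  }

  const std::vector<std::string>& reported() const { return m_reported; }

private:
  std::deque<std::string>& m_toReport;  // stand-in for the ToReport queue
  std::vector<std::string> m_reported;
};
```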
  22. 16 Aug, 2018 1 commit
  23. 14 Aug, 2018 1 commit
  24. 13 Jun, 2018 1 commit
  25. 11 May, 2018 1 commit
    • Split queueing of archive requests into a top and a bottom half. · fdb435a5
      Eric Cano authored
      The top half goes as far as the point where the request is safe in the
      object store. At that point, the bottom half is launched in a new
      thread, and success is returned to the caller. This will enable lower
      latency for users while retaining the same data safety.
      This version is experimental as it spawns an undetermined number of
      threads. A more controlled version with a work queue should be
      implemented in the long run.
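The top/bottom-half split can be sketched as follows, assuming a hypothetical queueRequest helper; as in the commit above, each request spawns its own thread, with no work queue yet:

```cpp
#include <functional>
#include <thread>

// Top half runs synchronously: once it returns, the request is durable
// (safe in the object store) and the caller can be answered with success.
// The bottom half completes the queueing in its own thread.
std::thread queueRequest(std::function<void()> topHalf,
                         std::function<void()> bottomHalf) {
  topHalf();  // request is now safe; caller gets success at this point
  return std::thread(std::move(bottomHalf));  // finish queueing async
}
```

The returned std::thread stands in for the uncontrolled per-request thread the commit describes; a work-queue version would hand the bottom half to a fixed pool instead.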
  26. 02 May, 2018 1 commit
  27. 30 Apr, 2018 1 commit
  28. 26 Mar, 2018 1 commit
  29. 19 Mar, 2018 1 commit
  30. 16 Mar, 2018 1 commit
  31. 28 Feb, 2018 2 commits
  32. 27 Feb, 2018 1 commit
  33. 06 Feb, 2018 1 commit