Dec 13, 2021
      Avoid large buffer reservations with RAO (#1054) · 5e602c9f
      mvelosob authored
      Refactor the RecallTaskInjector to limit the number of tasks passed
      at once to the TapeReadSingleThread for a tape with RAO, and to
      reserve disk space in smaller batches.
      
      On every call to RecallTaskInjector::synchronousFetch, the injector
      pops jobs from the queue until it holds an amount of work equal to
      the file or byte limit imposed by the RAO implementation, or by the
      values of the BulkRequestRecallMaxBytes and BulkRequestRecallMaxFiles
      config options in /etc/cta/cta-taped.conf.
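
      For illustration only, the two options might look as follows in
      /etc/cta/cta-taped.conf; the "taped" category prefix and the numeric
      values are assumptions for this sketch, not documented defaults:

      # Upper bounds on the recall work injected per request (illustrative values)
      taped BulkRequestRecallMaxFiles 4000
      taped BulkRequestRecallMaxBytes 80000000000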
      
      On every call to RecallTaskInjector::injectBulkRecalls, the
      RecallTaskInjector injects a batch of tasks into the
      TapeReadSingleThread and DiskWriteThreadPool, with each batch capped
      by the same BulkRequestRecallMaxBytes and BulkRequestRecallMaxFiles
      config options.
      
      The disk space reservation is now done once per job batch, instead of
      all the disk space being reserved up front when the jobs are popped.
      
      This prevents tape servers with RAO from reserving a large share of
      the disk buffer up front, where a few drives would claim most of the
      buffer space without being able to fill it fast enough.
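
      A minimal standalone sketch of the per-batch behaviour described
      above; the types and function names are hypothetical stand-ins, not
      the real RecallTaskInjector, TapeReadSingleThread or
      DiskWriteThreadPool interfaces:

      // Illustrative sketch only: hypothetical stand-ins for the CTA classes.
      #include <cstdint>
      #include <iostream>
      #include <vector>

      struct RecallJob {
        std::uint64_t fileSize;  // size of the file to recall from tape
      };

      struct BatchLimits {
        std::uint64_t maxFiles;  // cf. BulkRequestRecallMaxFiles
        std::uint64_t maxBytes;  // cf. BulkRequestRecallMaxBytes
      };

      // Stand-ins for the disk buffer reservation and the task injection.
      void reserveDiskSpace(std::uint64_t bytes) {
        std::cout << "reserving " << bytes << " bytes of disk buffer\n";
      }
      void injectTasks(std::uint64_t nTasks) {
        std::cout << "injecting " << nTasks << " read/write task pairs\n";
      }

      // Walk the fetched jobs in batches bounded by maxFiles/maxBytes and
      // reserve disk space once per batch instead of once for the whole set.
      void injectInBatches(const std::vector<RecallJob>& jobs, const BatchLimits& lim) {
        std::uint64_t batchFiles = 0;
        std::uint64_t batchBytes = 0;
        for (std::size_t i = 0; i < jobs.size(); ++i) {
          ++batchFiles;
          batchBytes += jobs[i].fileSize;
          const bool limitReached = batchFiles >= lim.maxFiles || batchBytes >= lim.maxBytes;
          const bool lastJob = (i + 1 == jobs.size());
          if (limitReached || lastJob) {
            reserveDiskSpace(batchBytes);  // one reservation per batch, not up front
            injectTasks(batchFiles);
            batchFiles = 0;
            batchBytes = 0;
          }
        }
      }

      int main() {
        // Example: 10 jobs of 2 GiB each, batches capped at 4 files / 6 GiB.
        std::vector<RecallJob> jobs(10, RecallJob{2ULL << 30});
        injectInBatches(jobs, BatchLimits{4, 6ULL << 30});
      }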