1. 30 Mar, 2017 1 commit
  2. 25 Feb, 2017 1 commit
  3. 14 Feb, 2017 1 commit
  4. 13 Feb, 2017 1 commit
  5. 09 Feb, 2017 1 commit
  6. 07 Feb, 2017 1 commit
  7. 04 Jan, 2017 2 commits
  8. 15 Dec, 2016 7 commits
  9. 14 Dec, 2016 2 commits
  10. 08 Dec, 2016 1 commit
  11. 02 Dec, 2016 1 commit
  12. 30 Nov, 2016 1 commit
  13. 21 Nov, 2016 1 commit
  14. 16 Nov, 2016 2 commits
  15. 10 Nov, 2016 3 commits
  16. 08 Nov, 2016 3 commits
  17. 07 Nov, 2016 3 commits
  18. 03 Nov, 2016 1 commit
  19. 26 Oct, 2016 1 commit
  20. 20 Oct, 2016 1 commit
      Ported commits from castor/master: · a6c49d63
      Victor Kotlyar authored
      1d92302d5304266ee14a86b8d9fcbd671b567e5c
        Fix for drive::xx::clearCompressionStats
          - For DriveT10000, DriveMHVTL: stateful accumulator
          - For DriveIBM3592, DriveLTO: log select on specific log page
      
      e6780a8f0a5ed6366ae750eaecc189ac7d0d07fe
        Fix overflowing mountTotal{Read,Write}BytesProcessed by bumping up
        to 64-bit container
      
      12c31edf5193dae90ef65eaa6c3d0eb81a7c6927
        Refactor fix for drive::xx::clearCompressionStats
      
      f626a773aae60b36146d3170e4eb2ef5fd13db35
        Merge branch 'fix_clear_compression_stats' into 'master'
        Fix clear compression stats
      
        ## Description
          The aim of this merge request is to fix the
          `drive::clearCompressionStats()` method, which issued a log select
          on page 0x00 and therefore deleted all log metrics (i.e. it reset
          every counter on the drive, not just the compression statistics).
          The following changes are introduced:
            * For **IBM and LTO drives**: we selectively clear only the
              compression statistics (log pages 0x38 and 0x32 respectively)
            * For **Oracle and MHVTL drives**: we make the compression
              statistics stateful, accumulating the values and subtracting
              the baseline on each flush (a rough sketch of both approaches
              follows below)
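          The snippet below is a rough C++ sketch of the two approaches, not
          the actual CASTOR code: the class and helper names are invented for
          illustration, and the LOG SELECT bit layout should be verified
          against SPC-4 and the drive manuals.
            ```cpp
            #include <cstdint>
            #include <cstring>

            // (1) IBM/LTO style: reset only the compression log page with a
            //     SCSI LOG SELECT instead of wiping page 0x00 ("all pages").
            //     issueScsiCommand() is a placeholder for the pass-through call.
            void clearCompressionPage(uint8_t pageCode /* 0x38 IBM, 0x32 LTO */) {
              uint8_t cdb[10];
              std::memset(cdb, 0, sizeof(cdb));
              cdb[0] = 0x4C;                            // LOG SELECT opcode
              cdb[1] = 0x02;                            // PCR=1: reset parameters
              cdb[2] = (0x3 << 6) | (pageCode & 0x3F);  // PC=11b, page to clear
              // issueScsiCommand(cdb, sizeof(cdb));
            }

            // (2) Oracle/MHVTL style: the firmware cannot clear a single page,
            //     so keep a software baseline and report deltas against it.
            class CompressionStatsAccumulator {
            public:
              struct Stats { uint64_t toHost = 0, fromHost = 0,
                             toTape = 0, fromTape = 0; };

              // Called where a destructive clear used to happen: remember the
              // drive's current counters as the new baseline.
              void clear(const Stats &current) { m_baseline = current; }

              // Called on each flush: report only the growth since clear().
              Stats sinceLastClear(const Stats &current) const {
                Stats s;
                s.toHost   = current.toHost   - m_baseline.toHost;
                s.fromHost = current.fromHost - m_baseline.fromHost;
                s.toTape   = current.toTape   - m_baseline.toTape;
                s.fromTape = current.fromTape - m_baseline.fromTape;
                return s;
              }

            private:
              Stats m_baseline;
            };
            ```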
          Moreover, we are piggybacking a fix for the overflowing
          mountTotal{Read,Write}BytesProcessed counters
          (drive::xx::getTape{Read,Write}Errors) by bumping the container map
          up to 64-bit values
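          To make the overflow concrete, here is a minimal, hypothetical
          sketch (the real member is not necessarily a map keyed by strings):
          an unsigned 32-bit counter tops out at about 4 GiB, so a mount that
          writes 40 x 2GB files wraps around several times, while a 64-bit
          counter does not.
            ```cpp
            #include <cstdint>
            #include <map>
            #include <string>

            // Before the fix (illustrative): a 32-bit value wraps after ~4 GiB.
            //   std::map<std::string, uint32_t> counters;
            // After the fix: 64-bit values, effectively overflow-free.
            std::map<std::string, uint64_t> counters;

            void recordWrittenFile(uint64_t fileSizeBytes) {
              counters["mountTotalWriteBytesProcessed"] += fileSizeBytes;
            }
            ```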
      
        ## Testing
          The results of the changes were tested against IBM and Oracle
          drives.
          More specifically, we attempted to write 40 randomly generated files
          of 2GB in size with the previous CASTOR version (2.1.16-9), extracted
          the compression metrics and compared them with the ones reported
          after the fix (see the before/after log excerpts below)
      
        ### IBM
          **before:**
            ```
            Oct  6 12:23:09 tpsrv240 tapeserverd[14257]: LVL="Info" TID="14268"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tps
            Oct  6 12:24:53 tpsrv240 tapeserverd[14257]: LVL="Info" TID="14268"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tps
            Oct  6 12:25:45 tpsrv240 tapeserverd[14257]: LVL="Info" TID="14268"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tps
            ```
          **after:**
            ```
            Oct  6 12:34:28 tpsrv240 tapeserverd[14836]: LVL="Info" TID="14847"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tps
            Oct  6 12:36:12 tpsrv240 tapeserverd[14836]: LVL="Info" TID="14847"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tps
            Oct  6 12:37:04 tpsrv240 tapeserverd[14836]: LVL="Info" TID="14847"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tps
            ```
      
        ### Oracle
          **before:**
            ```
            Oct  6 12:21:44 tpsrv614 tapeserverd[3062]: LVL="Info" TID="3073"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tpsrv
            Oct  6 12:24:02 tpsrv614 tapeserverd[3062]: LVL="Info" TID="3073"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tpsrv
            Oct  6 12:25:12 tpsrv614 tapeserverd[3062]: LVL="Info" TID="3073"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tpsrv
            ```
          **after:**
            ```
            Oct  6 12:34:58 tpsrv614 tapeserverd[3691]: LVL="Info" TID="3702"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tpsrv
            Oct  6 12:37:16 tpsrv614 tapeserverd[3691]: LVL="Info" TID="3702"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tpsrv
            Oct  6 12:38:26 tpsrv614 tapeserverd[3691]: LVL="Info" TID="3702"
            MSG="Reported to the client that a batch of file was written on tape"
            thread="ReportPacker" clientHost="tpsrv
            ```
      See merge request !11
  21. 19 Oct, 2016 2 commits
  22. 17 Oct, 2016 1 commit
  23. 11 Oct, 2016 2 commits