mvelosob authored
During the July data challenge, the archiveJobTransferForUser queue for the r_atlas_test_datachallenge
tapepool became full of ArchiveJobs whose bytes field became zero after being popped from the queue.
This caused the tape servers to pop ~5 TB of work from the queue. To prevent this from happening in
the future, instead of summing the sizes of the individual elements popped from the queue, we now
subtract them from the total size popped. This way, if there are popped jobs that have incorrectly
set their bytes field to zero, the algorithm will consume less data than expected, not exponentially
more.
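The difference between the two accounting schemes can be sketched as follows. This is a minimal, hypothetical model, not the actual CTA code: `Job`, `pop_by_sum`, and `pop_by_subtraction` are illustrative names, and the `sizes` list stands in for per-element sizes that the queue is assumed to track independently of the (possibly corrupted) job objects.

```python
from dataclasses import dataclass

@dataclass
class Job:
    bytes: int  # may have been incorrectly zeroed after popping

def pop_by_sum(queue, target_bytes):
    # Old behaviour: sum the bytes field of each popped job and stop
    # once the sum reaches the target. Jobs whose bytes field is zero
    # never advance the sum, so the loop drains far more work than
    # requested (the ~5 TB incident described above).
    popped, total = [], 0
    while queue and total < target_bytes:
        popped.append(queue.pop(0))
        total += popped[-1].bytes
    return popped

def pop_by_subtraction(queue, sizes, target_bytes):
    # New behaviour, as described in the commit: subtract the size
    # recorded for each element from the remaining budget. If a job's
    # own bytes field was zeroed, the loop still terminates; the worst
    # case is moving less data than expected, never exponentially more.
    popped, remaining = [], target_bytes
    while queue and remaining > 0:
        popped.append(queue.pop(0))
        remaining -= sizes.pop(0)
    return popped
```

With eight corrupted zero-byte jobs each recorded at 1000 bytes and a 3000-byte request, `pop_by_sum` drains the entire queue, while `pop_by_subtraction` stops after three jobs.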
2bec11ea