- Jul 14, 2017
-
-
Steven Murray authored
-
Steven Murray authored
This is a TEMPORARY modification to the CTA front end. When an "eos rm" command is executed, the xCom_deletearchive() method of the CTA front end is called. Before this commit the method would try to remove the archive file from both the object store and the CTA catalogue. This took more than 15 seconds to complete under some circumstances. This commit removes the call to the object store. Please note that this commit will cause a leak of archive files. If a file is "in-flight", and therefore has not yet been written to tape and recorded in the CTA catalogue, then a call to "eos rm" will do nothing and the file will eventually appear in the CTA catalogue when it is written to tape. The file will therefore NOT be in the EOS namespace but it WILL be in the CTA catalogue.
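As a rough illustration only (not the actual CTA front-end source), the change amounts to dropping the object-store deletion from the delete path; apart from xCom_deletearchive(), which is named above, every name in this sketch is an assumption:

    #include <string>

    struct Catalogue   { void deleteArchiveFile(const std::string &) {} };
    struct ObjectStore { void deleteArchiveRequest(const std::string &) {} };

    struct FrontEndSketch {
      Catalogue   m_catalogue;
      ObjectStore m_objectStore;

      void xCom_deletearchive(const std::string &archiveFileId) {
        // Before this commit the file was also removed from the object store,
        // which could take more than 15 seconds in some circumstances:
        //
        //   m_objectStore.deleteArchiveRequest(archiveFileId);  // removed
        //
        // Now only the catalogue entry is removed. An "in-flight" file that
        // is not yet in the catalogue is left untouched here, so it later
        // reappears in the catalogue once written to tape (the leak noted
        // above).
        m_catalogue.deleteArchiveFile(archiveFileId);
      }
    };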
-
Eric Cano authored
This led to a window of time during which a mount was not visible, and other drives tried to mount the busy tape.
-
Eric Cano authored
-
Victor Kotlyar authored
sessions. Related to 6824973c but now it will calculate the speed between two updates and not an average tape session speed.
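A minimal sketch of the idea (not the CTA implementation): the speed is derived from the bytes transferred since the previous update, rather than from the cumulative total divided by the whole session time.

    #include <chrono>
    #include <cstdint>

    struct SpeedTracker {
      std::uint64_t lastBytes = 0;
      std::chrono::steady_clock::time_point lastTime =
          std::chrono::steady_clock::now();

      // Speed in bytes/s over the interval since the previous update,
      // as opposed to totalBytes / totalSessionTime.
      double update(std::uint64_t totalBytes) {
        const auto now = std::chrono::steady_clock::now();
        const double seconds =
            std::chrono::duration<double>(now - lastTime).count();
        const double speed =
            seconds > 0.0 ? (totalBytes - lastBytes) / seconds : 0.0;
        lastBytes = totalBytes;
        lastTime = now;
        return speed;
      }
    };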
-
Victor Kotlyar authored
threads.
-
Vladimir Bahyl authored
-
Vladimir Bahyl authored
-
Victor Kotlyar authored
Fill creationLog for archive and retrieve requests in the frontend with actual data. The scheduler will then use the real creation time of the requests.
-
- Jul 13, 2017
-
-
Steven Murray authored
-
Steven Murray authored
-
Steven Murray authored
-
Steven Murray authored
-
Steven Murray authored
-
Steven Murray authored
-
Steven Murray authored
-
Victor Kotlyar authored
drive. Change frontend reporting for the current stats from bytes to Mbytes and from bytes/s to Mbytes/s.
-
- Jul 12, 2017
-
-
Steven Murray authored
If the "cta af ls" command is executed and the listing encounters duplicate tape copy numbers then the listing continues with "workaround" number displayed in place of the duplicates and inow with this commit a log message is alseo written to /var/log/cta/cta-frontent.log An example log: 2017-07-12T18:04:40.691236+02:00 itdssbuild01 cta-frontend: LVL="Warn" PID="9423" TID="9436" MSG="Found a duplicate tape copy number when listing archive files" archiveFileID="5" duplicateCopyNb="1" workaroundCopyNb="2" vid="V41001" fSeq="5" blockId="41"
-
Victor Kotlyar authored
-
Victor Kotlyar authored
recall tape transfer session. The TaskWatchDog periodically updates the object store information with the current values of the tape session stats.
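A rough sketch of the pattern (not the actual TaskWatchDog code): a watchdog thread periodically pushes the latest session statistics to the object store. The Stats and ObjectStoreHandle types and the 10-second period are illustrative assumptions.

    #include <atomic>
    #include <chrono>
    #include <cstdint>
    #include <mutex>
    #include <thread>

    struct Stats { std::uint64_t files = 0; std::uint64_t bytes = 0; };
    struct ObjectStoreHandle { void updateSessionStats(const Stats &) {} };

    class TaskWatchDogSketch {
    public:
      // Called by the data-transfer threads whenever they have new totals.
      void updateStats(const Stats &s) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_stats = s;
      }
      // Watchdog thread body: every period, push the latest stats.
      void run(ObjectStoreHandle &os) {
        while (!m_stop) {
          std::this_thread::sleep_for(std::chrono::seconds(10));
          Stats copy;
          {
            std::lock_guard<std::mutex> lock(m_mutex);
            copy = m_stats;
          }
          os.updateSessionStats(copy);
        }
      }
      void stop() { m_stop = true; }
    private:
      std::mutex m_mutex;
      Stats m_stats;
      std::atomic<bool> m_stop{false};
    };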
-
Steven Murray authored
-
Eric Cano authored
If the drive is set to force down mode, we stop finding new jobs and let the current session drain.
-
Vladimir Bahyl authored
-
Eric Cano authored
The typo made Vlado's eyes bleed.
-
Eric Cano authored
This will help diagnose failure to honor maxDrivesAllowed.
-
Eric Cano authored
The queue is now fully event driven: a write to the object store starts as soon as the previous one has completed (if any). The notion of a timeout and a maximum number of elements before flushing has been removed. This harmonises the mechanism with the one used in MemArchiveQueue::sharedAddToArchiveQueue() (see a591129c).
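A simplified sketch of this batching pattern (not the real MemArchiveQueue code): at most one object-store write is in flight per process; jobs arriving in the meantime accumulate into the next batch, which is flushed as soon as the current write completes. Job and writeBatchToObjectStore() are stand-ins, not actual CTA names.

    #include <mutex>
    #include <utility>
    #include <vector>

    struct Job { int id = 0; };
    // Stand-in for the actual object-store write.
    void writeBatchToObjectStore(const std::vector<Job> &) {}

    class EventDrivenQueueSketch {
    public:
      void add(Job job) {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_pending.push_back(std::move(job));
        if (m_writeInFlight) return;        // the current writer will pick it up
        m_writeInFlight = true;
        while (!m_pending.empty()) {        // this thread becomes the writer
          std::vector<Job> batch;
          batch.swap(m_pending);            // take everything queued so far
          lock.unlock();
          writeBatchToObjectStore(batch);   // next batch accumulates meanwhile
          lock.lock();
        }
        m_writeInFlight = false;            // no timeout, no size threshold
      }
    private:
      std::mutex m_mutex;
      std::vector<Job> m_pending;
      bool m_writeInFlight = false;
    };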
-
- Jul 11, 2017
-
-
Eric Cano authored
-
Eric Cano authored
-
Steven Murray authored
-
Steven Murray authored
-
Steven Murray authored
-
Steven Murray authored
This constraint prevents a tape server from writing the same file more than once to the same tape. We should actually allow this to happen so that we can tolerate the pathological case of a cta-taped daemon crashing immediately after inserting a row into the TAPE_FILE table and before updating the object store.
-
Steven Murray authored
-
- Jul 06, 2017
-
-
Eric Cano authored
Implemented GarbageCollector::reinjectOwnedObject() and called it on garbage collector shutdown. This fixes the issue of non-empty garbage collectors exiting.
-
- Jul 05, 2017
-
-
Michael Davis authored
Simplifies code and avoids unnecessary string copy construction
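A generic illustration only; the code touched by this commit is not shown in the log. Passing std::string by const reference avoids an unnecessary copy construction of the argument:

    #include <iostream>
    #include <string>

    // Takes its argument by value: a copy of the string is constructed.
    void printByValue(std::string name) { std::cout << name << "\n"; }

    // Takes its argument by const reference: no copy is made.
    void printByRef(const std::string &name) { std::cout << name << "\n"; }

    int main() {
      const std::string vid = "V41001";
      printByValue(vid); // copy-constructs the string
      printByRef(vid);   // reuses the existing string
      return 0;
    }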
-
Eric Cano authored
-
Michael Davis authored
-
Eric Cano authored
We now only do one queue update at a time (per process). While one batch is being pushed to the object store, the following one is being built up, and is written as soon as the previous write completes. This can be seen in the timings from the logs of the unit test OStoreSchedulerDatabaseTestVFS/SchedulerDatabaseTest.createManyArchiveJobs/0:
[...] MSG="In MemArchiveQueue::sharedAddToArchiveQueue(): added batch of jobs to the queue." objectQueue="archiveQueue-[...]-68" sizeBefore="0" sizeAfter="1" addedJobs="1" waitTime="0.002403" enqueueTime="0.036594"
[...] MSG="In MemArchiveQueue::sharedAddToArchiveQueue(): added batch of jobs to the queue." objectQueue="archiveQueue-[...]-68" sizeBefore="1" sizeAfter="100" addedJobs="99" waitTime="0.036547" enqueueTime="0.001629"
[...] MSG="In MemArchiveQueue::sharedAddToArchiveQueue(): added batch of jobs to the queue." objectQueue="archiveQueue-[...]-174" sizeBefore="0" sizeAfter="1" addedJobs="1" waitTime="0.000058" enqueueTime="0.061447"
[...] MSG="In MemArchiveQueue::sharedAddToArchiveQueue(): added batch of jobs to the queue." objectQueue="archiveQueue-[...]-174" sizeBefore="1" sizeAfter="198" addedJobs="197" waitTime="0.061434" enqueueTime="0.001431"
[...] MSG="In MemArchiveQueue::sharedAddToArchiveQueue(): added batch of jobs to the queue." objectQueue="archiveQueue-[...]-174" sizeBefore="198" sizeAfter="200" addedJobs="2" waitTime="0.000840" enqueueTime="0.037259"
-