- Dec 19, 2017
Steven Murray authored
Eric Cano authored
Eric Cano authored
- Dec 18, 2017
Eric Cano authored
Also added an access after each context in order to serialise their creations. Logs should at least permit a better understanding of crashes.
- Dec 15, 2017
- Dec 07, 2017
- Dec 05, 2017
Eric Cano authored
- Nov 23, 2017
Eric Cano authored
This ensures the uniqueness of the agent name even if a process creates an agent several times in the same second.
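A minimal sketch of one way to get such uniqueness (illustrative names and fields, not the actual CTA code): combine host, pid and timestamp with a per-process atomic counter, so two agents created in the same second still get distinct names.

```cpp
#include <atomic>
#include <cstdint>
#include <ctime>
#include <sstream>
#include <string>

// Sketch: build a unique agent name from host, pid, timestamp and a
// per-process counter. The counter disambiguates agents created within
// the same second. Names and fields are illustrative, not CTA's.
std::string uniqueAgentName(const std::string& host, int pid) {
  static std::atomic<uint64_t> counter{0};
  std::ostringstream name;
  name << host << "-" << pid << "-" << std::time(nullptr)
       << "-" << counter++;
  return name.str();
}
```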
- Nov 17, 2017
- Nov 10, 2017
- Nov 07, 2017
- Oct 30, 2017
Eric Cano authored
This led to garbage collector segfaults.
- Oct 27, 2017
Eric Cano authored
This allows the previously developed asyncLockfreeFetch to apply to it.
- Oct 26, 2017
Eric Cano authored
The previous strategy was made under the assumption that we needed to lock sparingly. With the introduction of the lockfree strategy, this is no longer true. The new strategy is immune to the situation where A watches B, B watches A, both die, and no one garbage collects them (also called "cyclers" in the utility cta-objectstore-unfollow-agent).
Eric Cano authored
Eric Cano authored
Eric Cano authored
This will remove contention on the drive register as drives update their statuses. Also simplified structures in the OStoreDB implementation (fewer references, with added friend relations).
- Oct 20, 2017
- Oct 19, 2017
Eric Cano authored
The missing unwatch fix should significantly improve the performance of watch/notify based locking. Instrumentation will log any call to rados longer than 1 s to /var/tmp/cta-rados-slow-calls.log. Also prepared a structure to allow switching between watch/notify and backoff based locking. The backoff code is not yet brought back (will test with the unwatch fix first).
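The slow-call instrumentation could look roughly like this sketch (the wrapper name and threshold parameter are assumptions, not the actual CTA code): time each backend call and append a log line whenever it exceeds the threshold.

```cpp
#include <chrono>
#include <fstream>
#include <string>

// Sketch of slow-call instrumentation (illustrative names): time a backend
// call and append a log line when it runs longer than the threshold. The
// commit logs rados calls longer than 1 s to /var/tmp/cta-rados-slow-calls.log.
template <typename Func>
auto timedCall(const std::string& opName, Func&& f,
               std::chrono::milliseconds threshold =
                   std::chrono::milliseconds(1000)) -> decltype(f()) {
  auto start = std::chrono::steady_clock::now();
  auto result = f();  // the wrapped (possibly slow) backend call
  auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
      std::chrono::steady_clock::now() - start);
  if (elapsed > threshold) {
    std::ofstream log("/var/tmp/cta-rados-slow-calls.log", std::ios::app);
    log << opName << " took " << elapsed.count() << " ms\n";
  }
  return result;
}
```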
- Oct 16, 2017
Eric Cano authored
The unwatch step is pretty slow, so the notification structure is now in a separate internal object, which is left to be deleted by the callback of aio_unwatch. We need to keep the structure around for that time, as notifications could still arrive between the call to aio_unwatch and the actual unwatching.
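The ownership hand-off described above can be sketched as follows, with illustrative types standing in for the real librados watcher code: the completion callback takes ownership of the notification state, so the state stays alive until the asynchronous unwatch actually completes, and any late notifications still have a live target.

```cpp
#include <functional>
#include <memory>
#include <string>
#include <vector>

// Illustrative stand-in for the internal notification structure; the static
// counter only exists so the lifetime is observable in this sketch.
struct NotificationState {
  static int liveStates;
  NotificationState() { ++liveStates; }
  ~NotificationState() { --liveStates; }
  std::vector<std::string> pendingNotifications;  // may still be filled late
};
int NotificationState::liveStates = 0;

class Watcher {
public:
  Watcher() : m_state(std::make_unique<NotificationState>()) {}

  // Queue the asynchronous unwatch; `enqueue` stands in for scheduling the
  // completion callback (e.g. what aio_unwatch's completion would run).
  void asyncUnwatch(std::function<void(std::function<void()>)> enqueue) {
    // Move the state into a shared_ptr captured by the completion callback:
    // it is freed only when the callback itself is destroyed.
    std::shared_ptr<NotificationState> state = std::move(m_state);
    enqueue([state] {
      // Unwatch has completed; state dies with this closure.
    });
  }

private:
  std::unique_ptr<NotificationState> m_state;
};
```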
- Oct 09, 2017
Eric Cano authored
The code in this part was needlessly taking exclusive root locks, leading to locking fights when starting all drives at the same time in bigger systems.
- Oct 08, 2017
Eric Cano authored
Eric Cano authored
This happens when the notification is received multiple times. This did not show up before the full scale test.
Eric Cano authored
The locking is now handled in a single function, which is also used in the asynchronous operations. These operations have been simplified: we now do a synchronous stat after taking the lock. Previously, the stat was the first asynchronous operation. This will slightly slow down the async operations, for the gain of code simplicity.
Eric Cano authored
This commit only replaces backoff with notifications in exclusive locks, for validation with a multithreaded unit test. This proof of concept is considered working, as the measured gap between a lock release and a lock take never exceeds ~1/4 of a second, with a typical release-to-lock gap of a few ms to a few tens of ms.
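As an in-process analogy of the idea (not the object-store implementation), a condition variable shows the same pattern: a release notifies waiters immediately, rather than each waiter re-trying on an exponential backoff timer.

```cpp
#include <condition_variable>
#include <mutex>

// Analogy for watch/notify locking, modeled in-process: waiters block until
// notified instead of polling with backoff. Illustrative only; the real code
// coordinates processes through the object store, not a std::mutex.
class NotifyingLock {
public:
  void lock() {
    std::unique_lock<std::mutex> ul(m_mutex);
    m_cv.wait(ul, [this] { return !m_held; });  // woken by notify, no polling
    m_held = true;
  }
  void unlock() {
    {
      std::lock_guard<std::mutex> lg(m_mutex);
      m_held = false;
    }
    m_cv.notify_one();  // the "notify" that replaces the backoff retry
  }

private:
  std::mutex m_mutex;
  std::condition_variable m_cv;
  bool m_held = false;
};
```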
- Oct 03, 2017
Steven Murray authored
Replaced set_target_properties(.. .. INSTALL_RPATH ..) with set_property(TARGET .. APPEND PROPERTY INSTALL_RPATH ..)
- Sep 30, 2017
Eric Cano authored
We do not do a stat before reading. Instead we ask for an arbitrarily big read, and find out the size of the data while reading. This avoids a race condition in lockfree reads where we failed to get the full object if it got re-written to a bigger size between the stat and the read.
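The stat-free read can be sketched like this (the backend interface and starting size are assumptions, and the grow-and-retry loop is one defensive variant of "arbitrarily big"): request more bytes than the object should hold; a short read gives the true size, while a full read means the request must grow.

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Hypothetical backend read: returns up to `len` bytes of the object.
using ReadFn = std::function<std::string(std::size_t)>;

// Read the whole object without a prior stat: there is no stat result for a
// concurrent rewrite-to-bigger to invalidate; the size comes from the read.
std::string readWholeObject(const ReadFn& read) {
  std::size_t request = std::size_t(1) << 20;  // arbitrarily big start (1 MiB, assumed)
  for (;;) {
    std::string data = read(request);
    if (data.size() < request) return data;  // short read: we have it all
    request *= 2;  // object may be even bigger: retry with a larger request
  }
}
```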