
lib/ovs-atomic: Add ovs_refcount_unref_relaxed(), ovs_refcount_try_ref_rcu().

When a reference counted object is also RCU protected, the deletion of
the object's memory is always postponed.  This allows
memory_order_relaxed to be used for unreferencing as well, since RCU
quiescing provides a full memory barrier (it has to; otherwise there
could be lingering accesses to objects after they are recycled).

Similarly, when access to the reference counted object is protected by a
mutex or other lock, the locking primitives provide the required memory
barriers.
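
As a rough sketch of the RCU-deferred case above (the 'struct flow_entry'
type and 'flow_entry_unref()' helper are hypothetical, not part of this
patch), the relaxed decrement is safe because ovsrcu_postpone() defers the
free until every thread has quiesced:

    struct flow_entry {
        struct ovs_refcount ref_cnt;
        /* ... other members ... */
    };

    static void
    flow_entry_unref(struct flow_entry *fe)
    {
        if (fe && ovs_refcount_unref_relaxed(&fe->ref_cnt) == 1) {
            /* Last reference dropped: free only after all current
             * RCU readers have quiesced. */
            ovsrcu_postpone(free, fe);
        }
    }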

Also, add ovs_refcount_try_ref_rcu(), which takes a reference only if
the refcount is non-zero, returning true if a reference was taken and
false otherwise.  This can be used in combined RCU/refcount scenarios
where we hold an RCU protected reference to a refcounted object that
may be unreferenced at any time.  If ovs_refcount_try_ref_rcu() fails,
the object may still be safely used until the current thread quiesces.
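
For illustration, a minimal sketch of the lookup side (the
'flow_table_find()' helper and surrounding names are hypothetical): the
object is found through an RCU protected pointer and is kept only if the
try-ref succeeds:

    struct flow_entry *
    flow_entry_lookup_and_ref(uint32_t hash)
    {
        struct flow_entry *fe = flow_table_find(hash); /* RCU protected lookup. */

        if (fe && !ovs_refcount_try_ref_rcu(&fe->ref_cnt)) {
            /* The last reference was already dropped; 'fe' remains safe
             * to read until this thread quiesces, but must not be kept. */
            fe = NULL;
        }
        return fe;
    }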

Signed-off-by: Jarno Rajahalme <jrajahalme@nicira.com>
Acked-by: Ben Pfaff <blp@nicira.com>
Author: Jarno Rajahalme
Date: 2014-07-07 13:18:46 -07:00
parent 25045d755e
commit 6969766b75
2 changed files with 78 additions and 1 deletion


@@ -400,4 +400,78 @@ ovs_refcount_read(const struct ovs_refcount *refcount_)
    return count;
}

/* Increments 'refcount', but only if it is non-zero.
 *
 * This may only be called for an object which is RCU protected during
 * this call.  This implies that its possible destruction is postponed
 * until all current RCU threads quiesce.
 *
 * Returns false if the refcount was zero.  In this case the object may
 * be safely accessed until the current thread quiesces, but no additional
 * references to the object may be taken.
 *
 * Does not provide a memory barrier, as the calling thread must have
 * RCU protected access to the object already.
 *
 * It is critical that we never increment a zero refcount to a
 * non-zero value, as whenever a refcount reaches the zero value, the
 * protected object may be irrevocably scheduled for deletion. */
static inline bool
ovs_refcount_try_ref_rcu(struct ovs_refcount *refcount)
{
    unsigned int count;

    atomic_read_explicit(&refcount->count, &count, memory_order_relaxed);
    do {
        if (count == 0) {
            return false;
        }
    } while (!atomic_compare_exchange_weak_explicit(&refcount->count, &count,
                                                    count + 1,
                                                    memory_order_relaxed,
                                                    memory_order_relaxed));
    return true;
}
/* Decrements 'refcount' and returns the previous reference count.  To
 * be used only when a memory barrier is already provided for the
 * protected object independently.
 *
 * For example:
 *
 * if (ovs_refcount_unref_relaxed(&object->ref_cnt) == 1) {
 *     // Schedule uninitialization and freeing of the object:
 *     ovsrcu_postpone(destructor_function, object);
 * }
 *
 * Here RCU quiescing already provides a full memory barrier.  No additional
 * barriers are needed here.
 *
 * Or:
 *
 * if (stp && ovs_refcount_unref_relaxed(&stp->ref_cnt) == 1) {
 *     ovs_mutex_lock(&mutex);
 *     list_remove(&stp->node);
 *     ovs_mutex_unlock(&mutex);
 *     free(stp->name);
 *     free(stp);
 * }
 *
 * Here a mutex is used to guard access to all of 'stp' apart from
 * 'ref_cnt'.  Hence all changes to 'stp' by other threads must be
 * visible when we get the mutex, and no access after the unlock can
 * be reordered to happen prior to the lock operation.  No additional
 * barriers are needed here.
 */
static inline unsigned int
ovs_refcount_unref_relaxed(struct ovs_refcount *refcount)
{
    unsigned int old_refcount;

    atomic_sub_explicit(&refcount->count, 1, &old_refcount,
                        memory_order_relaxed);
    ovs_assert(old_refcount > 0);
    return old_refcount;
}
#endif /* ovs-atomic.h */