new api for reference counters (was: atomic v_usecount and v_holdcnt)


new api for reference counters (was: atomic v_usecount and v_holdcnt)

Mateusz Guzik
On Thu, Jun 25, 2015 at 09:53:52PM +0200, Edward Tomasz Napierała wrote:

> On 0626T0429, Bruce Evans wrote:
> > On Thu, 25 Jun 2015, Mateusz Guzik wrote:
> >
> > > On Wed, Mar 18, 2015 at 12:44:42PM +0200, Konstantin Belousov wrote:
> > >> On Tue, Mar 17, 2015 at 02:44:12AM +0100, Mateusz Guzik wrote:
> >
> > >>> I replaced them with refcount_acquire_if_not_zero and
> > >>> refcount_release_if_not_last.
> > >> I dislike the length of the names.  Can you propose something shorter?
> > >
> > > Unfortunately the original API is already quite verbose and I don't have
> > > anything readable which would retain "refcount_acquire" (instead of a
> > > "ref_get" or "ref_acq"). Adding "_nz" as a suffix does not look good
> > > ("refcount_acquire_if_nz").
> >
> > refcount -> rc
> > acquire -> acq
> >
> > The "acq" abbreviation is already used a lot for atomic ops.
>
> How about refcount_acquire_gt_0() and refcount_release_gt_1()?
>

The current refcount(9) API is not only quite limited, it also operates
on a bare u_int instead of a type opaque to its consumers. That leaves
fewer places where counter sanity can be asserted and increases the
chance of accidental misuse or a type mismatch.

As such, how about deprecating the current refcount_* functions and
introducing a new family instead?

Say the functions would be prefixed with ref_. Consumers would use a
ref_t type (the obvious refcount_t is already taken by ZFS).

Apart from the aforementioned 0->1 and 1->0 functions, this introduces
ref_read to obtain the current value of the passed counter. Casting to a
volatile pointer and dereferencing that guarantees a read at that point,
so the qualifier does not have to be placed in the struct itself.

Finally, a blessed type is provided for all consumers, so that the
correct type is enforced at compile time.
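
For illustration only (a made-up consumer struct, not part of the patch),
mixing up fields is something the compiler can now catch:

#include <sys/param.h>
#include <sys/refcount.h>

struct foo {
	ref_t	f_ref;		/* reference counter */
	u_int	f_flags;	/* unrelated u_int field */
};

static void
foo_hold(struct foo *fp)
{

	ref_acq(&fp->f_ref);	/* fine: the argument is a ref_t * */
	/*
	 * ref_acq(&fp->f_flags) is rejected, since u_int * is not a ref_t *,
	 * while the current refcount_acquire(&fp->f_flags) compiles silently.
	 */
}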

Roughly speaking:

diff --git a/sys/sys/refcount.h b/sys/sys/refcount.h
index 4611664..c7c3732 100644
--- a/sys/sys/refcount.h
+++ b/sys/sys/refcount.h
@@ -38,6 +38,11 @@
 #define KASSERT(exp, msg) /* */
 #endif
 
+struct ref {
+	u_int	r_count;
+};
+typedef struct ref ref_t;
+
 static __inline void
 refcount_init(volatile u_int *count, u_int value)
 {
@@ -64,4 +69,67 @@ refcount_release(volatile u_int *count)
 	return (old == 1);
 }
 
+static __inline void
+ref_init(ref_t *ref, u_int value)
+{
+
+	refcount_init(&ref->r_count, value);
+}
+
+static __inline u_int
+ref_read(ref_t *ref)
+{
+	u_int count;
+
+	/* The cast to volatile forces a real load of the counter. */
+	count = *(volatile u_int *)&(ref->r_count);
+	KASSERT(count < UINT_MAX, ("refcount %p overflowed", ref));
+
+	return (count);
+}
+
+static __inline void
+ref_acq(ref_t *ref)
+{
+
+	refcount_acquire(&ref->r_count);
+}
+
+static __inline int
+ref_rel(ref_t *ref)
+{
+
+	return (refcount_release(&ref->r_count));
+}
+
+/* Acquire a reference only if the counter has not dropped to 0. */
+static __inline int
+ref_acq_gt_0(ref_t *ref)
+{
+	u_int old;
+
+	for (;;) {
+		old = ref_read(ref);
+		if (old == 0)
+			return (0);
+		if (atomic_cmpset_int(&ref->r_count, old, old + 1))
+			return (1);
+	}
+}
+
+/* Release a reference only if it is not the last one. */
+static __inline int
+ref_rel_gt_1(ref_t *ref)
+{
+	u_int old;
+
+	for (;;) {
+		old = ref_read(ref);
+		if (old == 1)
+			return (0);
+		if (atomic_cmpset_int(&ref->r_count, old, old - 1))
+			return (1);
+	}
+}
+
 #endif /* ! __SYS_REFCOUNT_H__ */
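
To sketch intended usage (a hypothetical consumer with made-up names,
loosely following the v_usecount pattern that started this thread):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/refcount.h>

struct obj {
	struct mtx	o_lock;
	ref_t		o_usecount;
};

/* Lookup path: grab a reference only if the object already has one. */
static int
obj_try_ref(struct obj *o)
{

	if (ref_acq_gt_0(&o->o_usecount))
		return (1);
	/* Count was 0; the caller has to do a locked 0->1 transition or fail. */
	return (0);
}

/* Release path: only a possibly-last reference has to take the lock. */
static void
obj_unref(struct obj *o)
{

	if (ref_rel_gt_1(&o->o_usecount))
		return;
	mtx_lock(&o->o_lock);
	if (ref_rel(&o->o_usecount)) {
		/* That was the last reference; tear down the object here. */
	}
	mtx_unlock(&o->o_lock);
}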

--
Mateusz Guzik <mjguzik gmail.com>

Re: new api for reference counters (was: atomic v_usecount and v_holdcnt)

Mateusz Guzik
Can I get some flames on this one?

On Thu, Jul 02, 2015 at 11:30:49PM +0200, Mateusz Guzik wrote:

> [...]

--
Mateusz Guzik <mjguzik gmail.com>

Re: new api for reference counters (was: atomic v_usecount and v_holdcnt)

Bruce Evans