#[repr(C, align(4))]
pub struct AtomicUsize { /* private fields */ }
An integer type which can be safely shared between threads.
This type has the same in-memory representation as the underlying integer type,
usize.
If the compiler and the platform support atomic loads and stores of usize, this type is a wrapper for the standard library’s AtomicUsize. If the platform supports it but the compiler does not, atomic operations are implemented using
inline assembly. Otherwise, it synchronizes using global locks.
You can call AtomicUsize::is_lock_free() to check whether
atomic instructions or locks will be used.
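For example, a counter can be shared across threads. This is a minimal sketch, not taken from the crate's documentation; it assumes a std environment where Arc and thread are available (portable-atomic itself also supports no_std targets):
use portable_atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

let counter = Arc::new(AtomicUsize::new(0));
let mut handles = Vec::new();
for _ in 0..4 {
    let counter = Arc::clone(&counter);
    // Each thread atomically increments the shared counter.
    handles.push(thread::spawn(move || {
        counter.fetch_add(1, Ordering::Relaxed);
    }));
}
for handle in handles {
    handle.join().unwrap();
}
assert_eq!(counter.load(Ordering::Relaxed), 4);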
Implementations§
impl AtomicUsize
pub const fn new(v: usize) -> AtomicUsize
Creates a new atomic integer.
§Examples
use portable_atomic::AtomicUsize;
let atomic_forty_two = AtomicUsize::new(42);
pub const unsafe fn from_ptr<'a>(ptr: *mut usize) -> &'a AtomicUsize
Creates a new reference to an atomic integer from a pointer.
This is const fn on Rust 1.83+.
§Safety
- ptr must be aligned to align_of::<AtomicUsize>() (note that on some platforms this can be bigger than align_of::<usize>()).
- ptr must be valid for both reads and writes for the whole lifetime 'a.
- If this atomic type is lock-free, non-atomic accesses to the value behind ptr must have a happens-before relationship with atomic accesses via the returned value (or vice-versa).
  - In other words, time periods where the value is accessed atomically may not overlap with periods where the value is accessed non-atomically.
  - This requirement is trivially satisfied if ptr is never used non-atomically for the duration of lifetime 'a. Most use cases should be able to follow this guideline.
  - This requirement is also trivially satisfied if all accesses (atomic or not) are done from the same thread.
- If this atomic type is not lock-free:
  - Any accesses to the value behind ptr must have a happens-before relationship with accesses via the returned value (or vice-versa).
  - Any concurrent accesses to the value behind ptr for the duration of lifetime 'a must be compatible with operations performed by this atomic type.
- This method must not be used to create overlapping or mixed-size atomic accesses, as these are not supported by the memory model.
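A minimal usage sketch (not from the upstream docs); it assumes a target where align_of::<AtomicUsize>() equals align_of::<usize>(), which holds on most common platforms:
use portable_atomic::{AtomicUsize, Ordering};

let mut value: usize = 7;
let ptr: *mut usize = &mut value;
// SAFETY (assumed for this sketch): `ptr` is suitably aligned, valid for reads and
// writes for the lifetime of `a`, and is never accessed non-atomically while `a` is in use.
let a = unsafe { AtomicUsize::from_ptr(ptr) };
a.store(42, Ordering::Relaxed);
assert_eq!(a.load(Ordering::Relaxed), 42);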
pub fn is_lock_free() -> bool
Returns true if operations on values of this type are lock-free.
If the compiler or the platform doesn’t support the necessary atomic instructions, global locks for every potentially concurrent atomic operation will be used.
§Examples
use portable_atomic::AtomicUsize;
let is_lock_free = AtomicUsize::is_lock_free();
pub const fn is_always_lock_free() -> bool
Returns true if operations on values of this type are lock-free.
If the compiler or the platform doesn’t support the necessary atomic instructions, global locks for every potentially concurrent atomic operation will be used.
Note: If the atomic operation relies on dynamic CPU feature detection, this type may be lock-free even if the function returns false.
§Examples
use portable_atomic::AtomicUsize;
const IS_ALWAYS_LOCK_FREE: bool = AtomicUsize::is_always_lock_free();
pub const fn get_mut(&mut self) -> &mut usize
Returns a mutable reference to the underlying integer.
This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.
This is const fn on Rust 1.83+.
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let mut some_var = AtomicUsize::new(10);
assert_eq!(*some_var.get_mut(), 10);
*some_var.get_mut() = 5;
assert_eq!(some_var.load(Ordering::SeqCst), 5);
pub const fn into_inner(self) -> usize
Consumes the atomic and returns the contained value.
This is safe because passing self by value guarantees that no other threads are
concurrently accessing the atomic data.
This is const fn on Rust 1.56+.
§Examples
use portable_atomic::AtomicUsize;
let some_var = AtomicUsize::new(5);
assert_eq!(some_var.into_inner(), 5);
pub fn load(&self, order: Ordering) -> usize
Loads a value from the atomic integer.
load takes an Ordering argument which describes the memory ordering of this operation.
Possible values are [SeqCst], [Acquire] and [Relaxed].
§Panics
Panics if order is [Release] or [AcqRel].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let some_var = AtomicUsize::new(5);
assert_eq!(some_var.load(Ordering::Relaxed), 5);
pub fn store(&self, val: usize, order: Ordering)
Stores a value into the atomic integer.
store takes an Ordering argument which describes the memory ordering of this operation.
Possible values are [SeqCst], [Release] and [Relaxed].
§Panics
Panics if order is [Acquire] or [AcqRel].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let some_var = AtomicUsize::new(5);
some_var.store(10, Ordering::Relaxed);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
pub fn swap(&self, val: usize, order: Ordering) -> usize
Stores a value into the atomic integer, returning the previous value.
swap takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let some_var = AtomicUsize::new(5);
assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
pub fn compare_exchange(
    &self,
    current: usize,
    new: usize,
    success: Ordering,
    failure: Ordering,
) -> Result<usize, usize>
Stores a value into the atomic integer if the current value is the same as the current argument.
The return value is a result indicating whether the new value was written and
containing the previous value. On success this value is guaranteed to be equal to
current.
compare_exchange takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using [Acquire] as success ordering makes the store part
of this operation [Relaxed], and using [Release] makes the successful load
[Relaxed]. The failure ordering can only be [SeqCst], [Acquire] or [Relaxed].
§Panics
Panics if failure is [Release] or [AcqRel].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let some_var = AtomicUsize::new(5);
assert_eq!(
some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed),
Ok(5),
);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
assert_eq!(
some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire),
Err(10),
);
assert_eq!(some_var.load(Ordering::Relaxed), 10);
pub fn compare_exchange_weak(
    &self,
    current: usize,
    new: usize,
    success: Ordering,
    failure: Ordering,
) -> Result<usize, usize>
Stores a value into the atomic integer if the current value is the same as the current argument.
Unlike compare_exchange,
this function is allowed to spuriously fail even
when the comparison succeeds, which can result in more efficient code on some
platforms. The return value is a result indicating whether the new value was
written and containing the previous value.
compare_exchange_weak takes two Ordering arguments to describe the memory
ordering of this operation. success describes the required ordering for the
read-modify-write operation that takes place if the comparison with current succeeds.
failure describes the required ordering for the load operation that takes place when
the comparison fails. Using [Acquire] as success ordering makes the store part
of this operation [Relaxed], and using [Release] makes the successful load
[Relaxed]. The failure ordering can only be [SeqCst], [Acquire] or [Relaxed].
§Panics
Panics if failure is [Release] or [AcqRel].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let val = AtomicUsize::new(4);
let mut old = val.load(Ordering::Relaxed);
loop {
let new = old * 2;
match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
Ok(_) => break,
Err(x) => old = x,
}
}
pub fn fetch_add(&self, val: usize, order: Ordering) -> usize
Adds to the current value, returning the previous value.
This operation wraps around on overflow.
fetch_add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0);
assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
assert_eq!(foo.load(Ordering::SeqCst), 10);
pub fn add(&self, val: usize, order: Ordering)
Adds to the current value.
This operation wraps around on overflow.
Unlike fetch_add, this does not return the previous value.
add takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This function may generate more efficient code than fetch_add on some platforms.
- MSP430: add instead of disabling interrupts ({8,16}-bit atomics)
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0);
foo.add(10, Ordering::SeqCst);
assert_eq!(foo.load(Ordering::SeqCst), 10);
pub fn fetch_sub(&self, val: usize, order: Ordering) -> usize
Subtracts from the current value, returning the previous value.
This operation wraps around on overflow.
fetch_sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(20);
assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
assert_eq!(foo.load(Ordering::SeqCst), 10);
pub fn sub(&self, val: usize, order: Ordering)
Subtracts from the current value.
This operation wraps around on overflow.
Unlike fetch_sub, this does not return the previous value.
sub takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This function may generate more efficient code than fetch_sub on some platforms.
- MSP430: sub instead of disabling interrupts ({8,16}-bit atomics)
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(20);
foo.sub(10, Ordering::SeqCst);
assert_eq!(foo.load(Ordering::SeqCst), 10);
pub fn fetch_and(&self, val: usize, order: Ordering) -> usize
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0b101101);
assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
pub fn and(&self, val: usize, order: Ordering)
Bitwise “and” with the current value.
Performs a bitwise “and” operation on the current value and the argument val, and
sets the new value to the result.
Unlike fetch_and, this does not return the previous value.
and takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This function may generate more efficient code than fetch_and on some platforms.
- x86/x86_64: lock and instead of cmpxchg loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
- MSP430: and instead of disabling interrupts ({8,16}-bit atomics)
Note: On x86/x86_64, the use of either function should not usually affect the generated code, because LLVM can properly optimize the case where the result is unused.
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0b101101);
foo.and(0b110011, Ordering::SeqCst);
assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
pub fn fetch_nand(&self, val: usize, order: Ordering) -> usize
Bitwise “nand” with the current value.
Performs a bitwise “nand” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_nand takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0x13);
assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
pub fn fetch_or(&self, val: usize, order: Ordering) -> usize
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0b101101);
assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
pub fn or(&self, val: usize, order: Ordering)
Bitwise “or” with the current value.
Performs a bitwise “or” operation on the current value and the argument val, and
sets the new value to the result.
Unlike fetch_or, this does not return the previous value.
or takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This function may generate more efficient code than fetch_or on some platforms.
- x86/x86_64: lock or instead of cmpxchg loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
- MSP430: or instead of disabling interrupts ({8,16}-bit atomics)
Note: On x86/x86_64, the use of either function should not usually affect the generated code, because LLVM can properly optimize the case where the result is unused.
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0b101101);
foo.or(0b110011, Ordering::SeqCst);
assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
pub fn fetch_xor(&self, val: usize, order: Ordering) -> usize
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0b101101);
assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
pub fn xor(&self, val: usize, order: Ordering)
Bitwise “xor” with the current value.
Performs a bitwise “xor” operation on the current value and the argument val, and
sets the new value to the result.
Unlike fetch_xor, this does not return the previous value.
xor takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This function may generate more efficient code than fetch_xor on some platforms.
- x86/x86_64: lock xor instead of cmpxchg loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
- MSP430: xor instead of disabling interrupts ({8,16}-bit atomics)
Note: On x86/x86_64, the use of either function should not usually affect the generated code, because LLVM can properly optimize the case where the result is unused.
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0b101101);
foo.xor(0b110011, Ordering::SeqCst);
assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
pub fn fetch_update<F>(
    &self,
    set_order: Ordering,
    fetch_order: Ordering,
    f: F,
) -> Result<usize, usize>
Fetches the value, and applies a function to it that returns an optional
new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else
Err(previous_value).
Note: This may call the function multiple times if the value has been changed from other threads in
the meantime, as long as the function returns Some(_), but the function will have been applied
only once to the stored value.
fetch_update takes two Ordering arguments to describe the memory ordering of this operation.
The first describes the required ordering for when the operation finally succeeds while the second
describes the required ordering for loads. These correspond to the success and failure orderings of
compare_exchange respectively.
Using [Acquire] as success ordering makes the store part
of this operation [Relaxed], and using [Release] makes the final successful load
[Relaxed]. The (failed) load ordering can only be [SeqCst], [Acquire] or [Relaxed].
§Panics
Panics if fetch_order is [Release] or [AcqRel].
§Considerations
This method is not magic; it is not provided by the hardware.
It is implemented in terms of compare_exchange_weak,
and suffers from the same drawbacks.
In particular, this method will not circumvent the ABA Problem.
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let x = AtomicUsize::new(7);
assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
assert_eq!(x.load(Ordering::SeqCst), 9);
pub fn fetch_max(&self, val: usize, order: Ordering) -> usize
Maximum with the current value.
Finds the maximum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_max takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(23);
assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
assert_eq!(foo.load(Ordering::SeqCst), 42);
If you want to obtain the maximum value in one step, you can use the following:
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(23);
let bar = 42;
let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
assert!(max_foo == 42);
pub fn fetch_min(&self, val: usize, order: Ordering) -> usize
Minimum with the current value.
Finds the minimum of the current value and the argument val, and
sets the new value to the result.
Returns the previous value.
fetch_min takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(23);
assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 23);
assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
assert_eq!(foo.load(Ordering::Relaxed), 22);
If you want to obtain the minimum value in one step, you can use the following:
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(23);
let bar = 12;
let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
assert_eq!(min_foo, 12);
pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
Sets the bit at the specified bit-position to 1.
Returns true if the specified bit was previously set to 1.
bit_set takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This corresponds to x86’s lock bts, and the implementation uses it on x86/x86_64.
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0b0000);
assert!(!foo.bit_set(0, Ordering::Relaxed));
assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
assert!(foo.bit_set(0, Ordering::Relaxed));
assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
Clears the bit at the specified bit-position to 0.
Returns true if the specified bit was previously set to 1.
bit_clear takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This corresponds to x86’s lock btr, and the implementation uses it on x86/x86_64.
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0b0001);
assert!(foo.bit_clear(0, Ordering::Relaxed));
assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
Toggles the bit at the specified bit-position.
Returns true if the specified bit was previously set to 1.
bit_toggle takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This corresponds to x86’s lock btc, and the implementation uses it on x86/x86_64.
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0b0000);
assert!(!foo.bit_toggle(0, Ordering::Relaxed));
assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
assert!(foo.bit_toggle(0, Ordering::Relaxed));
assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
pub fn fetch_not(&self, order: Ordering) -> usize
Logically negates the current value (bitwise NOT), and sets the new value to the result.
Returns the previous value.
fetch_not takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0);
assert_eq!(foo.fetch_not(Ordering::Relaxed), 0);
assert_eq!(foo.load(Ordering::Relaxed), !0);
pub fn not(&self, order: Ordering)
Logically negates the current value (bitwise NOT), and sets the new value to the result.
Unlike fetch_not, this does not return the previous value.
not takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This function may generate more efficient code than fetch_not on some platforms.
- x86/x86_64: lock not instead of cmpxchg loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
- MSP430: inv instead of disabling interrupts ({8,16}-bit atomics)
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(0);
foo.not(Ordering::Relaxed);
assert_eq!(foo.load(Ordering::Relaxed), !0);
pub fn fetch_neg(&self, order: Ordering) -> usize
Negates the current value, and sets the new value to the result.
Returns the previous value.
fetch_neg takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(5);
assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5);
assert_eq!(foo.load(Ordering::Relaxed), 5_usize.wrapping_neg());
assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_usize.wrapping_neg());
assert_eq!(foo.load(Ordering::Relaxed), 5);
pub fn neg(&self, order: Ordering)
Negates the current value, and sets the new value to the result.
Unlike fetch_neg, this does not return the previous value.
neg takes an Ordering argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[Acquire] makes the store part of this operation [Relaxed], and
using [Release] makes the load part [Relaxed].
This function may generate more efficient code than fetch_neg on some platforms.
- x86/x86_64: lock neg instead of cmpxchg loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
§Examples
use portable_atomic::{AtomicUsize, Ordering};
let foo = AtomicUsize::new(5);
foo.neg(Ordering::Relaxed);
assert_eq!(foo.load(Ordering::Relaxed), 5_usize.wrapping_neg());
foo.neg(Ordering::Relaxed);
assert_eq!(foo.load(Ordering::Relaxed), 5);
pub const fn as_ptr(&self) -> *mut usize
Returns a mutable pointer to the underlying integer.
Returning an *mut pointer from a shared reference to this atomic is
safe because the atomic types work with interior mutability. Any use of
the returned raw pointer requires an unsafe block and has to uphold
the safety requirements. If there is concurrent access, note the following
additional safety requirements:
- If this atomic type is lock-free, any concurrent operations on it must be atomic.
- Otherwise, any concurrent operations on it must be compatible with operations performed by this atomic type.
This is const fn on Rust 1.58+.
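A minimal sketch (not from the upstream docs) of reading through the raw pointer on a single thread, for instance before handing the pointer to foreign code:
use portable_atomic::{AtomicUsize, Ordering};

let counter = AtomicUsize::new(1);
let ptr: *mut usize = counter.as_ptr();
// SAFETY (assumed for this sketch): there is no concurrent access to `counter`
// while we read through the raw pointer.
unsafe {
    assert_eq!(*ptr, 1);
}
assert_eq!(counter.load(Ordering::Relaxed), 1);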