Concurrent Objects


Concurrency and correctness

To study correctness, let's start with a lock-based queue that protects its state with a single lock:

class LockBasedQueue<T> {
  int head, tail; // head: next slot to dequeue from; tail: next slot to enqueue into
  T[] items;      // circular buffer; the queue holds at most items.length elements
  public LockBasedQueue(int capacity) {
    head = 0; tail = 0;
    items = (T[]) new Object[capacity];
  }
 
  public synchronized void enq(T x) throws FullException {
    if (tail - head == items.length) {
      throw new FullException();
    }
    items[tail % items.length] = x;
    tail++;
  }
 
  public synchronized T deq() throws EmptyException {
    if (tail == head) {
      throw new EmptyException();
    }
    T x = items[head % items.length];
    head++;
    return x;
  }
}

Since both enq and deq are synchronized, all calls on the queue execute one at a time, so this implementation is correct.

However, since enq and deq share a single lock, every operation serializes through it, which limits performance. A better implementation would use less synchronization.

To do so, imagine the same queue with no synchronization at all. With exactly one enqueuer and one dequeuer, it is still correct, and it is also wait-free: no call ever waits for a lock.

class WaitFreeQueue<T> {
  volatile int head, tail; // volatile so each thread sees the other's updates
  T[] items;
  public WaitFreeQueue(int capacity) {
    head = 0; tail = 0;
    items = (T[]) new Object[capacity];
  }
  }
 
  public void enq(T x) throws FullException {
    if (tail - head == items.length) {
      throw new FullException();
    }
    items[tail % items.length] = x;
    tail++;
  }
 
  public T deq() throws EmptyException {
    if (tail == head) {
      throw new EmptyException();
    }
    T x = items[head % items.length];
    head++;
    return x;
  }
}

Sequential objects

A sequential object can be specified with preconditions and postconditions: a method call takes an object in state X to an object in state Y, and the state is meaningful only between calls.

In a multithreaded world, however, there are no stable pre- and postconditions: an object may always have a method call in progress.
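As a minimal sketch of this style of sequential reasoning (using java.util.ArrayDeque as a stand-in sequential queue; the class name is illustrative):

```java
import java.util.ArrayDeque;

// Sequential reasoning: deq's behavior is fully captured by a precondition
// (the queue is nonempty) and a postcondition (the oldest item is returned
// and removed). No thread interleavings need to be considered.
public class SeqSpecDemo {
  public static void main(String[] args) {
    ArrayDeque<Integer> q = new ArrayDeque<>();
    q.add(1);
    q.add(2);                            // state X: queue = [1, 2]
    if (q.isEmpty())                     // precondition of deq: queue nonempty
      throw new AssertionError("precondition violated");
    int x = q.remove();                  // postcondition: returns oldest item
    System.out.println(x);               // prints 1; state Y: queue = [2]
  }
}
```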

Sequential Consistency

We can try to look at concurrent objects the same way we look at sequential objects. To do this, split up a concurrent call into two parts, its invocation and its response. A method that has been called on a concurrent object but has not returned is considered pending.

To make concurrent objects easy to reason about, method calls should:

  1. Appear to happen one at a time, in some sequential order.
  2. Appear to take effect in program order, i.e., in the order each individual thread issues them.

Together, these two constraints define sequential consistency.

Sequential consistency versus real-time order

Imagine that thread A enqueues x, thread B later enqueues y, and then A dequeues y. This is inconsistent with the real-time order (x was enqueued first, so x should have been dequeued), yet it is still sequentially consistent: sequential consistency does not require method calls to respect real-time order across threads.

Sequential consistency is nonblocking

Sequential consistency is nonblocking: for any pending invocation of a total method, there exists a sequentially consistent response, so no pending call is ever forced to wait for another pending call to complete.

Compositionality

Unfortunately, sequential consistency is not compositional: if two queues are each sequentially consistent, the system composed of both is not necessarily sequentially consistent.

Linearizability

Since sequential consistency is not compositional, we need a stronger constraint:

  • Each method call should appear to take effect instantaneously at some moment between its invocation and response.

Adding this real-time constraint strengthens sequential consistency into linearizability.

Linearization points

The point at which a method call appears to take effect atomically is called its linearization point. For lock-based implementations it is typically a statement inside the critical section; for lock-free implementations it is usually a single atomic operation.
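As a minimal sketch, assuming a stripped-down version of the lock-based queue from above (class and field names here are illustrative), the linearization points can be marked directly in the code:

```java
// Because enq and deq are synchronized, each call appears to take effect
// at the instant its counter is updated inside the critical section.
public class LinearizationDemo {
  static class TinyQueue {
    int head, tail;
    Object[] items = new Object[8];
    synchronized void enq(Object x) {
      items[tail % items.length] = x;
      tail++;                            // <- linearization point of enq
    }
    synchronized Object deq() {
      Object x = items[head % items.length];
      head++;                            // <- linearization point of deq
      return x;
    }
  }
  public static void main(String[] args) {
    TinyQueue q = new TinyQueue();
    q.enq("a");
    q.enq("b");
    System.out.println(q.deq());         // prints a: FIFO order
  }
}
```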

Linearizability versus sequential consistency

Linearizability is both nonblocking and compositional. That makes it better suited than sequential consistency for describing large systems assembled from independently implemented components.

Quiescent Consistency

To trade consistency for performance, there is also quiescent consistency: method calls separated by a period of quiescence (an interval with no pending calls) appear to take effect in their real-time order. Between quiescent periods, however, there is no guarantee that method calls take effect in real-time order.

Memory consistency models

Code is also affected by the memory consistency model of the hardware and the language. Java and C++ each define a memory model, and they provide keywords (volatile in Java, std::atomic with memory orders in C++) that prohibit certain reorderings of operations by compilers and hardware.
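As a minimal sketch of Java's memory-model keywords (the class name is illustrative), a volatile flag creates a happens-before edge, so the reader is guaranteed to observe the write to data that preceded the flag update:

```java
// Writer sets data, then publishes it via a volatile flag. The volatile
// write/read pair forbids the reorderings that would let the reader see
// ready == true but data == 0.
public class VolatileDemo {
  static int data = 0;
  static volatile boolean ready = false;

  public static void main(String[] args) throws InterruptedException {
    Thread writer = new Thread(() -> {
      data = 42;                 // ordinary write...
      ready = true;              // ...published by the volatile write
    });
    Thread reader = new Thread(() -> {
      while (!ready) { }         // spin until the volatile write is visible
      System.out.println(data);  // happens-before guarantees 42 is seen
    });
    reader.start();
    writer.start();
    writer.join();
    reader.join();
  }
}
```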

Progress conditions

Wait-freedom

A method is wait-free if every call finishes in a finite number of steps, regardless of what other threads do. A lock-based queue is not wait-free: a thread trying to enqueue or dequeue can wait an unbounded number of steps for whichever thread currently holds the lock.

Lock-freedom

A weaker progress condition is lock-freedom: in any execution, some method call finishes in a finite number of steps. Individual threads may starve, but the system as a whole always makes progress.
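As an illustrative sketch of lock-freedom (not from the original text), here is a Treiber-style stack: push and pop retry a compareAndSet, and a retry happens only because some other call succeeded, so the object as a whole always makes progress even though a particular thread may starve:

```java
import java.util.concurrent.atomic.AtomicReference;

// Lock-free (but not wait-free) stack: no locks, but a thread can loop
// forever if it always loses the compareAndSet race to other threads.
public class LockFreeStack<T> {
  private static class Node<T> {
    final T value;
    Node<T> next;
    Node(T value) { this.value = value; }
  }

  private final AtomicReference<Node<T>> top = new AtomicReference<>();

  public void push(T x) {
    Node<T> n = new Node<>(x);
    do {
      n.next = top.get();                       // snapshot current top
    } while (!top.compareAndSet(n.next, n));    // retry if top changed
  }

  public T pop() {
    Node<T> t;
    do {
      t = top.get();
      if (t == null) return null;               // empty stack
    } while (!top.compareAndSet(t, t.next));    // retry if top changed
    return t.value;
  }

  public static void main(String[] args) {
    LockFreeStack<Integer> s = new LockFreeStack<>();
    s.push(1);
    s.push(2);
    System.out.println(s.pop() + " " + s.pop()); // prints 2 1 (LIFO)
  }
}
```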

Obstruction-freedom

A method is obstruction-free if it finishes in a finite number of steps whenever it runs in isolation, that is, when no other thread takes steps during its execution.

Blocking progress conditions

A method is starvation-free if every thread that calls it completes in a finite number of steps, provided that every thread with a pending method call keeps taking steps. It is deadlock-free if, under the same assumption, some thread always completes. Both are blocking conditions: progress depends on the scheduler letting lock holders run.

Characterizing progress conditions

                     Independent,    Dependent,          Dependent,
                     nonblocking     nonblocking         blocking
  Maximal progress   Wait-free       Obstruction-free    Starvation-free
  Minimal progress   Lock-free                           Deadlock-free

Exercises

Exercise 3.1. Explain why quiescent consistency is compositional.


Exercise 3.2. Consider a memory object that encompasses two register components. We know that if both registers are quiescently consistent, then so is the memory. Does the converse hold? That is, if the memory is quiescently consistent, are the individual registers quiescently consistent? Outline a proof, or give a counterexample.


Exercise 3.3. Give an example of an execution that is quiescently consistent but not sequentially consistent, and another that is sequentially consistent but not quiescently consistent.


Exercise 3.4. For each of the histories shown in Figs. 3.11 and 3.12 of the book, are they quiescently consistent? Sequentially consistent? Linearizable? Justify your answer.


Exercise 3.5. If we drop condition L2 from the linearizability definition, is the resulting property the same as sequential consistency? Explain.


Exercise 3.6. Prove the “only if” part of Theorem 3.6.3.


Exercise 3.7. The AtomicInteger class in the java.util.concurrent.atomic package is a container for an integer value. One of its methods is:

boolean compareAndSet(int expect, int update)

This method compares the object’s current value with expect. If the values are equal, then it atomically replaces the object’s value with update and returns true. Otherwise, it leaves the object’s value unchanged, and returns false. This class also provides:

int get()

which returns the object’s value.
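As a quick illustration of the compareAndSet and get methods just described:

```java
import java.util.concurrent.atomic.AtomicInteger;

// compareAndSet succeeds only when the current value matches the
// expected value; otherwise the value is left unchanged.
public class CasDemo {
  public static void main(String[] args) {
    AtomicInteger a = new AtomicInteger(5);
    System.out.println(a.compareAndSet(5, 7)); // true: value was 5, now 7
    System.out.println(a.compareAndSet(5, 9)); // false: value is 7, not 5
    System.out.println(a.get());               // 7
  }
}
```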

Consider the FIFO queue implementation shown below. It stores its items in an array items, which, for simplicity, we assume has unbounded size. It has two AtomicInteger fields: head is the index of the next slot from which to remove an item, and tail is the index of the next slot in which to place an item. Give an example showing that this implementation is not linearizable.

class IQueue<T> {
  AtomicInteger head = new AtomicInteger(0);
  AtomicInteger tail = new AtomicInteger(0);
  T[] items = (T[]) new Object[Integer.MAX_VALUE];
 
  public void enq(T x) {
    int slot;
    do {
      slot = tail.get();
    } while (!tail.compareAndSet(slot, slot + 1));
    items[slot] = x;
  }
 
  public T deq() throws EmptyException {
    T value;
    int slot;
    do {
      slot = head.get();
      value = items[slot];
      if (value == null)
        throw new EmptyException();
    } while (!head.compareAndSet(slot, slot + 1));
    return value;
  }
}

Exercise 3.8. Consider the following rather unusual implementation of a method m: in every history, the -th time a thread calls m, the call returns after steps. Is this method wait-free?


Exercise 3.9. Consider a system with an object x shared by multiple threads. Determine if each of the following properties is equivalent to saying x is deadlock-free, starvation-free, obstruction-free, lock-free, wait-free, or none of these. Briefly justify your answers.

  1. For every infinite history of x, an infinite number of method calls complete.
  2. For every finite history of x, there is an infinite history extending it.
  3. For every infinite history of x, every thread takes an infinite number of steps.
  4. For every infinite history of x, every thread that takes an infinite number of steps in it completes an infinite number of method calls.
  5. For every finite history of x and every thread, there is an infinite history extending it in which only that thread takes steps, and it completes an infinite number of method calls.
  6. For every finite history of x, there is an infinite history extending it in which every thread completes an infinite number of method calls.

Exercise 3.10. This exercise examines the queue implementation shown below, whose enq() method does not have a single fixed linearization point in the code.

The queue stores its items in an items array, which, for simplicity, we assume is unbounded. The tail field is an AtomicInteger, initially zero.

The enq() method reserves a slot by incrementing tail, and then stores the item at that location. Note that these two steps are not atomic: there is an interval after tail has been incremented but before the item has been stored in the array.

The deq() method reads the value of tail, and then traverses the array in ascending order from slot zero to the tail. For each slot, it swaps null with the current contents, returning the first non-null item it finds. If all slots are null, the procedure is restarted.

public class HWQueue<T> {
  AtomicReference<T>[] items;
  AtomicInteger tail;
  static final int CAPACITY = Integer.MAX_VALUE;
 
  public HWQueue() {
    items = (AtomicReference<T>[]) Array.newInstance(
        AtomicReference.class, CAPACITY);
    for (int i = 0; i < items.length; i++) {
      items[i] = new AtomicReference<T>(null);
    }
    tail = new AtomicInteger(0);
  }
 
  public void enq(T x) {
    int i = tail.getAndIncrement();
    items[i].set(x);
  }
 
  public T deq() {
    while (true) {
      int range = tail.get();
      for (int i = 0; i < range; i++) {
        T value = items[i].getAndSet(null);
        if (value != null) {
          return value;
        }
      }
    }
  }
}
  • Give an execution showing that the linearization point for enq() cannot be the tail.getAndIncrement() step.
  • Give another execution showing that the linearization point for enq() cannot be the items[i].set(x) step.
  • Since these are the only two memory accesses in enq(), we must conclude that enq() has no single fixed linearization point. Does this mean enq() is not linearizable?

Exercise 3.11. This exercise examines a stack implementation whose push() method does not have a single fixed linearization point in the code.

The stack stores its items in an items array, which, for simplicity, we assume is unbounded. The top field is an AtomicInteger, initially zero.

The push() method reserves a slot by incrementing top, and then stores the item at that location. Note that these two steps are not atomic: there is an interval after top has been incremented but before the item has been stored in the array.

The pop() method reads the value of top and then traverses the array in descending order from the top to slot zero. For each slot, it swaps null with the current contents, returning the first non-null item it finds. If all slots are null, the method returns null, indicating an empty stack.

public class AGMStack<T> {
  AtomicReferenceArray<T> items;
  AtomicInteger top;
  static final int CAPACITY = Integer.MAX_VALUE;
 
  public AGMStack() {
    items = new AtomicReferenceArray<T>(CAPACITY);
    top = new AtomicInteger(0);
  }
 
  public void push(T x) {
    int i = top.getAndIncrement();
    items.set(i, x);
  }
 
  public T pop() {
    int range = top.get();
    for (int i = range - 1; i > -1; i--) {
      T value = items.getAndSet(i, null);
      if (value != null) {
        return value;
      }
    }
    return null;
  }
}
  • Give an execution showing that the linearization point for push() cannot be the top.getAndIncrement() step.
  • Give another execution showing that the linearization point for push() cannot be the items.set(i, x) step.
  • Since these are the only two memory accesses in push(), we conclude that push() has no single fixed linearization point. Does this mean push() is not linearizable?

Exercise 3.12. Prove that sequential consistency is nonblocking.
