Linked lists: the role of locking

Prev: Monitors and blocking synchronization Next: Queues, memory management, and the ABA problem

Exercises

Exercise 9.1. Describe how to modify each of the linked list algorithms if object hash codes are not guaranteed to be unique.


Exercise 9.2. Suppose every method call of CoarseList is linearized at the instant the lock is acquired. Explain why we cannot use the abstraction map described in Section 9.3. Give an alternative abstraction map that works for these linearization points.
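For orientation, a minimal sketch of a coarse-grained list along the lines of CoarseList is given below. The single lock is acquired at the start of every method, which is the candidate linearization point this exercise considers; class and field names, sentinel setup, and the use of integer keys derived from hash codes are assumptions following the chapter's conventions, not the book's exact code.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of a coarse-grained sorted list-based set.
// One lock protects the whole list; every method acquires it first.
public class CoarseList<T> {
    private static class Node {
        final int key;
        Node next;
        Node(int key) { this.key = key; }
    }

    private final Node head;
    private final ReentrantLock lock = new ReentrantLock();

    public CoarseList() {
        head = new Node(Integer.MIN_VALUE);      // sentinel head
        head.next = new Node(Integer.MAX_VALUE); // sentinel tail
    }

    public boolean add(T item) {
        int key = item.hashCode();
        lock.lock();                 // the linearization point considered here
        try {
            Node pred = head, curr = head.next;
            while (curr.key < key) { pred = curr; curr = curr.next; }
            if (curr.key == key) return false;   // key already present
            Node node = new Node(key);
            node.next = curr;
            pred.next = node;
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```

Note that at the instant the lock is acquired, add() has not yet read any node, so an abstraction map must account for the gap between the linearization point and the moment the list is actually modified.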


Exercise 9.3. Explain why the fine-grained locking algorithm does not deadlock.


Exercise 9.4. Explain why the fine-grained list’s add() method is linearizable.
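As a reference point for Exercises 9.3 and 9.4, here is a sketch of the fine-grained list's add() using hand-over-hand (lock coupling) locking; locks are always acquired in list order, pred before curr. Names and sentinel setup are illustrative assumptions, not the book's exact listing.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of fine-grained add(): each node carries its own lock, and the
// traversal holds at most two locks at a time, acquired in list order.
public class FineList<T> {
    private static class Node {
        final int key;
        Node next;
        final ReentrantLock mutex = new ReentrantLock();
        Node(int key) { this.key = key; }
    }

    private final Node head;

    public FineList() {
        head = new Node(Integer.MIN_VALUE);      // sentinel head
        head.next = new Node(Integer.MAX_VALUE); // sentinel tail
    }

    public boolean add(T item) {
        int key = item.hashCode();
        head.mutex.lock();
        Node pred = head;
        try {
            Node curr = pred.next;
            curr.mutex.lock();
            try {
                while (curr.key < key) {
                    pred.mutex.unlock();   // release the trailing lock...
                    pred = curr;           // ...and advance hand-over-hand
                    curr = curr.next;
                    curr.mutex.lock();
                }
                if (curr.key == key) return false;
                Node node = new Node(key);
                node.next = curr;
                pred.next = node;          // visible while both locks are held
                return true;
            } finally {
                curr.mutex.unlock();
            }
        } finally {
            pred.mutex.unlock();
        }
    }
}
```

Because every thread acquires node locks in the same (list) order, no cycle of waiting threads can form, which is the heart of the deadlock-freedom argument in Exercise 9.3.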


Exercise 9.5. Explain why the optimistic and lazy locking algorithms are not subject to deadlock.


Exercise 9.6. Show an execution of the optimistic algorithm in which a thread is forever attempting to delete a node.

Hint: since we assume that all the individual node locks are starvation-free, the livelock is not on any individual lock, and a bad execution must repeatedly add and remove nodes from the list.


Exercise 9.7. Provide the code for the contains() method missing from the fine-grained algorithm. Explain why your implementation is correct.


Exercise 9.8. Is the optimistic list implementation still correct if we switch the order in which add() locks the pred and curr nodes?


Exercise 9.9. Show that in the optimistic list algorithm, if pred_A is not null, then tail is reachable from pred_A, even if pred_A itself is not reachable.


Exercise 9.10. Show that in the optimistic algorithm, the add() method needs to lock only pred.


Exercise 9.11. Augmenting the list with a version number, design a coarse-grained, optimistic-locking, linked-list-based set algorithm that does not traverse the list while holding the lock.


Exercise 9.12. Design a fine-grained optimistic locking algorithm that uses a version number to avoid traversing the list while holding any lock if the list does not change during the first traversal of the list. What are the advantages and disadvantages of this list compared with the coarse-grained list from the previous exercise?


Exercise 9.13. For each of the following modifications of the sorted linked list algorithms, explain why the respective algorithm is still linearizable, or give a counterexample showing it is not.

  1. In the optimistic algorithm, the contains() method locks two nodes before deciding whether a key is present. Suppose, instead, it locks no nodes, returning true if it observes the value, and false otherwise.
  2. In the lazy algorithm, the contains() method executes without inspecting the locks, but it inspects the mark bit; it returns false if a node is marked for removal. Suppose, instead, the contains() does not inspect the mark bit of the nodes, and returns true even for nodes that may be marked.
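For the lazy algorithm's contains(), the behavior the second modification removes can be sketched as follows. The traversal takes no locks and consults only the mark bit; the add() and remove() shown here are deliberately simplified (fully synchronized) stand-ins so the sketch is self-contained; the real lazy versions lock pred and curr and validate them.

```java
// Sketch of the lazy list's wait-free contains(): traverse without
// locking and report present only if the matching node is unmarked.
public class LazyList<T> {
    private static class Node {
        final int key;
        volatile Node next;
        volatile boolean marked;   // true once logically removed
        Node(int key) { this.key = key; }
    }

    private final Node head;

    public LazyList() {
        head = new Node(Integer.MIN_VALUE);      // sentinel head
        head.next = new Node(Integer.MAX_VALUE); // sentinel tail
    }

    // Simplified stand-in for demonstration; the real lazy add()
    // locks pred and curr and validates before linking.
    public synchronized boolean add(T item) {
        int key = item.hashCode();
        Node pred = head, curr = head.next;
        while (curr.key < key) { pred = curr; curr = curr.next; }
        if (curr.key == key) return false;
        Node node = new Node(key);
        node.next = curr;
        pred.next = node;
        return true;
    }

    // Simplified stand-in: logical removal (marking) precedes unlinking.
    public synchronized boolean remove(T item) {
        int key = item.hashCode();
        Node pred = head, curr = head.next;
        while (curr.key < key) { pred = curr; curr = curr.next; }
        if (curr.key != key) return false;
        curr.marked = true;      // logical removal: contains() now says false
        pred.next = curr.next;   // physical removal
        return true;
    }

    public boolean contains(T item) {
        int key = item.hashCode();
        Node curr = head;
        while (curr.key < key) curr = curr.next;
        return curr.key == key && !curr.marked;  // the check the exercise drops
    }
}
```

The final `!curr.marked` test is exactly what the second modification proposes to remove; the exercise asks what that does to linearizability.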

Exercise 9.14. Would the lazy algorithm still work if we marked a node as removed simply by setting its next field to null? Why or why not? What about the lock-free algorithm?


Exercise 9.15. In the lazy algorithm, can pred_A ever be unreachable? Justify your answer.


Exercise 9.16. Your new employee claims that the lazy list’s validation method, Fig. 9.16, can be simplified by dropping the check that pred.next is equal to curr. After all, the code always sets pred to the old value of curr, and before pred.next can be changed, the new value of curr must be marked, causing the validation to fail. Explain the error in this reasoning.


Exercise 9.17. Can you modify the lazy algorithm’s remove() so it locks only one node?


Exercise 9.18. In the lock-free algorithm, argue the benefits and drawbacks of having the contains() method help in the cleanup of logically removed nodes.


Exercise 9.19. In the lock-free algorithm, if an add() method call fails because pred does not point to curr, but pred is not marked, do we need to traverse the list again from head in order to attempt to complete the call?


Exercise 9.20. Would the contains() method of the lazy and lock-free algorithms still be correct if logically removed entries were not guaranteed to be sorted?


Exercise 9.21. The add() method of the lock-free algorithm never finds a marked node with the same key. Can one modify the algorithm so that, if a marked node with the same key already exists in the list, add() simply installs its new object into that node, saving the need to insert a new node?


Exercise 9.22. Explain why the following cannot happen in the LockFreeList algorithm: a node with item x is logically but not yet physically removed by some thread; then the same item x is added to the list by another thread; and finally a contains() call by a third thread traverses the list, finds the logically removed node, and returns false, even though the linearization order of the remove() and add() implies that x is in the set.


Exercise 9.23. Consider the following two modifications for the sorted linked list algorithms:

  1. In the optimistic algorithm, the contains() method locks two nodes before deciding whether a key is present. Suppose, instead, it locks no nodes, returning true if it observes the value, and false otherwise.
  2. In the lazy algorithm, the contains() method executes without inspecting the locks, but it inspects the mark bit; it returns false if a node is marked for removal. Suppose, instead, the contains() does not inspect the mark bit of the nodes, and returns true even for nodes that may be marked.

For both of the modifications, explain why the respective algorithm is still linearizable, or give a counterexample showing it is not.


Exercise 9.24. In the lock-free algorithm, we attempt to logically remove the node curr by calling curr.next.compareAndSet(succ, succ, false, true) at line 55 of Fig. 9.25. For each of the following implementations in which this call is replaced with a different method call, either explain why it is correct or describe an execution in which it fails.

  1. We instead call curr.next.compareAndSetMark(false, true), where compareAndSetMark() is a fictional method that atomically performs a normal compare-and-swap operation on just the mark bit.
  2. We instead call curr.next.attemptMark(succ, true), where attemptMark() is a real method of the AtomicMarkableReference<T> class that atomically changes the mark bit to the specified value if the reference has the expected value, but is allowed to spuriously fail if there are concurrent modifications.
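The difference between the original call and the second variant can be seen by exercising Java's real AtomicMarkableReference class, which backs the lock-free list's next fields. The key point: compareAndSet() checks both the expected reference and the expected mark, while attemptMark() checks only the reference and is permitted to fail spuriously.

```java
import java.util.concurrent.atomic.AtomicMarkableReference;

// Demonstrates the two primitives the exercise compares.
public class MarkDemo {
    public static void main(String[] args) {
        String succ = "succ";
        AtomicMarkableReference<String> next =
            new AtomicMarkableReference<>(succ, false);

        // compareAndSet succeeds only if reference AND mark both match:
        boolean ok = next.compareAndSet(succ, succ, false, true);
        System.out.println(ok + " " + next.isMarked());  // true true

        // Once marked, the same call fails (expected mark is now wrong):
        boolean again = next.compareAndSet(succ, succ, false, true);
        System.out.println(again);                       // false

        // attemptMark(succ, false) checks only the reference, and the
        // spec allows it to fail spuriously, so we do not assert its result.
        next.attemptMark(succ, false);
    }
}
```

Note that the first variant's compareAndSetMark() is fictional, as the exercise says: no method of AtomicMarkableReference operates on the mark bit while ignoring the reference entirely.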
