If you have a whole sequence of items you want to add to an existing IImmutableQueue<T>, it’s simple enough to write a loop that adds them one at a time. We can make this even easier by writing an extension method that handles the looping on our behalf.

public static IImmutableQueue<T> EnqueueAll<T>(
    this IImmutableQueue<T> queue, 
    IEnumerable<T> items)
    => items.Aggregate(
        queue,
        (q, i) => q.Enqueue(i));

This straightforwardly handles the case where we already have the items to add in some kind of sequence, perhaps a List<T>.
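To see the extension method in action, here’s a minimal sketch. The IImmutableQueue<T> interface shown is my assumption about the shape of the interface used in this series, and SimpleQueue<T> is a hypothetical list-backed implementation used purely so the snippet compiles on its own; only EnqueueAll() itself comes from above.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// Assumed shape of the queue interface used throughout this series
public interface IImmutableQueue<T> : IEnumerable<T>
{
    bool IsEmpty { get; }
    T Head { get; }
    IImmutableQueue<T> Enqueue(T value);
    IImmutableQueue<T> Dequeue();
}

// Hypothetical list-backed queue, for demonstration only; it copies
// the whole list on every Enqueue, unlike a real immutable queue
public sealed class SimpleQueue<T> : IImmutableQueue<T>
{
    public static readonly SimpleQueue<T> Empty =
        new SimpleQueue<T>(new List<T>());

    private readonly List<T> _items;
    private SimpleQueue(List<T> items) => _items = items;

    public bool IsEmpty => _items.Count == 0;
    public T Head => _items[0];

    public IImmutableQueue<T> Enqueue(T value)
        => new SimpleQueue<T>(new List<T>(_items) { value });

    public IImmutableQueue<T> Dequeue()
        => new SimpleQueue<T>(_items.GetRange(1, _items.Count - 1));

    public IEnumerator<T> GetEnumerator() => _items.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

public static class ImmutableQueueExtensions
{
    // The extension method from above
    public static IImmutableQueue<T> EnqueueAll<T>(
        this IImmutableQueue<T> queue,
        IEnumerable<T> items)
        => items.Aggregate(queue, (q, i) => q.Enqueue(i));
}

public static class Demo
{
    public static void Main()
    {
        IImmutableQueue<int> queue = SimpleQueue<int>.Empty.Enqueue(1);
        var extended = queue.EnqueueAll(new List<int> { 2, 3, 4 });

        Console.WriteLine(string.Join(", ", extended)); // 1, 2, 3, 4
        Console.WriteLine(string.Join(", ", queue));    // 1 (original unchanged)
    }
}
```

Note that the original queue is untouched; EnqueueAll() returns a new queue, exactly as .Enqueue() does for a single item.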

But what happens if the items we want to add are already in an IImmutableQueue<T>?

The .EnqueueAll() method above will iterate through the entire second queue, enqueuing each item in turn onto the first, allocating a new queue instance at every step. When finished, all of those intermediate queues are discarded, left for the garbage collector to clean up.

This approach doesn’t sound particularly efficient. In fact, it sounds very much like the problem we previously identified with our use of the .Reverse() method for the _inbound stack.

What can we do instead?

The key insight here is to observe that we’re essentially making a decision about what happens later: after the code finishes consuming items from this queue, it should consume the items from that queue.

Instead of acting immediately, let’s turn the decision into a class that will do the right thing later on. We’ll call this class ConcatenatedQueue<T>.

To begin, we need a variation of our extension method to handle enqueuing an entire queue of items by returning an instance of our new class:

public static IImmutableQueue<T> EnqueueAll<T>(
    this IImmutableQueue<T> queue, 
    IImmutableQueue<T> items)
{
    if (queue.IsEmpty)
    {
        return items;
    }

    if (items.IsEmpty)
    {
        return queue;
    }

    return new ConcatenatedQueue<T>(queue, items);
}

We take the opportunity to do a bit of optimization: if either queue is empty, we simply return the other, avoiding both a new allocation and an unnecessary layer of indirection.

Here’s a partial declaration of ConcatenatedQueue<T>:

public class ConcatenatedQueue<T> : IImmutableQueue<T>
{
    // Queue from which we supply items
    private readonly IImmutableQueue<T> _outbound;

    // Queue that we'll use when _outbound is empty
    private readonly IImmutableQueue<T> _buffered;

    public ConcatenatedQueue(
        IImmutableQueue<T> first, 
        IImmutableQueue<T> last)
    {
        _outbound = first;
        _buffered = last;
    }
}

When we .Dequeue() an item from the front of the queue, we need to discard it from the front of our _outbound queue; if that’s now empty, we can implement the earlier decision by returning _buffered for continued use:

public IImmutableQueue<T> Dequeue()
{
    var queue = _outbound.Dequeue();
    if (queue.IsEmpty)
    {
        return _buffered;
    }

    return new ConcatenatedQueue<T>(queue, _buffered);
}

Enumerating through the concatenated queue is straightforward: we first work through the _outbound queue, then the _buffered queue.

public IEnumerator<T> GetEnumerator()
{
    foreach (var i in _outbound)
    {
        yield return i;
    }

    foreach (var i in _buffered)
    {
        yield return i;
    }
}

The remainder of the implementation is straightforward.

public T Head => _outbound.Head;

// Safe to hard-code: a ConcatenatedQueue<T> is only ever created
// from two non-empty queues, so it can never itself be empty
public bool IsEmpty => false;

public IImmutableQueue<T> Enqueue(T value)
    => new ConcatenatedQueue<T>(
        _outbound, 
        _buffered.Enqueue(value));
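Pulling the pieces together, here’s a self-contained sketch that exercises ConcatenatedQueue<T>. The IImmutableQueue<T> interface is my assumption about the interface used in this series, and SimpleQueue<T> is a hypothetical list-backed stand-in included only so the snippet compiles on its own; ConcatenatedQueue<T> and EnqueueAll() are the code developed above.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Assumed shape of the queue interface used throughout this series
public interface IImmutableQueue<T> : IEnumerable<T>
{
    bool IsEmpty { get; }
    T Head { get; }
    IImmutableQueue<T> Enqueue(T value);
    IImmutableQueue<T> Dequeue();
}

// Hypothetical list-backed queue, standing in for the series' real
// implementation so this snippet compiles on its own
public sealed class SimpleQueue<T> : IImmutableQueue<T>
{
    public static readonly SimpleQueue<T> Empty =
        new SimpleQueue<T>(new List<T>());

    private readonly List<T> _items;
    private SimpleQueue(List<T> items) => _items = items;

    public bool IsEmpty => _items.Count == 0;
    public T Head => _items[0];

    public IImmutableQueue<T> Enqueue(T value)
        => new SimpleQueue<T>(new List<T>(_items) { value });

    public IImmutableQueue<T> Dequeue()
        => new SimpleQueue<T>(_items.GetRange(1, _items.Count - 1));

    public IEnumerator<T> GetEnumerator() => _items.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

// The ConcatenatedQueue<T> developed above, gathered into one class
public sealed class ConcatenatedQueue<T> : IImmutableQueue<T>
{
    private readonly IImmutableQueue<T> _outbound;
    private readonly IImmutableQueue<T> _buffered;

    public ConcatenatedQueue(
        IImmutableQueue<T> first,
        IImmutableQueue<T> last)
        => (_outbound, _buffered) = (first, last);

    // Only ever constructed from two non-empty queues
    public bool IsEmpty => false;
    public T Head => _outbound.Head;

    public IImmutableQueue<T> Enqueue(T value)
        => new ConcatenatedQueue<T>(_outbound, _buffered.Enqueue(value));

    public IImmutableQueue<T> Dequeue()
    {
        var queue = _outbound.Dequeue();
        return queue.IsEmpty
            ? _buffered
            : new ConcatenatedQueue<T>(queue, _buffered);
    }

    public IEnumerator<T> GetEnumerator()
    {
        foreach (var i in _outbound) yield return i;
        foreach (var i in _buffered) yield return i;
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

public static class ImmutableQueueExtensions
{
    public static IImmutableQueue<T> EnqueueAll<T>(
        this IImmutableQueue<T> queue,
        IImmutableQueue<T> items)
    {
        if (queue.IsEmpty) return items;
        if (items.IsEmpty) return queue;
        return new ConcatenatedQueue<T>(queue, items);
    }
}

public static class ConcatenationDemo
{
    public static void Main()
    {
        var first = SimpleQueue<int>.Empty.Enqueue(1).Enqueue(2);
        var second = SimpleQueue<int>.Empty.Enqueue(3).Enqueue(4);

        // O(1): wraps the two queues without copying any items
        var combined = first.EnqueueAll(second);
        Console.WriteLine(string.Join(", ", combined)); // 1, 2, 3, 4

        // Once the first queue is drained, the original second
        // queue is handed back unchanged
        var remainder = combined.Dequeue().Dequeue();
        Console.WriteLine(ReferenceEquals(remainder, second)); // True
    }
}
```

The ReferenceEquals() check at the end demonstrates the payoff: once the _outbound queue is exhausted, the wrapper disappears and the buffered queue is returned as-is, with no copying having happened at any point.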

It turns out that this idea - of taking a decision about future behaviour and wrapping it into a class to be actioned later - is a particularly powerful one. Much of the power of the LINQ library found in the .NET Framework is based on this idea of deferred execution.
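As a small illustration of that deferred execution, here’s a sketch showing that LINQ’s Where() doesn’t filter anything when it’s called; it returns an object recording the decision to filter later:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeferredDemo
{
    public static void Main()
    {
        var source = new List<int> { 1, 2, 3 };

        // No filtering happens here; 'evens' just captures
        // the decision about what to do on enumeration
        var evens = source.Where(i => i % 2 == 0);

        // Mutating the source before enumeration changes the result,
        // showing the filtering hadn't yet taken place
        source.Add(4);

        Console.WriteLine(string.Join(", ", evens)); // 2, 4
    }
}
```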
